Mar 12 04:07:17.048765 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Mar 11 23:23:33 -00 2026
Mar 12 04:07:17.048819 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=0e4243d51ac00bffbb09a606c7378a821ca08f30dbebc6b82c4452fcc120d7bc
Mar 12 04:07:17.048834 kernel: BIOS-provided physical RAM map:
Mar 12 04:07:17.048850 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Mar 12 04:07:17.048861 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Mar 12 04:07:17.048871 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Mar 12 04:07:17.048895 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable
Mar 12 04:07:17.048906 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved
Mar 12 04:07:17.048916 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Mar 12 04:07:17.048927 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Mar 12 04:07:17.048937 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Mar 12 04:07:17.048947 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Mar 12 04:07:17.050131 kernel: NX (Execute Disable) protection: active
Mar 12 04:07:17.050150 kernel: APIC: Static calls initialized
Mar 12 04:07:17.050163 kernel: SMBIOS 2.8 present.
Mar 12 04:07:17.050176 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014
Mar 12 04:07:17.050187 kernel: Hypervisor detected: KVM
Mar 12 04:07:17.050206 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 12 04:07:17.050218 kernel: kvm-clock: using sched offset of 4414219077 cycles
Mar 12 04:07:17.050230 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 12 04:07:17.050242 kernel: tsc: Detected 2499.998 MHz processor
Mar 12 04:07:17.050253 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 12 04:07:17.050265 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 12 04:07:17.050277 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000
Mar 12 04:07:17.050288 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Mar 12 04:07:17.050300 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 12 04:07:17.050317 kernel: Using GB pages for direct mapping
Mar 12 04:07:17.050328 kernel: ACPI: Early table checksum verification disabled
Mar 12 04:07:17.050340 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Mar 12 04:07:17.050351 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 12 04:07:17.050362 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 12 04:07:17.050374 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 12 04:07:17.050385 kernel: ACPI: FACS 0x000000007FFDFD40 000040
Mar 12 04:07:17.050396 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 12 04:07:17.050408 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 12 04:07:17.050424 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 12 04:07:17.050436 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 12 04:07:17.050447 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480]
Mar 12 04:07:17.050459 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c]
Mar 12 04:07:17.050471 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f]
Mar 12 04:07:17.050489 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570]
Mar 12 04:07:17.050501 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740]
Mar 12 04:07:17.050518 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c]
Mar 12 04:07:17.050530 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4]
Mar 12 04:07:17.050542 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Mar 12 04:07:17.050554 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Mar 12 04:07:17.050566 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0
Mar 12 04:07:17.050577 kernel: SRAT: PXM 0 -> APIC 0x03 -> Node 0
Mar 12 04:07:17.050589 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0
Mar 12 04:07:17.050601 kernel: SRAT: PXM 0 -> APIC 0x05 -> Node 0
Mar 12 04:07:17.050618 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0
Mar 12 04:07:17.050630 kernel: SRAT: PXM 0 -> APIC 0x07 -> Node 0
Mar 12 04:07:17.050642 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0
Mar 12 04:07:17.050653 kernel: SRAT: PXM 0 -> APIC 0x09 -> Node 0
Mar 12 04:07:17.050665 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0
Mar 12 04:07:17.050677 kernel: SRAT: PXM 0 -> APIC 0x0b -> Node 0
Mar 12 04:07:17.050688 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0
Mar 12 04:07:17.050700 kernel: SRAT: PXM 0 -> APIC 0x0d -> Node 0
Mar 12 04:07:17.050712 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0
Mar 12 04:07:17.050729 kernel: SRAT: PXM 0 -> APIC 0x0f -> Node 0
Mar 12 04:07:17.050741 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Mar 12 04:07:17.050753 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Mar 12 04:07:17.050765 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug
Mar 12 04:07:17.050778 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00000000-0x7ffdbfff]
Mar 12 04:07:17.050790 kernel: NODE_DATA(0) allocated [mem 0x7ffd6000-0x7ffdbfff]
Mar 12 04:07:17.050802 kernel: Zone ranges:
Mar 12 04:07:17.050814 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 12 04:07:17.050826 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff]
Mar 12 04:07:17.050843 kernel: Normal empty
Mar 12 04:07:17.050855 kernel: Movable zone start for each node
Mar 12 04:07:17.050867 kernel: Early memory node ranges
Mar 12 04:07:17.050893 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Mar 12 04:07:17.050905 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff]
Mar 12 04:07:17.050917 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff]
Mar 12 04:07:17.050929 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 12 04:07:17.050941 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Mar 12 04:07:17.050953 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges
Mar 12 04:07:17.050981 kernel: ACPI: PM-Timer IO Port: 0x608
Mar 12 04:07:17.051001 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 12 04:07:17.051013 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Mar 12 04:07:17.051025 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Mar 12 04:07:17.051037 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 12 04:07:17.051049 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 12 04:07:17.051061 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 12 04:07:17.051073 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 12 04:07:17.051085 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 12 04:07:17.051096 kernel: TSC deadline timer available
Mar 12 04:07:17.051114 kernel: smpboot: Allowing 16 CPUs, 14 hotplug CPUs
Mar 12 04:07:17.051126 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Mar 12 04:07:17.051142 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Mar 12 04:07:17.051154 kernel: Booting paravirtualized kernel on KVM
Mar 12 04:07:17.051166 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 12 04:07:17.051178 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1
Mar 12 04:07:17.051190 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u262144
Mar 12 04:07:17.051202 kernel: pcpu-alloc: s196328 r8192 d28952 u262144 alloc=1*2097152
Mar 12 04:07:17.051214 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Mar 12 04:07:17.051231 kernel: kvm-guest: PV spinlocks enabled
Mar 12 04:07:17.051243 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Mar 12 04:07:17.051257 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=0e4243d51ac00bffbb09a606c7378a821ca08f30dbebc6b82c4452fcc120d7bc
Mar 12 04:07:17.051269 kernel: random: crng init done
Mar 12 04:07:17.051281 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 12 04:07:17.051293 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Mar 12 04:07:17.051305 kernel: Fallback order for Node 0: 0
Mar 12 04:07:17.051317 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515804
Mar 12 04:07:17.051334 kernel: Policy zone: DMA32
Mar 12 04:07:17.051346 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 12 04:07:17.051358 kernel: software IO TLB: area num 16.
Mar 12 04:07:17.051370 kernel: Memory: 1901592K/2096616K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 194764K reserved, 0K cma-reserved)
Mar 12 04:07:17.051382 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Mar 12 04:07:17.051394 kernel: Kernel/User page tables isolation: enabled
Mar 12 04:07:17.051406 kernel: ftrace: allocating 37996 entries in 149 pages
Mar 12 04:07:17.051418 kernel: ftrace: allocated 149 pages with 4 groups
Mar 12 04:07:17.051430 kernel: Dynamic Preempt: voluntary
Mar 12 04:07:17.051447 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 12 04:07:17.051459 kernel: rcu: RCU event tracing is enabled.
Mar 12 04:07:17.051471 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Mar 12 04:07:17.051483 kernel: Trampoline variant of Tasks RCU enabled.
Mar 12 04:07:17.051496 kernel: Rude variant of Tasks RCU enabled.
Mar 12 04:07:17.051520 kernel: Tracing variant of Tasks RCU enabled.
Mar 12 04:07:17.051538 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 12 04:07:17.051551 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Mar 12 04:07:17.051563 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16
Mar 12 04:07:17.051576 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 12 04:07:17.051588 kernel: Console: colour VGA+ 80x25
Mar 12 04:07:17.051601 kernel: printk: console [tty0] enabled
Mar 12 04:07:17.051619 kernel: printk: console [ttyS0] enabled
Mar 12 04:07:17.051632 kernel: ACPI: Core revision 20230628
Mar 12 04:07:17.051644 kernel: APIC: Switch to symmetric I/O mode setup
Mar 12 04:07:17.051657 kernel: x2apic enabled
Mar 12 04:07:17.051669 kernel: APIC: Switched APIC routing to: physical x2apic
Mar 12 04:07:17.051687 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Mar 12 04:07:17.051700 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998)
Mar 12 04:07:17.051713 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Mar 12 04:07:17.051725 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Mar 12 04:07:17.051738 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Mar 12 04:07:17.051750 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 12 04:07:17.051762 kernel: Spectre V2 : Mitigation: Retpolines
Mar 12 04:07:17.051775 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Mar 12 04:07:17.051787 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Mar 12 04:07:17.051800 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Mar 12 04:07:17.051818 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Mar 12 04:07:17.051830 kernel: MDS: Mitigation: Clear CPU buffers
Mar 12 04:07:17.051842 kernel: MMIO Stale Data: Unknown: No mitigations
Mar 12 04:07:17.051855 kernel: SRBDS: Unknown: Dependent on hypervisor status
Mar 12 04:07:17.051867 kernel: active return thunk: its_return_thunk
Mar 12 04:07:17.051893 kernel: ITS: Mitigation: Aligned branch/return thunks
Mar 12 04:07:17.051907 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 12 04:07:17.051919 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 12 04:07:17.051932 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 12 04:07:17.051944 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 12 04:07:17.054754 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Mar 12 04:07:17.054788 kernel: Freeing SMP alternatives memory: 32K
Mar 12 04:07:17.054800 kernel: pid_max: default: 32768 minimum: 301
Mar 12 04:07:17.054813 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 12 04:07:17.054826 kernel: landlock: Up and running.
Mar 12 04:07:17.054838 kernel: SELinux: Initializing.
Mar 12 04:07:17.054851 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Mar 12 04:07:17.054863 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Mar 12 04:07:17.054889 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9)
Mar 12 04:07:17.054904 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Mar 12 04:07:17.054917 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Mar 12 04:07:17.054938 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Mar 12 04:07:17.054951 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only.
Mar 12 04:07:17.054984 kernel: signal: max sigframe size: 1776
Mar 12 04:07:17.054999 kernel: rcu: Hierarchical SRCU implementation.
Mar 12 04:07:17.055012 kernel: rcu: Max phase no-delay instances is 400.
Mar 12 04:07:17.055025 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Mar 12 04:07:17.055038 kernel: smp: Bringing up secondary CPUs ...
Mar 12 04:07:17.055051 kernel: smpboot: x86: Booting SMP configuration:
Mar 12 04:07:17.055063 kernel: .... node #0, CPUs: #1
Mar 12 04:07:17.055083 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1
Mar 12 04:07:17.055096 kernel: smp: Brought up 1 node, 2 CPUs
Mar 12 04:07:17.055109 kernel: smpboot: Max logical packages: 16
Mar 12 04:07:17.055121 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS)
Mar 12 04:07:17.055134 kernel: devtmpfs: initialized
Mar 12 04:07:17.055147 kernel: x86/mm: Memory block size: 128MB
Mar 12 04:07:17.055160 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 12 04:07:17.055173 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Mar 12 04:07:17.055185 kernel: pinctrl core: initialized pinctrl subsystem
Mar 12 04:07:17.055203 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 12 04:07:17.055216 kernel: audit: initializing netlink subsys (disabled)
Mar 12 04:07:17.055229 kernel: audit: type=2000 audit(1773288435.420:1): state=initialized audit_enabled=0 res=1
Mar 12 04:07:17.055242 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 12 04:07:17.055254 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 12 04:07:17.055267 kernel: cpuidle: using governor menu
Mar 12 04:07:17.055280 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 12 04:07:17.055292 kernel: dca service started, version 1.12.1
Mar 12 04:07:17.055305 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Mar 12 04:07:17.055323 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Mar 12 04:07:17.055336 kernel: PCI: Using configuration type 1 for base access
Mar 12 04:07:17.055349 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 12 04:07:17.055362 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 12 04:07:17.055374 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Mar 12 04:07:17.055387 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 12 04:07:17.055400 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Mar 12 04:07:17.055413 kernel: ACPI: Added _OSI(Module Device)
Mar 12 04:07:17.055426 kernel: ACPI: Added _OSI(Processor Device)
Mar 12 04:07:17.055444 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 12 04:07:17.055457 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 12 04:07:17.055469 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Mar 12 04:07:17.055482 kernel: ACPI: Interpreter enabled
Mar 12 04:07:17.055494 kernel: ACPI: PM: (supports S0 S5)
Mar 12 04:07:17.055507 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 12 04:07:17.055520 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 12 04:07:17.055532 kernel: PCI: Using E820 reservations for host bridge windows
Mar 12 04:07:17.055545 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Mar 12 04:07:17.055563 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 12 04:07:17.055872 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 12 04:07:17.056109 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Mar 12 04:07:17.056294 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Mar 12 04:07:17.056313 kernel: PCI host bridge to bus 0000:00
Mar 12 04:07:17.056520 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 12 04:07:17.056702 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 12 04:07:17.056898 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 12 04:07:17.060152 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Mar 12 04:07:17.060326 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Mar 12 04:07:17.060490 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window]
Mar 12 04:07:17.060653 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 12 04:07:17.060869 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Mar 12 04:07:17.062225 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000
Mar 12 04:07:17.062414 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfa000000-0xfbffffff pref]
Mar 12 04:07:17.062595 kernel: pci 0000:00:01.0: reg 0x14: [mem 0xfea50000-0xfea50fff]
Mar 12 04:07:17.062774 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea40000-0xfea4ffff pref]
Mar 12 04:07:17.065116 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Mar 12 04:07:17.065334 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Mar 12 04:07:17.065522 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea51000-0xfea51fff]
Mar 12 04:07:17.065732 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Mar 12 04:07:17.065930 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea52000-0xfea52fff]
Mar 12 04:07:17.066185 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Mar 12 04:07:17.066386 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea53000-0xfea53fff]
Mar 12 04:07:17.066616 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Mar 12 04:07:17.066813 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea54000-0xfea54fff]
Mar 12 04:07:17.067426 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Mar 12 04:07:17.067617 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea55000-0xfea55fff]
Mar 12 04:07:17.067808 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Mar 12 04:07:17.068026 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea56000-0xfea56fff]
Mar 12 04:07:17.068219 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Mar 12 04:07:17.068402 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea57000-0xfea57fff]
Mar 12 04:07:17.068606 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Mar 12 04:07:17.068787 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea58000-0xfea58fff]
Mar 12 04:07:17.071053 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Mar 12 04:07:17.071250 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0c0-0xc0df]
Mar 12 04:07:17.071432 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfea59000-0xfea59fff]
Mar 12 04:07:17.071610 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref]
Mar 12 04:07:17.071790 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfea00000-0xfea3ffff pref]
Mar 12 04:07:17.072027 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Mar 12 04:07:17.072212 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Mar 12 04:07:17.072390 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfea5a000-0xfea5afff]
Mar 12 04:07:17.072567 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfd004000-0xfd007fff 64bit pref]
Mar 12 04:07:17.072757 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Mar 12 04:07:17.072953 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Mar 12 04:07:17.075245 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Mar 12 04:07:17.075446 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0e0-0xc0ff]
Mar 12 04:07:17.075630 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea5b000-0xfea5bfff]
Mar 12 04:07:17.075832 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Mar 12 04:07:17.078079 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Mar 12 04:07:17.078293 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400
Mar 12 04:07:17.078486 kernel: pci 0000:01:00.0: reg 0x10: [mem 0xfda00000-0xfda000ff 64bit]
Mar 12 04:07:17.078684 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Mar 12 04:07:17.078865 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Mar 12 04:07:17.079132 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Mar 12 04:07:17.079327 kernel: pci_bus 0000:02: extended config space not accessible
Mar 12 04:07:17.079534 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000
Mar 12 04:07:17.079726 kernel: pci 0000:02:01.0: reg 0x10: [mem 0xfd800000-0xfd80000f]
Mar 12 04:07:17.079943 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Mar 12 04:07:17.082184 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Mar 12 04:07:17.082396 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330
Mar 12 04:07:17.082588 kernel: pci 0000:03:00.0: reg 0x10: [mem 0xfe800000-0xfe803fff 64bit]
Mar 12 04:07:17.082775 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Mar 12 04:07:17.083012 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Mar 12 04:07:17.083197 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Mar 12 04:07:17.083396 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00
Mar 12 04:07:17.083592 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref]
Mar 12 04:07:17.083774 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Mar 12 04:07:17.085996 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Mar 12 04:07:17.086205 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Mar 12 04:07:17.086405 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Mar 12 04:07:17.086594 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Mar 12 04:07:17.086780 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Mar 12 04:07:17.087045 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Mar 12 04:07:17.087230 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Mar 12 04:07:17.087408 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Mar 12 04:07:17.087592 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Mar 12 04:07:17.087771 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Mar 12 04:07:17.087984 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Mar 12 04:07:17.090082 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Mar 12 04:07:17.090273 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Mar 12 04:07:17.090465 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Mar 12 04:07:17.090650 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Mar 12 04:07:17.090828 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Mar 12 04:07:17.092079 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Mar 12 04:07:17.092102 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 12 04:07:17.092117 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 12 04:07:17.092130 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 12 04:07:17.092143 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 12 04:07:17.092156 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Mar 12 04:07:17.092177 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Mar 12 04:07:17.092190 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Mar 12 04:07:17.092203 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Mar 12 04:07:17.092216 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Mar 12 04:07:17.092228 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Mar 12 04:07:17.092241 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Mar 12 04:07:17.092254 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Mar 12 04:07:17.092267 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Mar 12 04:07:17.092280 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Mar 12 04:07:17.092298 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Mar 12 04:07:17.092311 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Mar 12 04:07:17.092324 kernel: iommu: Default domain type: Translated
Mar 12 04:07:17.092337 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 12 04:07:17.092349 kernel: PCI: Using ACPI for IRQ routing
Mar 12 04:07:17.092362 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 12 04:07:17.092375 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Mar 12 04:07:17.092387 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff]
Mar 12 04:07:17.092575 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Mar 12 04:07:17.092756 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Mar 12 04:07:17.092953 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Mar 12 04:07:17.093997 kernel: vgaarb: loaded
Mar 12 04:07:17.094013 kernel: clocksource: Switched to clocksource kvm-clock
Mar 12 04:07:17.094026 kernel: VFS: Disk quotas dquot_6.6.0
Mar 12 04:07:17.094039 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 12 04:07:17.094052 kernel: pnp: PnP ACPI init
Mar 12 04:07:17.094258 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Mar 12 04:07:17.094288 kernel: pnp: PnP ACPI: found 5 devices
Mar 12 04:07:17.094301 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 12 04:07:17.094315 kernel: NET: Registered PF_INET protocol family
Mar 12 04:07:17.094328 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 12 04:07:17.094341 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Mar 12 04:07:17.094355 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 12 04:07:17.094367 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Mar 12 04:07:17.094380 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Mar 12 04:07:17.094399 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Mar 12 04:07:17.094412 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Mar 12 04:07:17.094425 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Mar 12 04:07:17.094438 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 12 04:07:17.094451 kernel: NET: Registered PF_XDP protocol family
Mar 12 04:07:17.094633 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000
Mar 12 04:07:17.094815 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Mar 12 04:07:17.096095 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Mar 12 04:07:17.096295 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Mar 12 04:07:17.096477 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Mar 12 04:07:17.096658 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Mar 12 04:07:17.096838 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Mar 12 04:07:17.101499 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Mar 12 04:07:17.101685 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Mar 12 04:07:17.101887 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Mar 12 04:07:17.102114 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Mar 12 04:07:17.102294 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Mar 12 04:07:17.102473 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Mar 12 04:07:17.102652 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Mar 12 04:07:17.102829 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Mar 12 04:07:17.103037 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Mar 12 04:07:17.103227 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Mar 12 04:07:17.103463 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Mar 12 04:07:17.103661 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Mar 12 04:07:17.103857 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Mar 12 04:07:17.104096 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Mar 12 04:07:17.104297 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Mar 12 04:07:17.104494 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Mar 12 04:07:17.104692 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Mar 12 04:07:17.104906 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Mar 12 04:07:17.105118 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Mar 12 04:07:17.105297 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Mar 12 04:07:17.105485 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Mar 12 04:07:17.105698 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Mar 12 04:07:17.105914 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Mar 12 04:07:17.107185 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Mar 12 04:07:17.107378 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Mar 12 04:07:17.107556 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Mar 12 04:07:17.107734 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Mar 12 04:07:17.107932 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Mar 12 04:07:17.109346 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Mar 12 04:07:17.109534 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Mar 12 04:07:17.109716 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Mar 12 04:07:17.109913 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Mar 12 04:07:17.112431 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Mar 12 04:07:17.112626 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Mar 12 04:07:17.112805 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Mar 12 04:07:17.113025 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Mar 12 04:07:17.113207 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Mar 12 04:07:17.113384 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Mar 12 04:07:17.113571 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Mar 12 04:07:17.113751 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Mar 12 04:07:17.113946 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Mar 12 04:07:17.115164 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Mar 12 04:07:17.115349 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Mar 12 04:07:17.115518 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 12 04:07:17.115684 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 12 04:07:17.115848 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 12 04:07:17.116049 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Mar 12 04:07:17.116221 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Mar 12 04:07:17.116381 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window]
Mar 12 04:07:17.116563 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Mar 12 04:07:17.116733 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff]
Mar 12 04:07:17.116921 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Mar 12 04:07:17.119136 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff]
Mar 12 04:07:17.119323 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff]
Mar 12 04:07:17.119505 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff]
Mar 12 04:07:17.119675 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Mar 12 04:07:17.119853 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff]
Mar 12 04:07:17.122083 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff]
Mar 12 04:07:17.122257 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Mar 12 04:07:17.122439 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff]
Mar 12 04:07:17.122619 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff]
Mar 12 04:07:17.122787 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Mar 12 04:07:17.123015 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff]
Mar 12 04:07:17.123188 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff]
Mar 12 04:07:17.123356 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Mar 12 04:07:17.123534 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff]
Mar 12 04:07:17.123705 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff]
Mar 12 04:07:17.123896 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref]
Mar 12 04:07:17.124098 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff]
Mar 12 04:07:17.124270 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff]
Mar 12 04:07:17.124439 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref]
Mar 12 04:07:17.124620 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff]
Mar 12 04:07:17.124791 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff]
Mar 12 04:07:17.127010 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Mar 12 04:07:17.127042 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Mar 12 04:07:17.127057 kernel: PCI: CLS 0 bytes, default 64
Mar 12 04:07:17.127071 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Mar 12 04:07:17.127085 kernel: software IO TLB: mapped [mem 
0x0000000073000000-0x0000000077000000] (64MB) Mar 12 04:07:17.127099 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Mar 12 04:07:17.127112 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns Mar 12 04:07:17.127126 kernel: Initialise system trusted keyrings Mar 12 04:07:17.127140 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Mar 12 04:07:17.127159 kernel: Key type asymmetric registered Mar 12 04:07:17.127172 kernel: Asymmetric key parser 'x509' registered Mar 12 04:07:17.127186 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Mar 12 04:07:17.127199 kernel: io scheduler mq-deadline registered Mar 12 04:07:17.127213 kernel: io scheduler kyber registered Mar 12 04:07:17.127226 kernel: io scheduler bfq registered Mar 12 04:07:17.127414 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Mar 12 04:07:17.127599 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Mar 12 04:07:17.127781 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 12 04:07:17.128008 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Mar 12 04:07:17.128190 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Mar 12 04:07:17.128372 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 12 04:07:17.128572 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Mar 12 04:07:17.128756 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Mar 12 04:07:17.128954 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 12 04:07:17.130644 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Mar 12 04:07:17.130826 kernel: pcieport 0000:00:02.3: AER: enabled 
with IRQ 27 Mar 12 04:07:17.131060 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 12 04:07:17.131253 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Mar 12 04:07:17.131433 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Mar 12 04:07:17.131611 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 12 04:07:17.131801 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Mar 12 04:07:17.134049 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Mar 12 04:07:17.134246 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 12 04:07:17.134433 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Mar 12 04:07:17.134614 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Mar 12 04:07:17.134795 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 12 04:07:17.135028 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Mar 12 04:07:17.135210 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Mar 12 04:07:17.135391 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 12 04:07:17.135413 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Mar 12 04:07:17.135428 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Mar 12 04:07:17.135442 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Mar 12 04:07:17.135464 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Mar 12 04:07:17.135478 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Mar 12 04:07:17.135492 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 
0x60,0x64 irq 1,12 Mar 12 04:07:17.135505 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Mar 12 04:07:17.135518 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Mar 12 04:07:17.135720 kernel: rtc_cmos 00:03: RTC can wake from S4 Mar 12 04:07:17.135743 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Mar 12 04:07:17.135921 kernel: rtc_cmos 00:03: registered as rtc0 Mar 12 04:07:17.137356 kernel: rtc_cmos 00:03: setting system clock to 2026-03-12T04:07:16 UTC (1773288436) Mar 12 04:07:17.137534 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Mar 12 04:07:17.137556 kernel: intel_pstate: CPU model not supported Mar 12 04:07:17.137570 kernel: NET: Registered PF_INET6 protocol family Mar 12 04:07:17.137584 kernel: Segment Routing with IPv6 Mar 12 04:07:17.137598 kernel: In-situ OAM (IOAM) with IPv6 Mar 12 04:07:17.137612 kernel: NET: Registered PF_PACKET protocol family Mar 12 04:07:17.137625 kernel: Key type dns_resolver registered Mar 12 04:07:17.137639 kernel: IPI shorthand broadcast: enabled Mar 12 04:07:17.137661 kernel: sched_clock: Marking stable (1438004398, 229454864)->(1792332088, -124872826) Mar 12 04:07:17.137675 kernel: registered taskstats version 1 Mar 12 04:07:17.137688 kernel: Loading compiled-in X.509 certificates Mar 12 04:07:17.137702 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 67287262975845098ef9f337a0e8baa9afd38510' Mar 12 04:07:17.137715 kernel: Key type .fscrypt registered Mar 12 04:07:17.137728 kernel: Key type fscrypt-provisioning registered Mar 12 04:07:17.137741 kernel: ima: No TPM chip found, activating TPM-bypass! 
Mar 12 04:07:17.137754 kernel: ima: Allocated hash algorithm: sha1 Mar 12 04:07:17.137768 kernel: ima: No architecture policies found Mar 12 04:07:17.137787 kernel: clk: Disabling unused clocks Mar 12 04:07:17.137801 kernel: Freeing unused kernel image (initmem) memory: 42892K Mar 12 04:07:17.137814 kernel: Write protecting the kernel read-only data: 36864k Mar 12 04:07:17.137828 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K Mar 12 04:07:17.137842 kernel: Run /init as init process Mar 12 04:07:17.137855 kernel: with arguments: Mar 12 04:07:17.137868 kernel: /init Mar 12 04:07:17.137896 kernel: with environment: Mar 12 04:07:17.137909 kernel: HOME=/ Mar 12 04:07:17.137922 kernel: TERM=linux Mar 12 04:07:17.137953 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Mar 12 04:07:17.137996 systemd[1]: Detected virtualization kvm. Mar 12 04:07:17.138012 systemd[1]: Detected architecture x86-64. Mar 12 04:07:17.138026 systemd[1]: Running in initrd. Mar 12 04:07:17.138040 systemd[1]: No hostname configured, using default hostname. Mar 12 04:07:17.138054 systemd[1]: Hostname set to . Mar 12 04:07:17.138069 systemd[1]: Initializing machine ID from VM UUID. Mar 12 04:07:17.138090 systemd[1]: Queued start job for default target initrd.target. Mar 12 04:07:17.138110 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 12 04:07:17.138125 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 12 04:07:17.138140 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... 
Mar 12 04:07:17.138154 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 12 04:07:17.138169 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Mar 12 04:07:17.138183 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Mar 12 04:07:17.138205 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Mar 12 04:07:17.138220 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Mar 12 04:07:17.138235 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 12 04:07:17.138249 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 12 04:07:17.138263 systemd[1]: Reached target paths.target - Path Units. Mar 12 04:07:17.138277 systemd[1]: Reached target slices.target - Slice Units. Mar 12 04:07:17.138292 systemd[1]: Reached target swap.target - Swaps. Mar 12 04:07:17.138306 systemd[1]: Reached target timers.target - Timer Units. Mar 12 04:07:17.138326 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Mar 12 04:07:17.138340 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 12 04:07:17.138355 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Mar 12 04:07:17.138369 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Mar 12 04:07:17.138383 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 12 04:07:17.138398 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 12 04:07:17.138412 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 12 04:07:17.138426 systemd[1]: Reached target sockets.target - Socket Units. 
Mar 12 04:07:17.138446 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Mar 12 04:07:17.138460 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 12 04:07:17.138474 systemd[1]: Finished network-cleanup.service - Network Cleanup. Mar 12 04:07:17.138489 systemd[1]: Starting systemd-fsck-usr.service... Mar 12 04:07:17.138504 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 12 04:07:17.138518 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 12 04:07:17.138532 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 12 04:07:17.138589 systemd-journald[203]: Collecting audit messages is disabled. Mar 12 04:07:17.138628 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Mar 12 04:07:17.138643 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 12 04:07:17.138657 systemd[1]: Finished systemd-fsck-usr.service. Mar 12 04:07:17.138678 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 12 04:07:17.138693 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 12 04:07:17.138708 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Mar 12 04:07:17.138721 kernel: Bridge firewalling registered Mar 12 04:07:17.138736 systemd-journald[203]: Journal started Mar 12 04:07:17.138767 systemd-journald[203]: Runtime Journal (/run/log/journal/95b33508410b4dec8448333af5c647e0) is 4.7M, max 38.0M, 33.2M free. Mar 12 04:07:17.069031 systemd-modules-load[204]: Inserted module 'overlay' Mar 12 04:07:17.158100 systemd[1]: Started systemd-journald.service - Journal Service. 
Mar 12 04:07:17.136536 systemd-modules-load[204]: Inserted module 'br_netfilter' Mar 12 04:07:17.160345 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 12 04:07:17.161369 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 12 04:07:17.178313 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 12 04:07:17.182149 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 12 04:07:17.185180 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 12 04:07:17.188833 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 12 04:07:17.207272 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 12 04:07:17.216253 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 12 04:07:17.218457 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 12 04:07:17.220319 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 12 04:07:17.229235 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Mar 12 04:07:17.234180 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Mar 12 04:07:17.252573 dracut-cmdline[238]: dracut-dracut-053 Mar 12 04:07:17.263230 dracut-cmdline[238]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=0e4243d51ac00bffbb09a606c7378a821ca08f30dbebc6b82c4452fcc120d7bc Mar 12 04:07:17.284108 systemd-resolved[240]: Positive Trust Anchors: Mar 12 04:07:17.284141 systemd-resolved[240]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 12 04:07:17.284186 systemd-resolved[240]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 12 04:07:17.290215 systemd-resolved[240]: Defaulting to hostname 'linux'. Mar 12 04:07:17.292923 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 12 04:07:17.294478 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 12 04:07:17.380997 kernel: SCSI subsystem initialized Mar 12 04:07:17.392014 kernel: Loading iSCSI transport class v2.0-870. 
Mar 12 04:07:17.406042 kernel: iscsi: registered transport (tcp) Mar 12 04:07:17.432543 kernel: iscsi: registered transport (qla4xxx) Mar 12 04:07:17.432637 kernel: QLogic iSCSI HBA Driver Mar 12 04:07:17.488382 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Mar 12 04:07:17.505203 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Mar 12 04:07:17.538685 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Mar 12 04:07:17.539042 kernel: device-mapper: uevent: version 1.0.3 Mar 12 04:07:17.539068 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Mar 12 04:07:17.591004 kernel: raid6: sse2x4 gen() 14025 MB/s Mar 12 04:07:17.609016 kernel: raid6: sse2x2 gen() 9703 MB/s Mar 12 04:07:17.627664 kernel: raid6: sse2x1 gen() 10450 MB/s Mar 12 04:07:17.627752 kernel: raid6: using algorithm sse2x4 gen() 14025 MB/s Mar 12 04:07:17.646705 kernel: raid6: .... xor() 7781 MB/s, rmw enabled Mar 12 04:07:17.646802 kernel: raid6: using ssse3x2 recovery algorithm Mar 12 04:07:17.673002 kernel: xor: automatically using best checksumming function avx Mar 12 04:07:17.866000 kernel: Btrfs loaded, zoned=no, fsverity=no Mar 12 04:07:17.880986 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Mar 12 04:07:17.888279 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 12 04:07:17.912382 systemd-udevd[423]: Using default interface naming scheme 'v255'. Mar 12 04:07:17.920032 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 12 04:07:17.928293 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Mar 12 04:07:17.952206 dracut-pre-trigger[429]: rd.md=0: removing MD RAID activation Mar 12 04:07:17.994063 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Mar 12 04:07:18.001157 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 12 04:07:18.117763 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 12 04:07:18.126777 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Mar 12 04:07:18.154391 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Mar 12 04:07:18.158356 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Mar 12 04:07:18.159147 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 12 04:07:18.161867 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 12 04:07:18.170169 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Mar 12 04:07:18.198648 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Mar 12 04:07:18.247079 kernel: virtio_blk virtio1: 2/0/0 default/read/poll queues Mar 12 04:07:18.267161 kernel: cryptd: max_cpu_qlen set to 1000 Mar 12 04:07:18.267222 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Mar 12 04:07:18.284882 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Mar 12 04:07:18.284947 kernel: GPT:17805311 != 125829119 Mar 12 04:07:18.285003 kernel: GPT:Alternate GPT header not at the end of the disk. Mar 12 04:07:18.285024 kernel: GPT:17805311 != 125829119 Mar 12 04:07:18.285041 kernel: GPT: Use GNU Parted to correct GPT errors. Mar 12 04:07:18.285058 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 12 04:07:18.308037 kernel: libata version 3.00 loaded. Mar 12 04:07:18.309887 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 12 04:07:18.311110 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 12 04:07:18.313526 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Mar 12 04:07:18.314595 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 12 04:07:18.320693 kernel: AVX version of gcm_enc/dec engaged. Mar 12 04:07:18.320724 kernel: AES CTR mode by8 optimization enabled Mar 12 04:07:18.320743 kernel: ahci 0000:00:1f.2: version 3.0 Mar 12 04:07:18.321062 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Mar 12 04:07:18.316253 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 12 04:07:18.333287 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Mar 12 04:07:18.335785 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Mar 12 04:07:18.336034 kernel: scsi host0: ahci Mar 12 04:07:18.337646 kernel: scsi host1: ahci Mar 12 04:07:18.337911 kernel: scsi host2: ahci Mar 12 04:07:18.317917 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Mar 12 04:07:18.344274 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 12 04:07:18.353463 kernel: scsi host3: ahci Mar 12 04:07:18.355216 kernel: scsi host4: ahci Mar 12 04:07:18.355472 kernel: scsi host5: ahci Mar 12 04:07:18.359990 kernel: ACPI: bus type USB registered Mar 12 04:07:18.366985 kernel: usbcore: registered new interface driver usbfs Mar 12 04:07:18.367025 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 38 Mar 12 04:07:18.369904 kernel: usbcore: registered new interface driver hub Mar 12 04:07:18.369942 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 38 Mar 12 04:07:18.369961 kernel: usbcore: registered new device driver usb Mar 12 04:07:18.369994 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 38 Mar 12 04:07:18.376319 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 38 Mar 12 04:07:18.378324 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 38 Mar 12 04:07:18.387175 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 
port 0xfea5b380 irq 38 Mar 12 04:07:18.432314 kernel: BTRFS: device fsid 94537345-7f6b-4b2a-965f-248bd6f0b7eb devid 1 transid 33 /dev/vda3 scanned by (udev-worker) (471) Mar 12 04:07:18.447513 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Mar 12 04:07:18.449393 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Mar 12 04:07:18.536651 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Mar 12 04:07:18.629250 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (476) Mar 12 04:07:18.629288 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 12 04:07:18.629449 disk-uuid[558]: Primary Header is updated. Mar 12 04:07:18.629449 disk-uuid[558]: Secondary Entries is updated. Mar 12 04:07:18.629449 disk-uuid[558]: Secondary Header is updated. Mar 12 04:07:18.633031 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 12 04:07:18.651990 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Mar 12 04:07:18.670198 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 12 04:07:18.703765 kernel: ata2: SATA link down (SStatus 0 SControl 300) Mar 12 04:07:18.703826 kernel: ata1: SATA link down (SStatus 0 SControl 300) Mar 12 04:07:18.707733 kernel: ata6: SATA link down (SStatus 0 SControl 300) Mar 12 04:07:18.707768 kernel: ata4: SATA link down (SStatus 0 SControl 300) Mar 12 04:07:18.707982 kernel: ata5: SATA link down (SStatus 0 SControl 300) Mar 12 04:07:18.710006 kernel: ata3: SATA link down (SStatus 0 SControl 300) Mar 12 04:07:18.716981 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Mar 12 04:07:18.720030 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Mar 12 04:07:18.740999 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Mar 12 04:07:18.739953 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Mar 12 04:07:18.751997 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1 Mar 12 04:07:18.752276 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Mar 12 04:07:18.757203 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Mar 12 04:07:18.757474 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2 Mar 12 04:07:18.758259 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed Mar 12 04:07:18.760294 kernel: hub 1-0:1.0: USB hub found Mar 12 04:07:18.762243 kernel: hub 1-0:1.0: 4 ports detected Mar 12 04:07:18.763501 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Mar 12 04:07:18.766294 kernel: hub 2-0:1.0: USB hub found Mar 12 04:07:18.766609 kernel: hub 2-0:1.0: 4 ports detected Mar 12 04:07:19.002045 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Mar 12 04:07:19.143086 kernel: hid: raw HID events driver (C) Jiri Kosina Mar 12 04:07:19.149118 kernel: usbcore: registered new interface driver usbhid Mar 12 04:07:19.149164 kernel: usbhid: USB HID core driver Mar 12 04:07:19.159056 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 Mar 12 04:07:19.159108 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0 Mar 12 04:07:19.565034 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 12 04:07:19.566760 disk-uuid[560]: The operation has completed successfully. Mar 12 04:07:19.619219 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 12 04:07:19.619410 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. 
Mar 12 04:07:19.645213 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Mar 12 04:07:19.650060 sh[587]: Success Mar 12 04:07:19.668754 kernel: device-mapper: verity: sha256 using implementation "sha256-avx" Mar 12 04:07:19.738450 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Mar 12 04:07:19.741095 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Mar 12 04:07:19.743179 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Mar 12 04:07:19.765215 kernel: BTRFS info (device dm-0): first mount of filesystem 94537345-7f6b-4b2a-965f-248bd6f0b7eb Mar 12 04:07:19.769008 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Mar 12 04:07:19.769052 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Mar 12 04:07:19.770349 kernel: BTRFS info (device dm-0): disabling log replay at mount time Mar 12 04:07:19.772028 kernel: BTRFS info (device dm-0): using free space tree Mar 12 04:07:19.784103 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Mar 12 04:07:19.785597 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Mar 12 04:07:19.799260 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Mar 12 04:07:19.802148 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Mar 12 04:07:19.822242 kernel: BTRFS info (device vda6): first mount of filesystem 0ebf6eb2-dc55-4706-86d1-78d37843d203 Mar 12 04:07:19.822313 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 12 04:07:19.822336 kernel: BTRFS info (device vda6): using free space tree Mar 12 04:07:19.826986 kernel: BTRFS info (device vda6): auto enabling async discard Mar 12 04:07:19.841058 systemd[1]: mnt-oem.mount: Deactivated successfully. 
Mar 12 04:07:19.843461 kernel: BTRFS info (device vda6): last unmount of filesystem 0ebf6eb2-dc55-4706-86d1-78d37843d203 Mar 12 04:07:19.852505 systemd[1]: Finished ignition-setup.service - Ignition (setup). Mar 12 04:07:19.861236 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Mar 12 04:07:19.950531 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 12 04:07:19.970278 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 12 04:07:20.003289 systemd-networkd[770]: lo: Link UP Mar 12 04:07:20.003302 systemd-networkd[770]: lo: Gained carrier Mar 12 04:07:20.005611 systemd-networkd[770]: Enumeration completed Mar 12 04:07:20.005750 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 12 04:07:20.007469 systemd-networkd[770]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 12 04:07:20.007475 systemd-networkd[770]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 12 04:07:20.008615 systemd-networkd[770]: eth0: Link UP Mar 12 04:07:20.008621 systemd-networkd[770]: eth0: Gained carrier Mar 12 04:07:20.008632 systemd-networkd[770]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 12 04:07:20.009099 systemd[1]: Reached target network.target - Network. Mar 12 04:07:20.136118 systemd-networkd[770]: eth0: DHCPv4 address 10.244.26.218/30, gateway 10.244.26.217 acquired from 10.244.26.217 Mar 12 04:07:20.150167 ignition[687]: Ignition 2.19.0 Mar 12 04:07:20.150200 ignition[687]: Stage: fetch-offline Mar 12 04:07:20.152648 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Mar 12 04:07:20.150286 ignition[687]: no configs at "/usr/lib/ignition/base.d"
Mar 12 04:07:20.150313 ignition[687]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Mar 12 04:07:20.150524 ignition[687]: parsed url from cmdline: ""
Mar 12 04:07:20.150531 ignition[687]: no config URL provided
Mar 12 04:07:20.150550 ignition[687]: reading system config file "/usr/lib/ignition/user.ign"
Mar 12 04:07:20.150569 ignition[687]: no config at "/usr/lib/ignition/user.ign"
Mar 12 04:07:20.150578 ignition[687]: failed to fetch config: resource requires networking
Mar 12 04:07:20.150923 ignition[687]: Ignition finished successfully
Mar 12 04:07:20.162262 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Mar 12 04:07:20.199293 ignition[778]: Ignition 2.19.0
Mar 12 04:07:20.199318 ignition[778]: Stage: fetch
Mar 12 04:07:20.199719 ignition[778]: no configs at "/usr/lib/ignition/base.d"
Mar 12 04:07:20.199746 ignition[778]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Mar 12 04:07:20.199958 ignition[778]: parsed url from cmdline: ""
Mar 12 04:07:20.201349 ignition[778]: no config URL provided
Mar 12 04:07:20.201363 ignition[778]: reading system config file "/usr/lib/ignition/user.ign"
Mar 12 04:07:20.201383 ignition[778]: no config at "/usr/lib/ignition/user.ign"
Mar 12 04:07:20.201651 ignition[778]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1
Mar 12 04:07:20.202477 ignition[778]: config drive ("/dev/disk/by-label/config-2") not found. Waiting...
Mar 12 04:07:20.202505 ignition[778]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting...
Mar 12 04:07:20.227882 ignition[778]: GET result: OK
Mar 12 04:07:20.228072 ignition[778]: parsing config with SHA512: 9f335d88f8a623416b96290a962db181dae9bca01d4eebcf6e811c83b4d3113096e58a436f91c9326178f14823a485fe0e7c6f71f188d3da44909aa68d49f716
Mar 12 04:07:20.235629 unknown[778]: fetched base config from "system"
Mar 12 04:07:20.235658 unknown[778]: fetched base config from "system"
Mar 12 04:07:20.236545 ignition[778]: fetch: fetch complete
Mar 12 04:07:20.235668 unknown[778]: fetched user config from "openstack"
Mar 12 04:07:20.236554 ignition[778]: fetch: fetch passed
Mar 12 04:07:20.239401 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Mar 12 04:07:20.236630 ignition[778]: Ignition finished successfully
Mar 12 04:07:20.245216 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 12 04:07:20.279309 ignition[784]: Ignition 2.19.0
Mar 12 04:07:20.279329 ignition[784]: Stage: kargs
Mar 12 04:07:20.279584 ignition[784]: no configs at "/usr/lib/ignition/base.d"
Mar 12 04:07:20.283049 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 12 04:07:20.279605 ignition[784]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Mar 12 04:07:20.280872 ignition[784]: kargs: kargs passed
Mar 12 04:07:20.280976 ignition[784]: Ignition finished successfully
Mar 12 04:07:20.291205 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 12 04:07:20.315608 ignition[791]: Ignition 2.19.0
Mar 12 04:07:20.315630 ignition[791]: Stage: disks
Mar 12 04:07:20.316003 ignition[791]: no configs at "/usr/lib/ignition/base.d"
Mar 12 04:07:20.316026 ignition[791]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Mar 12 04:07:20.318544 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 12 04:07:20.317337 ignition[791]: disks: disks passed
Mar 12 04:07:20.320147 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 12 04:07:20.317409 ignition[791]: Ignition finished successfully
Mar 12 04:07:20.321400 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 12 04:07:20.322886 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 12 04:07:20.324203 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 12 04:07:20.324880 systemd[1]: Reached target basic.target - Basic System.
Mar 12 04:07:20.333279 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 12 04:07:20.356120 systemd-fsck[800]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Mar 12 04:07:20.361316 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 12 04:07:20.367390 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 12 04:07:20.516023 kernel: EXT4-fs (vda9): mounted filesystem f90926b1-4cc2-4a2d-8c45-4ec584c98779 r/w with ordered data mode. Quota mode: none.
Mar 12 04:07:20.516769 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 12 04:07:20.518147 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 12 04:07:20.533257 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 12 04:07:20.537105 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 12 04:07:20.538345 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Mar 12 04:07:20.541245 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent...
Mar 12 04:07:20.544058 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 12 04:07:20.544106 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 12 04:07:20.551863 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 12 04:07:20.561786 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (808)
Mar 12 04:07:20.561834 kernel: BTRFS info (device vda6): first mount of filesystem 0ebf6eb2-dc55-4706-86d1-78d37843d203
Mar 12 04:07:20.561875 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 12 04:07:20.561895 kernel: BTRFS info (device vda6): using free space tree
Mar 12 04:07:20.564493 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 12 04:07:20.568402 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 12 04:07:20.571121 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 12 04:07:20.721590 initrd-setup-root[835]: cut: /sysroot/etc/passwd: No such file or directory
Mar 12 04:07:20.722766 initrd-setup-root[842]: cut: /sysroot/etc/group: No such file or directory
Mar 12 04:07:20.726025 initrd-setup-root[849]: cut: /sysroot/etc/shadow: No such file or directory
Mar 12 04:07:20.727027 initrd-setup-root[856]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 12 04:07:20.832638 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 12 04:07:20.842102 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 12 04:07:20.847204 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 12 04:07:20.856465 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 12 04:07:20.858653 kernel: BTRFS info (device vda6): last unmount of filesystem 0ebf6eb2-dc55-4706-86d1-78d37843d203
Mar 12 04:07:20.895374 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 12 04:07:20.900558 ignition[924]: INFO : Ignition 2.19.0
Mar 12 04:07:20.900558 ignition[924]: INFO : Stage: mount
Mar 12 04:07:20.902379 ignition[924]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 12 04:07:20.902379 ignition[924]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Mar 12 04:07:20.902379 ignition[924]: INFO : mount: mount passed
Mar 12 04:07:20.902379 ignition[924]: INFO : Ignition finished successfully
Mar 12 04:07:20.904171 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 12 04:07:21.422301 systemd-networkd[770]: eth0: Gained IPv6LL
Mar 12 04:07:22.934179 systemd-networkd[770]: eth0: Ignoring DHCPv6 address 2a02:1348:17d:6b6:24:19ff:fef4:1ada/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:17d:6b6:24:19ff:fef4:1ada/64 assigned by NDisc.
Mar 12 04:07:22.934197 systemd-networkd[770]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no.
Mar 12 04:07:27.811575 coreos-metadata[810]: Mar 12 04:07:27.811 WARN failed to locate config-drive, using the metadata service API instead
Mar 12 04:07:27.834607 coreos-metadata[810]: Mar 12 04:07:27.834 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Mar 12 04:07:27.851901 coreos-metadata[810]: Mar 12 04:07:27.851 INFO Fetch successful
Mar 12 04:07:27.853110 coreos-metadata[810]: Mar 12 04:07:27.853 INFO wrote hostname srv-faxgs.gb1.brightbox.com to /sysroot/etc/hostname
Mar 12 04:07:27.855309 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully.
Mar 12 04:07:27.856721 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent.
Mar 12 04:07:27.865118 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 12 04:07:27.887028 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 12 04:07:27.927997 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (941)
Mar 12 04:07:27.931983 kernel: BTRFS info (device vda6): first mount of filesystem 0ebf6eb2-dc55-4706-86d1-78d37843d203
Mar 12 04:07:27.935630 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 12 04:07:27.935768 kernel: BTRFS info (device vda6): using free space tree
Mar 12 04:07:27.940005 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 12 04:07:27.943722 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 12 04:07:27.988099 ignition[958]: INFO : Ignition 2.19.0
Mar 12 04:07:27.992332 ignition[958]: INFO : Stage: files
Mar 12 04:07:27.993721 ignition[958]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 12 04:07:27.994660 ignition[958]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Mar 12 04:07:27.997525 ignition[958]: DEBUG : files: compiled without relabeling support, skipping
Mar 12 04:07:28.000586 ignition[958]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 12 04:07:28.001640 ignition[958]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 12 04:07:28.008644 ignition[958]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 12 04:07:28.010015 ignition[958]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 12 04:07:28.011256 unknown[958]: wrote ssh authorized keys file for user: core
Mar 12 04:07:28.012672 ignition[958]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 12 04:07:28.013812 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Mar 12 04:07:28.015033 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Mar 12 04:07:28.015033 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 12 04:07:28.015033 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Mar 12 04:07:28.225078 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Mar 12 04:07:28.552240 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 12 04:07:28.552240 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 12 04:07:28.552240 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Mar 12 04:07:28.785764 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Mar 12 04:07:29.111000 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 12 04:07:29.113091 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh"
Mar 12 04:07:29.113091 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh"
Mar 12 04:07:29.113091 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 12 04:07:29.113091 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 12 04:07:29.113091 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 12 04:07:29.113091 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 12 04:07:29.113091 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 12 04:07:29.113091 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 12 04:07:29.113091 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 12 04:07:29.123855 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 12 04:07:29.123855 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Mar 12 04:07:29.123855 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Mar 12 04:07:29.123855 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Mar 12 04:07:29.123855 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-x86-64.raw: attempt #1
Mar 12 04:07:29.407379 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK
Mar 12 04:07:30.997173 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Mar 12 04:07:30.997173 ignition[958]: INFO : files: op(d): [started] processing unit "containerd.service"
Mar 12 04:07:31.002090 ignition[958]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Mar 12 04:07:31.002090 ignition[958]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Mar 12 04:07:31.002090 ignition[958]: INFO : files: op(d): [finished] processing unit "containerd.service"
Mar 12 04:07:31.002090 ignition[958]: INFO : files: op(f): [started] processing unit "prepare-helm.service"
Mar 12 04:07:31.002090 ignition[958]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 12 04:07:31.002090 ignition[958]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 12 04:07:31.002090 ignition[958]: INFO : files: op(f): [finished] processing unit "prepare-helm.service"
Mar 12 04:07:31.002090 ignition[958]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Mar 12 04:07:31.002090 ignition[958]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Mar 12 04:07:31.002090 ignition[958]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 12 04:07:31.002090 ignition[958]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 12 04:07:31.002090 ignition[958]: INFO : files: files passed
Mar 12 04:07:31.002090 ignition[958]: INFO : Ignition finished successfully
Mar 12 04:07:31.002441 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 12 04:07:31.013289 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 12 04:07:31.026143 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 12 04:07:31.045838 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 12 04:07:31.047819 initrd-setup-root-after-ignition[985]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 12 04:07:31.047819 initrd-setup-root-after-ignition[985]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 12 04:07:31.048716 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 12 04:07:31.050438 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 12 04:07:31.058241 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 12 04:07:31.098731 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 12 04:07:31.098903 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 12 04:07:31.100846 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 12 04:07:31.102277 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 12 04:07:31.116886 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 12 04:07:31.117068 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 12 04:07:31.119276 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 12 04:07:31.125216 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 12 04:07:31.153798 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 12 04:07:31.160181 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 12 04:07:31.174948 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 12 04:07:31.176789 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 12 04:07:31.177767 systemd[1]: Stopped target timers.target - Timer Units.
Mar 12 04:07:31.179282 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 12 04:07:31.179466 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 12 04:07:31.181304 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 12 04:07:31.182239 systemd[1]: Stopped target basic.target - Basic System.
Mar 12 04:07:31.183766 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 12 04:07:31.185275 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 12 04:07:31.186679 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 12 04:07:31.188255 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 12 04:07:31.189788 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 12 04:07:31.191638 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 12 04:07:31.193199 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 12 04:07:31.194774 systemd[1]: Stopped target swap.target - Swaps.
Mar 12 04:07:31.196279 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 12 04:07:31.196473 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 12 04:07:31.198247 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 12 04:07:31.199253 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 12 04:07:31.200613 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 12 04:07:31.201002 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 12 04:07:31.202140 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 12 04:07:31.202310 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 12 04:07:31.204335 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 12 04:07:31.204505 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 12 04:07:31.206167 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 12 04:07:31.206322 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 12 04:07:31.220071 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 12 04:07:31.223616 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 12 04:07:31.225144 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 12 04:07:31.226317 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 12 04:07:31.228681 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 12 04:07:31.230040 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 12 04:07:31.243521 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 12 04:07:31.244245 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 12 04:07:31.250051 ignition[1011]: INFO : Ignition 2.19.0
Mar 12 04:07:31.250051 ignition[1011]: INFO : Stage: umount
Mar 12 04:07:31.250051 ignition[1011]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 12 04:07:31.250051 ignition[1011]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Mar 12 04:07:31.256462 ignition[1011]: INFO : umount: umount passed
Mar 12 04:07:31.256462 ignition[1011]: INFO : Ignition finished successfully
Mar 12 04:07:31.256153 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 12 04:07:31.256335 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 12 04:07:31.258491 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 12 04:07:31.258584 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 12 04:07:31.260162 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 12 04:07:31.260243 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 12 04:07:31.260951 systemd[1]: ignition-fetch.service: Deactivated successfully.
Mar 12 04:07:31.262385 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Mar 12 04:07:31.265094 systemd[1]: Stopped target network.target - Network.
Mar 12 04:07:31.265721 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 12 04:07:31.265797 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 12 04:07:31.266650 systemd[1]: Stopped target paths.target - Path Units.
Mar 12 04:07:31.268011 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 12 04:07:31.270149 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 12 04:07:31.271354 systemd[1]: Stopped target slices.target - Slice Units.
Mar 12 04:07:31.272678 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 12 04:07:31.274305 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 12 04:07:31.274375 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 12 04:07:31.275885 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 12 04:07:31.275954 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 12 04:07:31.277254 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 12 04:07:31.277328 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 12 04:07:31.278742 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 12 04:07:31.278813 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 12 04:07:31.280817 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 12 04:07:31.282755 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 12 04:07:31.286027 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 12 04:07:31.286840 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 12 04:07:31.287019 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 12 04:07:31.287210 systemd-networkd[770]: eth0: DHCPv6 lease lost
Mar 12 04:07:31.289621 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 12 04:07:31.289777 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 12 04:07:31.294575 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 12 04:07:31.294787 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 12 04:07:31.298403 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 12 04:07:31.298985 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 12 04:07:31.301444 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 12 04:07:31.301540 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 12 04:07:31.311527 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 12 04:07:31.312240 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 12 04:07:31.312321 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 12 04:07:31.313593 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 12 04:07:31.313691 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 12 04:07:31.316013 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 12 04:07:31.316089 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 12 04:07:31.317781 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 12 04:07:31.317853 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 12 04:07:31.319522 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 12 04:07:31.330442 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 12 04:07:31.330741 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 12 04:07:31.333887 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 12 04:07:31.334734 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 12 04:07:31.336066 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 12 04:07:31.336132 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 12 04:07:31.337703 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 12 04:07:31.337777 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 12 04:07:31.341663 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 12 04:07:31.341738 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 12 04:07:31.343224 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 12 04:07:31.343307 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 12 04:07:31.352254 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 12 04:07:31.353098 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 12 04:07:31.353182 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 12 04:07:31.356983 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Mar 12 04:07:31.357063 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 12 04:07:31.361168 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 12 04:07:31.361242 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 12 04:07:31.362852 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 12 04:07:31.362921 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 12 04:07:31.367288 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 12 04:07:31.367431 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 12 04:07:31.368717 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 12 04:07:31.368860 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 12 04:07:31.370769 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 12 04:07:31.382223 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 12 04:07:31.393166 systemd[1]: Switching root.
Mar 12 04:07:31.424149 systemd-journald[203]: Journal stopped
Mar 12 04:07:33.015863 systemd-journald[203]: Received SIGTERM from PID 1 (systemd).
Mar 12 04:07:33.016012 kernel: SELinux: policy capability network_peer_controls=1
Mar 12 04:07:33.016063 kernel: SELinux: policy capability open_perms=1
Mar 12 04:07:33.016091 kernel: SELinux: policy capability extended_socket_class=1
Mar 12 04:07:33.016112 kernel: SELinux: policy capability always_check_network=0
Mar 12 04:07:33.016130 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 12 04:07:33.016149 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 12 04:07:33.016169 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 12 04:07:33.016187 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 12 04:07:33.016205 kernel: audit: type=1403 audit(1773288451.719:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 12 04:07:33.016243 systemd[1]: Successfully loaded SELinux policy in 48.918ms.
Mar 12 04:07:33.016269 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 22.126ms.
Mar 12 04:07:33.016291 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 12 04:07:33.016311 systemd[1]: Detected virtualization kvm.
Mar 12 04:07:33.016362 systemd[1]: Detected architecture x86-64.
Mar 12 04:07:33.016391 systemd[1]: Detected first boot.
Mar 12 04:07:33.016412 systemd[1]: Hostname set to .
Mar 12 04:07:33.016433 systemd[1]: Initializing machine ID from VM UUID.
Mar 12 04:07:33.016459 zram_generator::config[1074]: No configuration found.
Mar 12 04:07:33.016496 systemd[1]: Populated /etc with preset unit settings.
Mar 12 04:07:33.016526 systemd[1]: Queued start job for default target multi-user.target.
Mar 12 04:07:33.016548 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Mar 12 04:07:33.016577 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 12 04:07:33.016611 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 12 04:07:33.016634 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 12 04:07:33.016655 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 12 04:07:33.016676 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 12 04:07:33.016711 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 12 04:07:33.016734 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 12 04:07:33.016754 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 12 04:07:33.016775 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 12 04:07:33.016796 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 12 04:07:33.016816 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 12 04:07:33.016835 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 12 04:07:33.016862 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 12 04:07:33.016921 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 12 04:07:33.016944 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Mar 12 04:07:33.016980 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 12 04:07:33.017004 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 12 04:07:33.017024 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 12 04:07:33.017053 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 12 04:07:33.017074 systemd[1]: Reached target slices.target - Slice Units.
Mar 12 04:07:33.017111 systemd[1]: Reached target swap.target - Swaps.
Mar 12 04:07:33.017147 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 12 04:07:33.017182 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 12 04:07:33.017210 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 12 04:07:33.017232 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Mar 12 04:07:33.017252 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 12 04:07:33.017277 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 12 04:07:33.017299 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 12 04:07:33.017319 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 12 04:07:33.017339 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 12 04:07:33.017360 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 12 04:07:33.017380 systemd[1]: Mounting media.mount - External Media Directory...
Mar 12 04:07:33.017401 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 12 04:07:33.017428 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 12 04:07:33.017477 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 12 04:07:33.017514 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 12 04:07:33.017543 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 12 04:07:33.017566 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 12 04:07:33.017587 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 12 04:07:33.017618 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 12 04:07:33.017641 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 12 04:07:33.017661 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 12 04:07:33.017682 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 12 04:07:33.017709 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 12 04:07:33.017730 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 12 04:07:33.017751 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 12 04:07:33.017773 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Mar 12 04:07:33.017794 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Mar 12 04:07:33.017814 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 12 04:07:33.017834 kernel: loop: module loaded
Mar 12 04:07:33.017854 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 12 04:07:33.017874 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 12 04:07:33.017900 kernel: ACPI: bus type drm_connector registered
Mar 12 04:07:33.017920 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 12 04:07:33.017941 kernel: fuse: init (API version 7.39)
Mar 12 04:07:33.018009 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 12 04:07:33.018033 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 12 04:07:33.018061 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 12 04:07:33.018083 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 12 04:07:33.018104 systemd[1]: Mounted media.mount - External Media Directory.
Mar 12 04:07:33.018124 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 12 04:07:33.018152 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 12 04:07:33.018175 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 12 04:07:33.018227 systemd-journald[1178]: Collecting audit messages is disabled.
Mar 12 04:07:33.018264 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 12 04:07:33.018285 systemd-journald[1178]: Journal started
Mar 12 04:07:33.018332 systemd-journald[1178]: Runtime Journal (/run/log/journal/95b33508410b4dec8448333af5c647e0) is 4.7M, max 38.0M, 33.2M free.
Mar 12 04:07:33.023986 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 12 04:07:33.025569 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 12 04:07:33.026799 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 12 04:07:33.027095 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 12 04:07:33.028321 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 12 04:07:33.028563 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 12 04:07:33.030062 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 12 04:07:33.030302 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 12 04:07:33.031471 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 12 04:07:33.031724 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 12 04:07:33.033095 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 12 04:07:33.033330 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 12 04:07:33.034462 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 12 04:07:33.034769 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 12 04:07:33.036824 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 12 04:07:33.039450 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 12 04:07:33.041614 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 12 04:07:33.063271 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 12 04:07:33.070084 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 12 04:07:33.080080 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 12 04:07:33.080921 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 12 04:07:33.094171 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 12 04:07:33.111270 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 12 04:07:33.113078 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 12 04:07:33.115897 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 12 04:07:33.119121 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 12 04:07:33.126291 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 12 04:07:33.140442 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 12 04:07:33.147894 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 12 04:07:33.148853 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 12 04:07:33.155294 systemd-journald[1178]: Time spent on flushing to /var/log/journal/95b33508410b4dec8448333af5c647e0 is 66.942ms for 1129 entries.
Mar 12 04:07:33.155294 systemd-journald[1178]: System Journal (/var/log/journal/95b33508410b4dec8448333af5c647e0) is 8.0M, max 584.8M, 576.8M free.
Mar 12 04:07:33.330073 systemd-journald[1178]: Received client request to flush runtime journal.
Mar 12 04:07:33.238853 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 12 04:07:33.239959 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 12 04:07:33.295563 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 12 04:07:33.309557 systemd-tmpfiles[1226]: ACLs are not supported, ignoring.
Mar 12 04:07:33.309577 systemd-tmpfiles[1226]: ACLs are not supported, ignoring.
Mar 12 04:07:33.325118 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 12 04:07:33.337373 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 12 04:07:33.341240 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 12 04:07:33.360562 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 12 04:07:33.371313 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Mar 12 04:07:33.386584 udevadm[1246]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Mar 12 04:07:33.399553 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 12 04:07:33.411148 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 12 04:07:33.433663 systemd-tmpfiles[1249]: ACLs are not supported, ignoring.
Mar 12 04:07:33.433692 systemd-tmpfiles[1249]: ACLs are not supported, ignoring.
Mar 12 04:07:33.441688 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 12 04:07:34.100083 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 12 04:07:34.111369 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 12 04:07:34.159113 systemd-udevd[1255]: Using default interface naming scheme 'v255'.
Mar 12 04:07:34.189316 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 12 04:07:34.203189 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 12 04:07:34.232177 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 12 04:07:34.300283 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Mar 12 04:07:34.314918 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0.
Mar 12 04:07:34.423175 systemd-networkd[1260]: lo: Link UP
Mar 12 04:07:34.423807 systemd-networkd[1260]: lo: Gained carrier
Mar 12 04:07:34.426471 systemd-networkd[1260]: Enumeration completed
Mar 12 04:07:34.426751 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 12 04:07:34.428462 systemd-networkd[1260]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 12 04:07:34.431018 systemd-networkd[1260]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 12 04:07:34.432866 systemd-networkd[1260]: eth0: Link UP
Mar 12 04:07:34.433012 systemd-networkd[1260]: eth0: Gained carrier
Mar 12 04:07:34.433222 systemd-networkd[1260]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 12 04:07:34.445174 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Mar 12 04:07:34.461120 systemd-networkd[1260]: eth0: DHCPv4 address 10.244.26.218/30, gateway 10.244.26.217 acquired from 10.244.26.217
Mar 12 04:07:34.467070 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1270)
Mar 12 04:07:34.550605 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Mar 12 04:07:34.550698 kernel: mousedev: PS/2 mouse device common for all mice
Mar 12 04:07:34.573563 systemd-networkd[1260]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 12 04:07:34.579001 kernel: ACPI: button: Power Button [PWRF]
Mar 12 04:07:34.608801 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 12 04:07:34.627553 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Mar 12 04:07:34.627982 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Mar 12 04:07:34.628257 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Mar 12 04:07:34.701026 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Mar 12 04:07:34.733065 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 12 04:07:34.919431 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Mar 12 04:07:34.942939 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 12 04:07:34.950185 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Mar 12 04:07:34.971989 lvm[1295]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 12 04:07:35.008593 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Mar 12 04:07:35.009899 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 12 04:07:35.017289 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Mar 12 04:07:35.025745 lvm[1299]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 12 04:07:35.055362 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Mar 12 04:07:35.056515 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 12 04:07:35.057442 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 12 04:07:35.057486 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 12 04:07:35.058253 systemd[1]: Reached target machines.target - Containers.
Mar 12 04:07:35.060667 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Mar 12 04:07:35.067215 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 12 04:07:35.071166 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 12 04:07:35.072196 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 12 04:07:35.081155 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Mar 12 04:07:35.084629 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Mar 12 04:07:35.098171 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 12 04:07:35.101198 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 12 04:07:35.108714 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Mar 12 04:07:35.132017 kernel: loop0: detected capacity change from 0 to 142488
Mar 12 04:07:35.145285 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 12 04:07:35.146310 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Mar 12 04:07:35.168660 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 12 04:07:35.195008 kernel: loop1: detected capacity change from 0 to 228704
Mar 12 04:07:35.273025 kernel: loop2: detected capacity change from 0 to 8
Mar 12 04:07:35.305172 kernel: loop3: detected capacity change from 0 to 140768
Mar 12 04:07:35.384017 kernel: loop4: detected capacity change from 0 to 142488
Mar 12 04:07:35.416007 kernel: loop5: detected capacity change from 0 to 228704
Mar 12 04:07:35.432296 kernel: loop6: detected capacity change from 0 to 8
Mar 12 04:07:35.436031 kernel: loop7: detected capacity change from 0 to 140768
Mar 12 04:07:35.463565 (sd-merge)[1320]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'.
Mar 12 04:07:35.466718 (sd-merge)[1320]: Merged extensions into '/usr'.
Mar 12 04:07:35.473878 systemd[1]: Reloading requested from client PID 1307 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 12 04:07:35.473919 systemd[1]: Reloading...
Mar 12 04:07:35.607435 zram_generator::config[1345]: No configuration found.
Mar 12 04:07:35.696131 systemd-networkd[1260]: eth0: Gained IPv6LL
Mar 12 04:07:35.961859 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 12 04:07:36.018412 ldconfig[1303]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 12 04:07:36.058660 systemd[1]: Reloading finished in 583 ms.
Mar 12 04:07:36.081465 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Mar 12 04:07:36.083062 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 12 04:07:36.084424 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 12 04:07:36.096268 systemd[1]: Starting ensure-sysext.service...
Mar 12 04:07:36.113314 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 12 04:07:36.123934 systemd[1]: Reloading requested from client PID 1413 ('systemctl') (unit ensure-sysext.service)...
Mar 12 04:07:36.124149 systemd[1]: Reloading...
Mar 12 04:07:36.162110 systemd-tmpfiles[1414]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 12 04:07:36.162715 systemd-tmpfiles[1414]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 12 04:07:36.165640 systemd-tmpfiles[1414]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 12 04:07:36.167139 systemd-tmpfiles[1414]: ACLs are not supported, ignoring.
Mar 12 04:07:36.167273 systemd-tmpfiles[1414]: ACLs are not supported, ignoring.
Mar 12 04:07:36.175011 systemd-tmpfiles[1414]: Detected autofs mount point /boot during canonicalization of boot.
Mar 12 04:07:36.175029 systemd-tmpfiles[1414]: Skipping /boot
Mar 12 04:07:36.201329 systemd-tmpfiles[1414]: Detected autofs mount point /boot during canonicalization of boot.
Mar 12 04:07:36.201349 systemd-tmpfiles[1414]: Skipping /boot
Mar 12 04:07:36.215113 systemd-networkd[1260]: eth0: Ignoring DHCPv6 address 2a02:1348:17d:6b6:24:19ff:fef4:1ada/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:17d:6b6:24:19ff:fef4:1ada/64 assigned by NDisc.
Mar 12 04:07:36.215127 systemd-networkd[1260]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no.
Mar 12 04:07:36.224988 zram_generator::config[1439]: No configuration found.
Mar 12 04:07:36.442351 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 12 04:07:36.533930 systemd[1]: Reloading finished in 409 ms.
Mar 12 04:07:36.558351 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 12 04:07:36.619391 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Mar 12 04:07:36.627157 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 12 04:07:36.636153 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 12 04:07:36.653194 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 12 04:07:36.665190 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 12 04:07:36.679672 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 12 04:07:36.681013 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 12 04:07:36.691321 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 12 04:07:36.704319 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 12 04:07:36.711264 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 12 04:07:36.716250 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 12 04:07:36.717239 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 12 04:07:36.719838 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 12 04:07:36.722779 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 12 04:07:36.725086 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 12 04:07:36.727862 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 12 04:07:36.734602 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 12 04:07:36.743739 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 12 04:07:36.748181 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 12 04:07:36.759113 augenrules[1533]: No rules
Mar 12 04:07:36.765054 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Mar 12 04:07:36.774687 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 12 04:07:36.784679 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 12 04:07:36.785364 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 12 04:07:36.793394 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 12 04:07:36.798070 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 12 04:07:36.814927 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 12 04:07:36.840420 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 12 04:07:36.840902 systemd-resolved[1518]: Positive Trust Anchors:
Mar 12 04:07:36.841386 systemd-resolved[1518]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 12 04:07:36.841409 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 12 04:07:36.841636 systemd-resolved[1518]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 12 04:07:36.848252 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 12 04:07:36.850726 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 12 04:07:36.851557 systemd-resolved[1518]: Using system hostname 'srv-faxgs.gb1.brightbox.com'.
Mar 12 04:07:36.858866 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 12 04:07:36.861563 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 12 04:07:36.871557 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 12 04:07:36.871820 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 12 04:07:36.873735 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 12 04:07:36.874005 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 12 04:07:36.875675 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 12 04:07:36.875921 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 12 04:07:36.877703 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 12 04:07:36.879241 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 12 04:07:36.887617 systemd[1]: Finished ensure-sysext.service.
Mar 12 04:07:36.890863 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 12 04:07:36.897933 systemd[1]: Reached target network.target - Network.
Mar 12 04:07:36.898811 systemd[1]: Reached target network-online.target - Network is Online.
Mar 12 04:07:36.899675 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 12 04:07:36.900481 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 12 04:07:36.900597 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 12 04:07:36.908190 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Mar 12 04:07:36.909160 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 12 04:07:36.989303 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Mar 12 04:07:36.990815 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 12 04:07:36.991678 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Mar 12 04:07:36.992498 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Mar 12 04:07:36.994536 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Mar 12 04:07:36.995378 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 12 04:07:36.995430 systemd[1]: Reached target paths.target - Path Units.
Mar 12 04:07:36.996121 systemd[1]: Reached target time-set.target - System Time Set.
Mar 12 04:07:36.997098 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Mar 12 04:07:36.998003 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Mar 12 04:07:36.998804 systemd[1]: Reached target timers.target - Timer Units.
Mar 12 04:07:37.000412 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Mar 12 04:07:37.003659 systemd[1]: Starting docker.socket - Docker Socket for the API...
Mar 12 04:07:37.006451 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Mar 12 04:07:37.013523 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Mar 12 04:07:37.014466 systemd[1]: Reached target sockets.target - Socket Units.
Mar 12 04:07:37.015190 systemd[1]: Reached target basic.target - Basic System.
Mar 12 04:07:37.016183 systemd[1]: System is tainted: cgroupsv1
Mar 12 04:07:37.016270 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Mar 12 04:07:37.016313 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Mar 12 04:07:37.033162 systemd[1]: Starting containerd.service - containerd container runtime...
Mar 12 04:07:37.037181 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Mar 12 04:07:37.047201 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Mar 12 04:07:37.050089 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Mar 12 04:07:37.056174 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Mar 12 04:07:37.058040 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Mar 12 04:07:37.070120 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 12 04:07:37.085903 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Mar 12 04:07:37.102725 jq[1576]: false
Mar 12 04:07:37.110818 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Mar 12 04:07:37.116824 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Mar 12 04:07:37.131155 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Mar 12 04:07:37.146067 extend-filesystems[1578]: Found loop4
Mar 12 04:07:37.156014 extend-filesystems[1578]: Found loop5
Mar 12 04:07:37.156014 extend-filesystems[1578]: Found loop6
Mar 12 04:07:37.156014 extend-filesystems[1578]: Found loop7
Mar 12 04:07:37.156014 extend-filesystems[1578]: Found vda
Mar 12 04:07:37.156014 extend-filesystems[1578]: Found vda1
Mar 12 04:07:37.156014 extend-filesystems[1578]: Found vda2
Mar 12 04:07:37.156014 extend-filesystems[1578]: Found vda3
Mar 12 04:07:37.156014 extend-filesystems[1578]: Found usr
Mar 12 04:07:37.156014 extend-filesystems[1578]: Found vda4
Mar 12 04:07:37.156014 extend-filesystems[1578]: Found vda6
Mar 12 04:07:37.156014 extend-filesystems[1578]: Found vda7
Mar 12 04:07:37.156014 extend-filesystems[1578]: Found vda9
Mar 12 04:07:37.156014 extend-filesystems[1578]: Checking size of /dev/vda9
Mar 12 04:07:37.163836 dbus-daemon[1575]: [system] SELinux support is enabled
Mar 12 04:07:37.166122 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Mar 12 04:07:37.183060 dbus-daemon[1575]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1260 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Mar 12 04:07:37.180173 systemd[1]: Starting systemd-logind.service - User Login Management...
Mar 12 04:07:37.191626 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Mar 12 04:07:37.199905 extend-filesystems[1578]: Resized partition /dev/vda9 Mar 12 04:07:37.214054 extend-filesystems[1608]: resize2fs 1.47.1 (20-May-2024) Mar 12 04:07:37.205486 systemd[1]: Starting update-engine.service - Update Engine... Mar 12 04:07:37.210080 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Mar 12 04:07:37.219720 systemd[1]: Started dbus.service - D-Bus System Message Bus. Mar 12 04:07:37.239463 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks Mar 12 04:07:37.236144 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 12 04:07:37.236539 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Mar 12 04:07:37.240740 systemd[1]: motdgen.service: Deactivated successfully. Mar 12 04:07:37.241334 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Mar 12 04:07:37.248140 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 12 04:07:37.256062 jq[1610]: true Mar 12 04:07:37.827409 systemd-resolved[1518]: Clock change detected. Flushing caches. Mar 12 04:07:37.827747 systemd-timesyncd[1568]: Contacted time server 77.95.181.60:123 (0.flatcar.pool.ntp.org). Mar 12 04:07:37.827827 systemd-timesyncd[1568]: Initial clock synchronization to Thu 2026-03-12 04:07:37.827337 UTC. Mar 12 04:07:37.842323 update_engine[1609]: I20260312 04:07:37.833997 1609 main.cc:92] Flatcar Update Engine starting Mar 12 04:07:37.842323 update_engine[1609]: I20260312 04:07:37.839661 1609 update_check_scheduler.cc:74] Next update check in 2m20s Mar 12 04:07:37.836708 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 12 04:07:37.837076 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Mar 12 04:07:37.885110 jq[1620]: true Mar 12 04:07:37.898862 (ntainerd)[1621]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 12 04:07:37.923106 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 12 04:07:37.923183 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Mar 12 04:07:37.924745 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 12 04:07:37.924777 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Mar 12 04:07:37.929931 systemd[1]: Started update-engine.service - Update Engine. Mar 12 04:07:37.933006 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Mar 12 04:07:37.933185 dbus-daemon[1575]: [system] Successfully activated service 'org.freedesktop.systemd1' Mar 12 04:07:37.942754 systemd[1]: Started locksmithd.service - Cluster reboot manager. Mar 12 04:07:37.951594 tar[1617]: linux-amd64/LICENSE Mar 12 04:07:37.951594 tar[1617]: linux-amd64/helm Mar 12 04:07:37.962773 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Mar 12 04:07:38.133712 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1643) Mar 12 04:07:38.288333 bash[1657]: Updated "/home/core/.ssh/authorized_keys" Mar 12 04:07:38.292155 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. 
Mar 12 04:07:38.306655 systemd-logind[1599]: Watching system buttons on /dev/input/event2 (Power Button) Mar 12 04:07:38.306713 systemd-logind[1599]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Mar 12 04:07:38.313278 systemd[1]: Starting sshkeys.service... Mar 12 04:07:38.315231 systemd-logind[1599]: New seat seat0. Mar 12 04:07:38.342319 systemd[1]: Started systemd-logind.service - User Login Management. Mar 12 04:07:38.441591 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Mar 12 04:07:38.526840 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Mar 12 04:07:38.556026 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Mar 12 04:07:38.580448 extend-filesystems[1608]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Mar 12 04:07:38.580448 extend-filesystems[1608]: old_desc_blocks = 1, new_desc_blocks = 8 Mar 12 04:07:38.580448 extend-filesystems[1608]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Mar 12 04:07:38.600696 extend-filesystems[1578]: Resized filesystem in /dev/vda9 Mar 12 04:07:38.584064 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 12 04:07:38.584475 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Mar 12 04:07:38.694133 dbus-daemon[1575]: [system] Successfully activated service 'org.freedesktop.hostname1' Mar 12 04:07:38.694427 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Mar 12 04:07:38.698716 dbus-daemon[1575]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1637 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Mar 12 04:07:38.712484 systemd[1]: Starting polkit.service - Authorization Manager... 
Mar 12 04:07:38.787652 polkitd[1682]: Started polkitd version 121 Mar 12 04:07:38.878125 locksmithd[1635]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 12 04:07:38.894440 sshd_keygen[1619]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 12 04:07:38.909583 containerd[1621]: time="2026-03-12T04:07:38.908519471Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Mar 12 04:07:38.920024 polkitd[1682]: Loading rules from directory /etc/polkit-1/rules.d Mar 12 04:07:38.920182 polkitd[1682]: Loading rules from directory /usr/share/polkit-1/rules.d Mar 12 04:07:38.924502 polkitd[1682]: Finished loading, compiling and executing 2 rules Mar 12 04:07:38.927744 dbus-daemon[1575]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Mar 12 04:07:38.928029 systemd[1]: Started polkit.service - Authorization Manager. Mar 12 04:07:38.930465 polkitd[1682]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Mar 12 04:07:38.978946 systemd-hostnamed[1637]: Hostname set to (static) Mar 12 04:07:38.982449 containerd[1621]: time="2026-03-12T04:07:38.981759501Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Mar 12 04:07:38.987465 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 12 04:07:38.995957 containerd[1621]: time="2026-03-12T04:07:38.995901780Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Mar 12 04:07:38.996083 containerd[1621]: time="2026-03-12T04:07:38.996058327Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." 
type=io.containerd.event.v1 Mar 12 04:07:39.002585 containerd[1621]: time="2026-03-12T04:07:38.996645312Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Mar 12 04:07:39.002585 containerd[1621]: time="2026-03-12T04:07:38.996942162Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Mar 12 04:07:39.002585 containerd[1621]: time="2026-03-12T04:07:38.996980418Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Mar 12 04:07:39.002585 containerd[1621]: time="2026-03-12T04:07:38.997111325Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Mar 12 04:07:39.002585 containerd[1621]: time="2026-03-12T04:07:38.997135717Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Mar 12 04:07:39.002585 containerd[1621]: time="2026-03-12T04:07:38.997420223Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 12 04:07:39.002585 containerd[1621]: time="2026-03-12T04:07:38.997445466Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Mar 12 04:07:39.002585 containerd[1621]: time="2026-03-12T04:07:38.997466423Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Mar 12 04:07:39.002585 containerd[1621]: time="2026-03-12T04:07:38.997483239Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." 
type=io.containerd.snapshotter.v1 Mar 12 04:07:39.003688 containerd[1621]: time="2026-03-12T04:07:39.003179922Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Mar 12 04:07:39.006580 containerd[1621]: time="2026-03-12T04:07:39.004226163Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Mar 12 04:07:39.004652 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 12 04:07:39.008639 containerd[1621]: time="2026-03-12T04:07:39.007084799Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 12 04:07:39.008745 containerd[1621]: time="2026-03-12T04:07:39.008717981Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Mar 12 04:07:39.009955 containerd[1621]: time="2026-03-12T04:07:39.009006085Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Mar 12 04:07:39.010584 containerd[1621]: time="2026-03-12T04:07:39.010130624Z" level=info msg="metadata content store policy set" policy=shared Mar 12 04:07:39.031824 containerd[1621]: time="2026-03-12T04:07:39.031765231Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Mar 12 04:07:39.033401 containerd[1621]: time="2026-03-12T04:07:39.033368766Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Mar 12 04:07:39.033527 containerd[1621]: time="2026-03-12T04:07:39.033501251Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." 
type=io.containerd.lease.v1 Mar 12 04:07:39.034937 containerd[1621]: time="2026-03-12T04:07:39.034906042Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Mar 12 04:07:39.035070 containerd[1621]: time="2026-03-12T04:07:39.035044256Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Mar 12 04:07:39.035444 containerd[1621]: time="2026-03-12T04:07:39.035416026Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Mar 12 04:07:39.040586 containerd[1621]: time="2026-03-12T04:07:39.039714564Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Mar 12 04:07:39.040586 containerd[1621]: time="2026-03-12T04:07:39.039961231Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Mar 12 04:07:39.040586 containerd[1621]: time="2026-03-12T04:07:39.039990865Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Mar 12 04:07:39.040586 containerd[1621]: time="2026-03-12T04:07:39.040014472Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Mar 12 04:07:39.040586 containerd[1621]: time="2026-03-12T04:07:39.040037018Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Mar 12 04:07:39.040586 containerd[1621]: time="2026-03-12T04:07:39.040065472Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Mar 12 04:07:39.040586 containerd[1621]: time="2026-03-12T04:07:39.040145555Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." 
type=io.containerd.service.v1 Mar 12 04:07:39.040586 containerd[1621]: time="2026-03-12T04:07:39.040177808Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Mar 12 04:07:39.040586 containerd[1621]: time="2026-03-12T04:07:39.040202513Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Mar 12 04:07:39.040586 containerd[1621]: time="2026-03-12T04:07:39.040223546Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Mar 12 04:07:39.040586 containerd[1621]: time="2026-03-12T04:07:39.040254962Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Mar 12 04:07:39.040586 containerd[1621]: time="2026-03-12T04:07:39.040277962Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Mar 12 04:07:39.040586 containerd[1621]: time="2026-03-12T04:07:39.040330753Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Mar 12 04:07:39.040586 containerd[1621]: time="2026-03-12T04:07:39.040385809Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Mar 12 04:07:39.041206 containerd[1621]: time="2026-03-12T04:07:39.040411976Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Mar 12 04:07:39.041206 containerd[1621]: time="2026-03-12T04:07:39.040433734Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Mar 12 04:07:39.041206 containerd[1621]: time="2026-03-12T04:07:39.040462497Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." 
type=io.containerd.grpc.v1 Mar 12 04:07:39.041206 containerd[1621]: time="2026-03-12T04:07:39.040494217Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Mar 12 04:07:39.041206 containerd[1621]: time="2026-03-12T04:07:39.040515873Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Mar 12 04:07:39.041206 containerd[1621]: time="2026-03-12T04:07:39.040536273Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Mar 12 04:07:39.043195 systemd[1]: issuegen.service: Deactivated successfully. Mar 12 04:07:39.043630 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 12 04:07:39.049304 containerd[1621]: time="2026-03-12T04:07:39.047723367Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Mar 12 04:07:39.049304 containerd[1621]: time="2026-03-12T04:07:39.047799666Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Mar 12 04:07:39.049304 containerd[1621]: time="2026-03-12T04:07:39.047830487Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Mar 12 04:07:39.049304 containerd[1621]: time="2026-03-12T04:07:39.047866168Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Mar 12 04:07:39.049304 containerd[1621]: time="2026-03-12T04:07:39.047890688Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Mar 12 04:07:39.049304 containerd[1621]: time="2026-03-12T04:07:39.047927047Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Mar 12 04:07:39.049304 containerd[1621]: time="2026-03-12T04:07:39.047986529Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." 
type=io.containerd.grpc.v1 Mar 12 04:07:39.049304 containerd[1621]: time="2026-03-12T04:07:39.048013489Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Mar 12 04:07:39.049304 containerd[1621]: time="2026-03-12T04:07:39.048032732Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Mar 12 04:07:39.049304 containerd[1621]: time="2026-03-12T04:07:39.048140347Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Mar 12 04:07:39.049304 containerd[1621]: time="2026-03-12T04:07:39.048181575Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Mar 12 04:07:39.049304 containerd[1621]: time="2026-03-12T04:07:39.048202368Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Mar 12 04:07:39.049304 containerd[1621]: time="2026-03-12T04:07:39.048221967Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Mar 12 04:07:39.049865 containerd[1621]: time="2026-03-12T04:07:39.048238502Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Mar 12 04:07:39.049865 containerd[1621]: time="2026-03-12T04:07:39.048269024Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Mar 12 04:07:39.049865 containerd[1621]: time="2026-03-12T04:07:39.048293816Z" level=info msg="NRI interface is disabled by configuration." Mar 12 04:07:39.049865 containerd[1621]: time="2026-03-12T04:07:39.048313139Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Mar 12 04:07:39.055275 containerd[1621]: time="2026-03-12T04:07:39.051942110Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 
DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Mar 12 04:07:39.055275 containerd[1621]: time="2026-03-12T04:07:39.052055526Z" level=info msg="Connect containerd service" Mar 12 04:07:39.055275 containerd[1621]: time="2026-03-12T04:07:39.052159529Z" level=info msg="using legacy CRI server" Mar 12 04:07:39.055275 containerd[1621]: time="2026-03-12T04:07:39.052188536Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 12 04:07:39.055275 containerd[1621]: time="2026-03-12T04:07:39.052413636Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Mar 12 04:07:39.056350 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
Mar 12 04:07:39.062348 containerd[1621]: time="2026-03-12T04:07:39.062307637Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 12 04:07:39.063639 containerd[1621]: time="2026-03-12T04:07:39.063489843Z" level=info msg="Start subscribing containerd event" Mar 12 04:07:39.063728 containerd[1621]: time="2026-03-12T04:07:39.063697941Z" level=info msg="Start recovering state" Mar 12 04:07:39.063890 containerd[1621]: time="2026-03-12T04:07:39.063862548Z" level=info msg="Start event monitor" Mar 12 04:07:39.064331 containerd[1621]: time="2026-03-12T04:07:39.064303105Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 12 04:07:39.065327 containerd[1621]: time="2026-03-12T04:07:39.064553259Z" level=info msg="Start snapshots syncer" Mar 12 04:07:39.065398 containerd[1621]: time="2026-03-12T04:07:39.065340697Z" level=info msg="Start cni network conf syncer for default" Mar 12 04:07:39.065398 containerd[1621]: time="2026-03-12T04:07:39.065368139Z" level=info msg="Start streaming server" Mar 12 04:07:39.066710 containerd[1621]: time="2026-03-12T04:07:39.066682156Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 12 04:07:39.067915 containerd[1621]: time="2026-03-12T04:07:39.067887722Z" level=info msg="containerd successfully booted in 0.164437s" Mar 12 04:07:39.073043 systemd[1]: Started containerd.service - containerd container runtime. Mar 12 04:07:39.161831 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 12 04:07:39.180178 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 12 04:07:39.190129 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Mar 12 04:07:39.192553 systemd[1]: Reached target getty.target - Login Prompts. 
Mar 12 04:07:39.847269 tar[1617]: linux-amd64/README.md Mar 12 04:07:39.873652 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 12 04:07:40.447824 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 12 04:07:40.469406 (kubelet)[1732]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 12 04:07:41.277896 kubelet[1732]: E0312 04:07:41.277795 1732 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 12 04:07:41.280467 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 12 04:07:41.280818 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 12 04:07:41.960110 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 12 04:07:41.970176 systemd[1]: Started sshd@0-10.244.26.218:22-20.161.92.111:40224.service - OpenSSH per-connection server daemon (20.161.92.111:40224). Mar 12 04:07:42.541680 sshd[1741]: Accepted publickey for core from 20.161.92.111 port 40224 ssh2: RSA SHA256:Og1sBJQhpCaSrAUaqgWUKLRz71/5xOULak8g1URRdac Mar 12 04:07:42.544682 sshd[1741]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 04:07:42.565339 systemd-logind[1599]: New session 1 of user core. Mar 12 04:07:42.568504 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 12 04:07:42.585056 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 12 04:07:42.612948 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 12 04:07:42.625428 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Mar 12 04:07:42.648082 (systemd)[1747]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 12 04:07:42.807415 systemd[1747]: Queued start job for default target default.target. Mar 12 04:07:42.808656 systemd[1747]: Created slice app.slice - User Application Slice. Mar 12 04:07:42.809056 systemd[1747]: Reached target paths.target - Paths. Mar 12 04:07:42.809224 systemd[1747]: Reached target timers.target - Timers. Mar 12 04:07:42.814666 systemd[1747]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 12 04:07:42.826594 systemd[1747]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 12 04:07:42.826677 systemd[1747]: Reached target sockets.target - Sockets. Mar 12 04:07:42.826713 systemd[1747]: Reached target basic.target - Basic System. Mar 12 04:07:42.826805 systemd[1747]: Reached target default.target - Main User Target. Mar 12 04:07:42.826877 systemd[1747]: Startup finished in 167ms. Mar 12 04:07:42.827045 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 12 04:07:42.841339 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 12 04:07:43.261584 systemd[1]: Started sshd@1-10.244.26.218:22-20.161.92.111:40238.service - OpenSSH per-connection server daemon (20.161.92.111:40238). Mar 12 04:07:43.816166 sshd[1759]: Accepted publickey for core from 20.161.92.111 port 40238 ssh2: RSA SHA256:Og1sBJQhpCaSrAUaqgWUKLRz71/5xOULak8g1URRdac Mar 12 04:07:43.818304 sshd[1759]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 04:07:43.827447 systemd-logind[1599]: New session 2 of user core. Mar 12 04:07:43.835125 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 12 04:07:44.224908 sshd[1759]: pam_unix(sshd:session): session closed for user core Mar 12 04:07:44.230333 systemd[1]: sshd@1-10.244.26.218:22-20.161.92.111:40238.service: Deactivated successfully. Mar 12 04:07:44.239965 systemd[1]: session-2.scope: Deactivated successfully. 
Mar 12 04:07:44.240945 systemd-logind[1599]: Session 2 logged out. Waiting for processes to exit. Mar 12 04:07:44.245587 systemd-logind[1599]: Removed session 2. Mar 12 04:07:44.270184 login[1716]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Mar 12 04:07:44.271745 login[1717]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Mar 12 04:07:44.280945 systemd-logind[1599]: New session 3 of user core. Mar 12 04:07:44.293055 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 12 04:07:44.299261 systemd-logind[1599]: New session 4 of user core. Mar 12 04:07:44.300432 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 12 04:07:44.321963 systemd[1]: Started sshd@2-10.244.26.218:22-20.161.92.111:40254.service - OpenSSH per-connection server daemon (20.161.92.111:40254). Mar 12 04:07:44.884961 sshd[1773]: Accepted publickey for core from 20.161.92.111 port 40254 ssh2: RSA SHA256:Og1sBJQhpCaSrAUaqgWUKLRz71/5xOULak8g1URRdac Mar 12 04:07:44.886921 sshd[1773]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 04:07:44.892723 systemd-logind[1599]: New session 5 of user core. Mar 12 04:07:44.908226 systemd[1]: Started session-5.scope - Session 5 of User core. 
Mar 12 04:07:44.938589 coreos-metadata[1574]: Mar 12 04:07:44.937 WARN failed to locate config-drive, using the metadata service API instead Mar 12 04:07:44.963936 coreos-metadata[1574]: Mar 12 04:07:44.963 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Mar 12 04:07:44.973205 coreos-metadata[1574]: Mar 12 04:07:44.973 INFO Fetch failed with 404: resource not found Mar 12 04:07:44.973205 coreos-metadata[1574]: Mar 12 04:07:44.973 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Mar 12 04:07:44.974185 coreos-metadata[1574]: Mar 12 04:07:44.974 INFO Fetch successful Mar 12 04:07:44.974442 coreos-metadata[1574]: Mar 12 04:07:44.974 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Mar 12 04:07:44.989439 coreos-metadata[1574]: Mar 12 04:07:44.989 INFO Fetch successful Mar 12 04:07:44.989633 coreos-metadata[1574]: Mar 12 04:07:44.989 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Mar 12 04:07:45.007152 coreos-metadata[1574]: Mar 12 04:07:45.006 INFO Fetch successful Mar 12 04:07:45.007314 coreos-metadata[1574]: Mar 12 04:07:45.007 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Mar 12 04:07:45.023828 coreos-metadata[1574]: Mar 12 04:07:45.023 INFO Fetch successful Mar 12 04:07:45.024025 coreos-metadata[1574]: Mar 12 04:07:45.023 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Mar 12 04:07:45.040102 coreos-metadata[1574]: Mar 12 04:07:45.040 INFO Fetch successful Mar 12 04:07:45.077349 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Mar 12 04:07:45.080107 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
Mar 12 04:07:45.286883 sshd[1773]: pam_unix(sshd:session): session closed for user core Mar 12 04:07:45.292543 systemd[1]: sshd@2-10.244.26.218:22-20.161.92.111:40254.service: Deactivated successfully. Mar 12 04:07:45.296254 systemd[1]: session-5.scope: Deactivated successfully. Mar 12 04:07:45.296591 systemd-logind[1599]: Session 5 logged out. Waiting for processes to exit. Mar 12 04:07:45.299694 systemd-logind[1599]: Removed session 5. Mar 12 04:07:45.754407 coreos-metadata[1673]: Mar 12 04:07:45.753 WARN failed to locate config-drive, using the metadata service API instead Mar 12 04:07:45.776846 coreos-metadata[1673]: Mar 12 04:07:45.776 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Mar 12 04:07:45.800726 coreos-metadata[1673]: Mar 12 04:07:45.800 INFO Fetch successful Mar 12 04:07:45.800968 coreos-metadata[1673]: Mar 12 04:07:45.800 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Mar 12 04:07:45.827100 coreos-metadata[1673]: Mar 12 04:07:45.827 INFO Fetch successful Mar 12 04:07:45.829219 unknown[1673]: wrote ssh authorized keys file for user: core Mar 12 04:07:45.852341 update-ssh-keys[1816]: Updated "/home/core/.ssh/authorized_keys" Mar 12 04:07:45.853369 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Mar 12 04:07:45.862067 systemd[1]: Finished sshkeys.service. Mar 12 04:07:45.867435 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 12 04:07:45.868059 systemd[1]: Startup finished in 16.570s (kernel) + 13.630s (userspace) = 30.201s. Mar 12 04:07:51.531250 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 12 04:07:51.546049 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 12 04:07:51.916760 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 12 04:07:51.921760 (kubelet)[1835]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 12 04:07:52.006148 kubelet[1835]: E0312 04:07:52.006041 1835 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 12 04:07:52.011833 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 12 04:07:52.012191 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 12 04:07:55.384979 systemd[1]: Started sshd@3-10.244.26.218:22-20.161.92.111:47508.service - OpenSSH per-connection server daemon (20.161.92.111:47508).
Mar 12 04:07:55.953820 sshd[1842]: Accepted publickey for core from 20.161.92.111 port 47508 ssh2: RSA SHA256:Og1sBJQhpCaSrAUaqgWUKLRz71/5xOULak8g1URRdac
Mar 12 04:07:55.955883 sshd[1842]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 04:07:55.964870 systemd-logind[1599]: New session 6 of user core.
Mar 12 04:07:55.970055 systemd[1]: Started session-6.scope - Session 6 of User core.
Mar 12 04:07:56.351940 sshd[1842]: pam_unix(sshd:session): session closed for user core
Mar 12 04:07:56.357177 systemd[1]: sshd@3-10.244.26.218:22-20.161.92.111:47508.service: Deactivated successfully.
Mar 12 04:07:56.357330 systemd-logind[1599]: Session 6 logged out. Waiting for processes to exit.
Mar 12 04:07:56.363011 systemd[1]: session-6.scope: Deactivated successfully.
Mar 12 04:07:56.364391 systemd-logind[1599]: Removed session 6.
Mar 12 04:07:56.455864 systemd[1]: Started sshd@4-10.244.26.218:22-20.161.92.111:47510.service - OpenSSH per-connection server daemon (20.161.92.111:47510).
Mar 12 04:07:57.016637 sshd[1850]: Accepted publickey for core from 20.161.92.111 port 47510 ssh2: RSA SHA256:Og1sBJQhpCaSrAUaqgWUKLRz71/5xOULak8g1URRdac
Mar 12 04:07:57.018696 sshd[1850]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 04:07:57.024927 systemd-logind[1599]: New session 7 of user core.
Mar 12 04:07:57.033033 systemd[1]: Started session-7.scope - Session 7 of User core.
Mar 12 04:07:57.412966 sshd[1850]: pam_unix(sshd:session): session closed for user core
Mar 12 04:07:57.420452 systemd[1]: sshd@4-10.244.26.218:22-20.161.92.111:47510.service: Deactivated successfully.
Mar 12 04:07:57.422041 systemd-logind[1599]: Session 7 logged out. Waiting for processes to exit.
Mar 12 04:07:57.425312 systemd[1]: session-7.scope: Deactivated successfully.
Mar 12 04:07:57.427101 systemd-logind[1599]: Removed session 7.
Mar 12 04:07:57.513998 systemd[1]: Started sshd@5-10.244.26.218:22-20.161.92.111:47514.service - OpenSSH per-connection server daemon (20.161.92.111:47514).
Mar 12 04:07:58.065495 sshd[1858]: Accepted publickey for core from 20.161.92.111 port 47514 ssh2: RSA SHA256:Og1sBJQhpCaSrAUaqgWUKLRz71/5xOULak8g1URRdac
Mar 12 04:07:58.068078 sshd[1858]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 04:07:58.076642 systemd-logind[1599]: New session 8 of user core.
Mar 12 04:07:58.085433 systemd[1]: Started session-8.scope - Session 8 of User core.
Mar 12 04:07:58.464144 sshd[1858]: pam_unix(sshd:session): session closed for user core
Mar 12 04:07:58.470386 systemd[1]: sshd@5-10.244.26.218:22-20.161.92.111:47514.service: Deactivated successfully.
Mar 12 04:07:58.474536 systemd-logind[1599]: Session 8 logged out. Waiting for processes to exit.
Mar 12 04:07:58.476316 systemd[1]: session-8.scope: Deactivated successfully.
Mar 12 04:07:58.477967 systemd-logind[1599]: Removed session 8.
Mar 12 04:07:58.566030 systemd[1]: Started sshd@6-10.244.26.218:22-20.161.92.111:47522.service - OpenSSH per-connection server daemon (20.161.92.111:47522).
Mar 12 04:07:59.123613 sshd[1866]: Accepted publickey for core from 20.161.92.111 port 47522 ssh2: RSA SHA256:Og1sBJQhpCaSrAUaqgWUKLRz71/5xOULak8g1URRdac
Mar 12 04:07:59.126017 sshd[1866]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 04:07:59.134235 systemd-logind[1599]: New session 9 of user core.
Mar 12 04:07:59.140258 systemd[1]: Started session-9.scope - Session 9 of User core.
Mar 12 04:07:59.449707 sudo[1870]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Mar 12 04:07:59.450290 sudo[1870]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 12 04:07:59.474750 sudo[1870]: pam_unix(sudo:session): session closed for user root
Mar 12 04:07:59.564146 sshd[1866]: pam_unix(sshd:session): session closed for user core
Mar 12 04:07:59.570501 systemd[1]: sshd@6-10.244.26.218:22-20.161.92.111:47522.service: Deactivated successfully.
Mar 12 04:07:59.575106 systemd-logind[1599]: Session 9 logged out. Waiting for processes to exit.
Mar 12 04:07:59.575398 systemd[1]: session-9.scope: Deactivated successfully.
Mar 12 04:07:59.578392 systemd-logind[1599]: Removed session 9.
Mar 12 04:07:59.668459 systemd[1]: Started sshd@7-10.244.26.218:22-20.161.92.111:47532.service - OpenSSH per-connection server daemon (20.161.92.111:47532).
Mar 12 04:08:00.243628 sshd[1875]: Accepted publickey for core from 20.161.92.111 port 47532 ssh2: RSA SHA256:Og1sBJQhpCaSrAUaqgWUKLRz71/5xOULak8g1URRdac
Mar 12 04:08:00.245376 sshd[1875]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 04:08:00.252122 systemd-logind[1599]: New session 10 of user core.
Mar 12 04:08:00.260051 systemd[1]: Started session-10.scope - Session 10 of User core.
Mar 12 04:08:00.563681 sudo[1880]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Mar 12 04:08:00.564192 sudo[1880]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 12 04:08:00.570335 sudo[1880]: pam_unix(sudo:session): session closed for user root
Mar 12 04:08:00.580791 sudo[1879]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Mar 12 04:08:00.581282 sudo[1879]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 12 04:08:00.602034 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Mar 12 04:08:00.620147 auditctl[1883]: No rules
Mar 12 04:08:00.621075 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 12 04:08:00.621508 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Mar 12 04:08:00.636276 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Mar 12 04:08:00.680086 augenrules[1902]: No rules
Mar 12 04:08:00.682116 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Mar 12 04:08:00.686046 sudo[1879]: pam_unix(sudo:session): session closed for user root
Mar 12 04:08:00.782942 sshd[1875]: pam_unix(sshd:session): session closed for user core
Mar 12 04:08:00.789117 systemd[1]: sshd@7-10.244.26.218:22-20.161.92.111:47532.service: Deactivated successfully.
Mar 12 04:08:00.797944 systemd[1]: session-10.scope: Deactivated successfully.
Mar 12 04:08:00.799686 systemd-logind[1599]: Session 10 logged out. Waiting for processes to exit.
Mar 12 04:08:00.805973 systemd-logind[1599]: Removed session 10.
Mar 12 04:08:00.874028 systemd[1]: Started sshd@8-10.244.26.218:22-20.161.92.111:56344.service - OpenSSH per-connection server daemon (20.161.92.111:56344).
Mar 12 04:08:01.435083 sshd[1911]: Accepted publickey for core from 20.161.92.111 port 56344 ssh2: RSA SHA256:Og1sBJQhpCaSrAUaqgWUKLRz71/5xOULak8g1URRdac
Mar 12 04:08:01.437606 sshd[1911]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 04:08:01.445602 systemd-logind[1599]: New session 11 of user core.
Mar 12 04:08:01.452286 systemd[1]: Started session-11.scope - Session 11 of User core.
Mar 12 04:08:01.746573 sudo[1915]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Mar 12 04:08:01.747074 sudo[1915]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 12 04:08:02.057429 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Mar 12 04:08:02.084978 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 12 04:08:02.396846 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 12 04:08:02.412244 (kubelet)[1942]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 12 04:08:02.508352 kubelet[1942]: E0312 04:08:02.506352 1942 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 12 04:08:02.511148 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 12 04:08:02.511487 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 12 04:08:02.662990 systemd[1]: Starting docker.service - Docker Application Container Engine...
Mar 12 04:08:02.682713 (dockerd)[1950]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Mar 12 04:08:03.477617 dockerd[1950]: time="2026-03-12T04:08:03.476302386Z" level=info msg="Starting up"
Mar 12 04:08:03.631163 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3453981871-merged.mount: Deactivated successfully.
Mar 12 04:08:03.802113 dockerd[1950]: time="2026-03-12T04:08:03.801819105Z" level=info msg="Loading containers: start."
Mar 12 04:08:03.972852 kernel: Initializing XFRM netlink socket
Mar 12 04:08:04.086832 systemd-networkd[1260]: docker0: Link UP
Mar 12 04:08:04.109528 dockerd[1950]: time="2026-03-12T04:08:04.109266864Z" level=info msg="Loading containers: done."
Mar 12 04:08:04.135243 dockerd[1950]: time="2026-03-12T04:08:04.135129966Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Mar 12 04:08:04.135439 dockerd[1950]: time="2026-03-12T04:08:04.135291576Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Mar 12 04:08:04.135509 dockerd[1950]: time="2026-03-12T04:08:04.135481284Z" level=info msg="Daemon has completed initialization"
Mar 12 04:08:04.195182 dockerd[1950]: time="2026-03-12T04:08:04.195002415Z" level=info msg="API listen on /run/docker.sock"
Mar 12 04:08:04.196261 systemd[1]: Started docker.service - Docker Application Container Engine.
Mar 12 04:08:04.957689 containerd[1621]: time="2026-03-12T04:08:04.957312187Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\""
Mar 12 04:08:05.722057 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1308307156.mount: Deactivated successfully.
Mar 12 04:08:08.425947 containerd[1621]: time="2026-03-12T04:08:08.425833003Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 04:08:08.428793 containerd[1621]: time="2026-03-12T04:08:08.428742407Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.9: active requests=0, bytes read=30116194"
Mar 12 04:08:08.429757 containerd[1621]: time="2026-03-12T04:08:08.429708380Z" level=info msg="ImageCreate event name:\"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 04:08:08.435978 containerd[1621]: time="2026-03-12T04:08:08.435338917Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 04:08:08.437592 containerd[1621]: time="2026-03-12T04:08:08.436929150Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.9\" with image id \"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\", size \"30112785\" in 3.479457267s"
Mar 12 04:08:08.437592 containerd[1621]: time="2026-03-12T04:08:08.437007510Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\" returns image reference \"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\""
Mar 12 04:08:08.438774 containerd[1621]: time="2026-03-12T04:08:08.438731914Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\""
Mar 12 04:08:09.017417 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Mar 12 04:08:11.094801 containerd[1621]: time="2026-03-12T04:08:11.094725963Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 04:08:11.098034 containerd[1621]: time="2026-03-12T04:08:11.097980693Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.9: active requests=0, bytes read=26021818"
Mar 12 04:08:11.098635 containerd[1621]: time="2026-03-12T04:08:11.098360079Z" level=info msg="ImageCreate event name:\"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 04:08:11.103236 containerd[1621]: time="2026-03-12T04:08:11.103193526Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 04:08:11.104961 containerd[1621]: time="2026-03-12T04:08:11.104920649Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.9\" with image id \"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\", size \"27678758\" in 2.666138239s"
Mar 12 04:08:11.105053 containerd[1621]: time="2026-03-12T04:08:11.104967085Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\" returns image reference \"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\""
Mar 12 04:08:11.105924 containerd[1621]: time="2026-03-12T04:08:11.105888362Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\""
Mar 12 04:08:12.543474 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Mar 12 04:08:12.555906 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 12 04:08:12.842318 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 12 04:08:12.855329 (kubelet)[2174]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 12 04:08:12.992155 kubelet[2174]: E0312 04:08:12.992077 2174 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 12 04:08:12.994845 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 12 04:08:12.995154 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 12 04:08:13.479182 containerd[1621]: time="2026-03-12T04:08:13.479076912Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 04:08:13.481144 containerd[1621]: time="2026-03-12T04:08:13.480979731Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.9: active requests=0, bytes read=20162754"
Mar 12 04:08:13.483591 containerd[1621]: time="2026-03-12T04:08:13.482025242Z" level=info msg="ImageCreate event name:\"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 04:08:13.489979 containerd[1621]: time="2026-03-12T04:08:13.489932745Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 04:08:13.491881 containerd[1621]: time="2026-03-12T04:08:13.491842671Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.9\" with image id \"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\", size \"21819712\" in 2.385775436s"
Mar 12 04:08:13.492028 containerd[1621]: time="2026-03-12T04:08:13.492000468Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\" returns image reference \"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\""
Mar 12 04:08:13.492993 containerd[1621]: time="2026-03-12T04:08:13.492847974Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\""
Mar 12 04:08:15.207610 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1832054081.mount: Deactivated successfully.
Mar 12 04:08:16.310289 containerd[1621]: time="2026-03-12T04:08:16.309168937Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 04:08:16.310289 containerd[1621]: time="2026-03-12T04:08:16.310235479Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.9: active requests=0, bytes read=31828655"
Mar 12 04:08:16.311264 containerd[1621]: time="2026-03-12T04:08:16.311196511Z" level=info msg="ImageCreate event name:\"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 04:08:16.314159 containerd[1621]: time="2026-03-12T04:08:16.314099169Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 04:08:16.316368 containerd[1621]: time="2026-03-12T04:08:16.315264511Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.9\" with image id \"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\", repo tag \"registry.k8s.io/kube-proxy:v1.33.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\", size \"31827666\" in 2.822160264s"
Mar 12 04:08:16.316368 containerd[1621]: time="2026-03-12T04:08:16.315334455Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\" returns image reference \"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\""
Mar 12 04:08:16.316949 containerd[1621]: time="2026-03-12T04:08:16.316915923Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Mar 12 04:08:17.180342 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount45393582.mount: Deactivated successfully.
Mar 12 04:08:20.217195 containerd[1621]: time="2026-03-12T04:08:20.217132852Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 04:08:20.221588 containerd[1621]: time="2026-03-12T04:08:20.219727922Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 04:08:20.221588 containerd[1621]: time="2026-03-12T04:08:20.219789776Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942246"
Mar 12 04:08:20.224109 containerd[1621]: time="2026-03-12T04:08:20.224076077Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 04:08:20.225885 containerd[1621]: time="2026-03-12T04:08:20.225841509Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 3.90862657s"
Mar 12 04:08:20.226019 containerd[1621]: time="2026-03-12T04:08:20.225890268Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\""
Mar 12 04:08:20.226675 containerd[1621]: time="2026-03-12T04:08:20.226642149Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Mar 12 04:08:20.848098 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1678031491.mount: Deactivated successfully.
Mar 12 04:08:20.855287 containerd[1621]: time="2026-03-12T04:08:20.855201027Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 04:08:20.856977 containerd[1621]: time="2026-03-12T04:08:20.856499891Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146"
Mar 12 04:08:20.861548 containerd[1621]: time="2026-03-12T04:08:20.860773682Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 04:08:20.864308 containerd[1621]: time="2026-03-12T04:08:20.864267311Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 04:08:20.865575 containerd[1621]: time="2026-03-12T04:08:20.865525797Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 638.834767ms"
Mar 12 04:08:20.865771 containerd[1621]: time="2026-03-12T04:08:20.865740351Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Mar 12 04:08:20.866739 containerd[1621]: time="2026-03-12T04:08:20.866693447Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\""
Mar 12 04:08:21.530721 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1807726780.mount: Deactivated successfully.
Mar 12 04:08:22.681664 update_engine[1609]: I20260312 04:08:22.681056 1609 update_attempter.cc:509] Updating boot flags...
Mar 12 04:08:22.787627 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2301)
Mar 12 04:08:23.027366 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Mar 12 04:08:23.037988 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 12 04:08:23.142638 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2301)
Mar 12 04:08:23.632163 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 12 04:08:23.646053 (kubelet)[2328]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 12 04:08:23.813104 kubelet[2328]: E0312 04:08:23.812963 2328 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 12 04:08:23.816972 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 12 04:08:23.817481 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 12 04:08:23.849246 containerd[1621]: time="2026-03-12T04:08:23.849165042Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 04:08:23.852887 containerd[1621]: time="2026-03-12T04:08:23.852757608Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=23718848"
Mar 12 04:08:23.854578 containerd[1621]: time="2026-03-12T04:08:23.854159335Z" level=info msg="ImageCreate event name:\"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 04:08:23.861427 containerd[1621]: time="2026-03-12T04:08:23.861372655Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 04:08:23.863727 containerd[1621]: time="2026-03-12T04:08:23.863655527Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"23716032\" in 2.996654794s"
Mar 12 04:08:23.863862 containerd[1621]: time="2026-03-12T04:08:23.863729634Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\""
Mar 12 04:08:29.713597 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 12 04:08:29.728076 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 12 04:08:29.768791 systemd[1]: Reloading requested from client PID 2371 ('systemctl') (unit session-11.scope)...
Mar 12 04:08:29.769056 systemd[1]: Reloading...
Mar 12 04:08:29.964643 zram_generator::config[2411]: No configuration found.
Mar 12 04:08:30.131708 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 12 04:08:30.240256 systemd[1]: Reloading finished in 470 ms.
Mar 12 04:08:30.305545 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Mar 12 04:08:30.305905 systemd[1]: kubelet.service: Failed with result 'signal'.
Mar 12 04:08:30.306856 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 12 04:08:30.318772 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 12 04:08:30.476820 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 12 04:08:30.490166 (kubelet)[2487]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 12 04:08:30.633194 kubelet[2487]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 12 04:08:30.634043 kubelet[2487]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Mar 12 04:08:30.634043 kubelet[2487]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 12 04:08:30.636588 kubelet[2487]: I0312 04:08:30.635746 2487 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 12 04:08:32.331623 kubelet[2487]: I0312 04:08:32.331327 2487 server.go:530] "Kubelet version" kubeletVersion="v1.33.8"
Mar 12 04:08:32.331623 kubelet[2487]: I0312 04:08:32.331376 2487 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 12 04:08:32.332262 kubelet[2487]: I0312 04:08:32.331821 2487 server.go:956] "Client rotation is on, will bootstrap in background"
Mar 12 04:08:32.366845 kubelet[2487]: E0312 04:08:32.366779 2487 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.244.26.218:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.244.26.218:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Mar 12 04:08:32.369612 kubelet[2487]: I0312 04:08:32.369322 2487 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 12 04:08:32.385171 kubelet[2487]: E0312 04:08:32.385094 2487 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Mar 12 04:08:32.385171 kubelet[2487]: I0312 04:08:32.385165 2487 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Mar 12 04:08:32.396291 kubelet[2487]: I0312 04:08:32.396222 2487 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Mar 12 04:08:32.399459 kubelet[2487]: I0312 04:08:32.399355 2487 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 12 04:08:32.402663 kubelet[2487]: I0312 04:08:32.399423 2487 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-faxgs.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"conta
iner","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Mar 12 04:08:32.403036 kubelet[2487]: I0312 04:08:32.402683 2487 topology_manager.go:138] "Creating topology manager with none policy" Mar 12 04:08:32.403036 kubelet[2487]: I0312 04:08:32.402706 2487 container_manager_linux.go:303] "Creating device plugin manager" Mar 12 04:08:32.403036 kubelet[2487]: I0312 04:08:32.402993 2487 state_mem.go:36] "Initialized new in-memory state store" Mar 12 04:08:32.410426 kubelet[2487]: I0312 04:08:32.410339 2487 kubelet.go:480] "Attempting to sync node with API server" Mar 12 04:08:32.410426 kubelet[2487]: I0312 04:08:32.410388 2487 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 12 04:08:32.410621 kubelet[2487]: I0312 04:08:32.410471 2487 kubelet.go:386] "Adding apiserver pod source" Mar 12 04:08:32.413896 kubelet[2487]: I0312 04:08:32.413823 2487 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 12 04:08:32.424621 kubelet[2487]: E0312 04:08:32.423319 2487 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.244.26.218:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-faxgs.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.244.26.218:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 12 04:08:32.424621 kubelet[2487]: E0312 04:08:32.423894 2487 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.244.26.218:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.244.26.218:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 12 
04:08:32.426225 kubelet[2487]: I0312 04:08:32.425299 2487 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 12 04:08:32.426225 kubelet[2487]: I0312 04:08:32.426083 2487 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 12 04:08:32.428173 kubelet[2487]: W0312 04:08:32.428149 2487 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Mar 12 04:08:32.438589 kubelet[2487]: I0312 04:08:32.437847 2487 watchdog_linux.go:99] "Systemd watchdog is not enabled" Mar 12 04:08:32.438589 kubelet[2487]: I0312 04:08:32.437944 2487 server.go:1289] "Started kubelet" Mar 12 04:08:32.440652 kubelet[2487]: I0312 04:08:32.440629 2487 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 12 04:08:32.442990 kubelet[2487]: I0312 04:08:32.441390 2487 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 12 04:08:32.443489 kubelet[2487]: I0312 04:08:32.443461 2487 server.go:317] "Adding debug handlers to kubelet server" Mar 12 04:08:32.449518 kubelet[2487]: I0312 04:08:32.449372 2487 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 12 04:08:32.449809 kubelet[2487]: I0312 04:08:32.449774 2487 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 12 04:08:32.450155 kubelet[2487]: I0312 04:08:32.450120 2487 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 12 04:08:32.455524 kubelet[2487]: E0312 04:08:32.454190 2487 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.244.26.218:6443/api/v1/namespaces/default/events\": dial tcp 10.244.26.218:6443: connect: 
connection refused" event="&Event{ObjectMeta:{srv-faxgs.gb1.brightbox.com.189bfc81bc240763 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-faxgs.gb1.brightbox.com,UID:srv-faxgs.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-faxgs.gb1.brightbox.com,},FirstTimestamp:2026-03-12 04:08:32.437880675 +0000 UTC m=+1.852013040,LastTimestamp:2026-03-12 04:08:32.437880675 +0000 UTC m=+1.852013040,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-faxgs.gb1.brightbox.com,}" Mar 12 04:08:32.458511 kubelet[2487]: I0312 04:08:32.456979 2487 volume_manager.go:297] "Starting Kubelet Volume Manager" Mar 12 04:08:32.458511 kubelet[2487]: I0312 04:08:32.457222 2487 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Mar 12 04:08:32.458511 kubelet[2487]: I0312 04:08:32.457379 2487 reconciler.go:26] "Reconciler: start to sync state" Mar 12 04:08:32.458511 kubelet[2487]: E0312 04:08:32.458086 2487 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.244.26.218:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.244.26.218:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 12 04:08:32.458761 kubelet[2487]: E0312 04:08:32.458531 2487 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-faxgs.gb1.brightbox.com\" not found" Mar 12 04:08:32.458829 kubelet[2487]: E0312 04:08:32.458716 2487 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.26.218:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-faxgs.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.26.218:6443: 
connect: connection refused" interval="200ms" Mar 12 04:08:32.464667 kubelet[2487]: I0312 04:08:32.464635 2487 factory.go:223] Registration of the systemd container factory successfully Mar 12 04:08:32.464887 kubelet[2487]: I0312 04:08:32.464755 2487 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 12 04:08:32.466469 kubelet[2487]: E0312 04:08:32.466412 2487 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 12 04:08:32.467145 kubelet[2487]: I0312 04:08:32.467110 2487 factory.go:223] Registration of the containerd container factory successfully Mar 12 04:08:32.502993 kubelet[2487]: I0312 04:08:32.502909 2487 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Mar 12 04:08:32.507874 kubelet[2487]: I0312 04:08:32.507848 2487 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Mar 12 04:08:32.507987 kubelet[2487]: I0312 04:08:32.507909 2487 status_manager.go:230] "Starting to sync pod status with apiserver" Mar 12 04:08:32.507987 kubelet[2487]: I0312 04:08:32.507961 2487 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Mar 12 04:08:32.508111 kubelet[2487]: I0312 04:08:32.507989 2487 kubelet.go:2436] "Starting kubelet main sync loop" Mar 12 04:08:32.508111 kubelet[2487]: E0312 04:08:32.508059 2487 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 12 04:08:32.511349 kubelet[2487]: E0312 04:08:32.511188 2487 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.244.26.218:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.244.26.218:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 12 04:08:32.521499 kubelet[2487]: I0312 04:08:32.521469 2487 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 12 04:08:32.521682 kubelet[2487]: I0312 04:08:32.521495 2487 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 12 04:08:32.521682 kubelet[2487]: I0312 04:08:32.521613 2487 state_mem.go:36] "Initialized new in-memory state store" Mar 12 04:08:32.524216 kubelet[2487]: I0312 04:08:32.524194 2487 policy_none.go:49] "None policy: Start" Mar 12 04:08:32.524328 kubelet[2487]: I0312 04:08:32.524236 2487 memory_manager.go:186] "Starting memorymanager" policy="None" Mar 12 04:08:32.524328 kubelet[2487]: I0312 04:08:32.524270 2487 state_mem.go:35] "Initializing new in-memory state store" Mar 12 04:08:32.532209 kubelet[2487]: E0312 04:08:32.531582 2487 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 12 04:08:32.532209 kubelet[2487]: I0312 04:08:32.531864 2487 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 12 04:08:32.532209 kubelet[2487]: I0312 04:08:32.531894 2487 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 12 04:08:32.535204 kubelet[2487]: I0312 
04:08:32.535183 2487 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 12 04:08:32.537278 kubelet[2487]: E0312 04:08:32.537250 2487 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 12 04:08:32.537380 kubelet[2487]: E0312 04:08:32.537351 2487 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"srv-faxgs.gb1.brightbox.com\" not found" Mar 12 04:08:32.623246 kubelet[2487]: E0312 04:08:32.623081 2487 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-faxgs.gb1.brightbox.com\" not found" node="srv-faxgs.gb1.brightbox.com" Mar 12 04:08:32.636668 kubelet[2487]: E0312 04:08:32.633346 2487 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-faxgs.gb1.brightbox.com\" not found" node="srv-faxgs.gb1.brightbox.com" Mar 12 04:08:32.645036 kubelet[2487]: E0312 04:08:32.644967 2487 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-faxgs.gb1.brightbox.com\" not found" node="srv-faxgs.gb1.brightbox.com" Mar 12 04:08:32.645432 kubelet[2487]: I0312 04:08:32.645398 2487 kubelet_node_status.go:75] "Attempting to register node" node="srv-faxgs.gb1.brightbox.com" Mar 12 04:08:32.645974 kubelet[2487]: E0312 04:08:32.645927 2487 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.244.26.218:6443/api/v1/nodes\": dial tcp 10.244.26.218:6443: connect: connection refused" node="srv-faxgs.gb1.brightbox.com" Mar 12 04:08:32.658210 kubelet[2487]: I0312 04:08:32.658152 2487 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/65b4b0e09bf51f09a93f1f1600f7a38f-ca-certs\") pod 
\"kube-controller-manager-srv-faxgs.gb1.brightbox.com\" (UID: \"65b4b0e09bf51f09a93f1f1600f7a38f\") " pod="kube-system/kube-controller-manager-srv-faxgs.gb1.brightbox.com" Mar 12 04:08:32.658210 kubelet[2487]: I0312 04:08:32.658219 2487 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/65b4b0e09bf51f09a93f1f1600f7a38f-flexvolume-dir\") pod \"kube-controller-manager-srv-faxgs.gb1.brightbox.com\" (UID: \"65b4b0e09bf51f09a93f1f1600f7a38f\") " pod="kube-system/kube-controller-manager-srv-faxgs.gb1.brightbox.com" Mar 12 04:08:32.658487 kubelet[2487]: I0312 04:08:32.658257 2487 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/65b4b0e09bf51f09a93f1f1600f7a38f-kubeconfig\") pod \"kube-controller-manager-srv-faxgs.gb1.brightbox.com\" (UID: \"65b4b0e09bf51f09a93f1f1600f7a38f\") " pod="kube-system/kube-controller-manager-srv-faxgs.gb1.brightbox.com" Mar 12 04:08:32.658487 kubelet[2487]: I0312 04:08:32.658288 2487 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d22b9ea7670b4cf71413c025a0ad58c7-kubeconfig\") pod \"kube-scheduler-srv-faxgs.gb1.brightbox.com\" (UID: \"d22b9ea7670b4cf71413c025a0ad58c7\") " pod="kube-system/kube-scheduler-srv-faxgs.gb1.brightbox.com" Mar 12 04:08:32.658487 kubelet[2487]: I0312 04:08:32.658319 2487 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4ac8a32e1e464ea551d77795cd50ba57-k8s-certs\") pod \"kube-apiserver-srv-faxgs.gb1.brightbox.com\" (UID: \"4ac8a32e1e464ea551d77795cd50ba57\") " pod="kube-system/kube-apiserver-srv-faxgs.gb1.brightbox.com" Mar 12 04:08:32.658487 kubelet[2487]: I0312 04:08:32.658352 2487 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/65b4b0e09bf51f09a93f1f1600f7a38f-k8s-certs\") pod \"kube-controller-manager-srv-faxgs.gb1.brightbox.com\" (UID: \"65b4b0e09bf51f09a93f1f1600f7a38f\") " pod="kube-system/kube-controller-manager-srv-faxgs.gb1.brightbox.com" Mar 12 04:08:32.658487 kubelet[2487]: I0312 04:08:32.658382 2487 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/65b4b0e09bf51f09a93f1f1600f7a38f-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-faxgs.gb1.brightbox.com\" (UID: \"65b4b0e09bf51f09a93f1f1600f7a38f\") " pod="kube-system/kube-controller-manager-srv-faxgs.gb1.brightbox.com" Mar 12 04:08:32.658842 kubelet[2487]: I0312 04:08:32.658411 2487 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4ac8a32e1e464ea551d77795cd50ba57-ca-certs\") pod \"kube-apiserver-srv-faxgs.gb1.brightbox.com\" (UID: \"4ac8a32e1e464ea551d77795cd50ba57\") " pod="kube-system/kube-apiserver-srv-faxgs.gb1.brightbox.com" Mar 12 04:08:32.658842 kubelet[2487]: I0312 04:08:32.658442 2487 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4ac8a32e1e464ea551d77795cd50ba57-usr-share-ca-certificates\") pod \"kube-apiserver-srv-faxgs.gb1.brightbox.com\" (UID: \"4ac8a32e1e464ea551d77795cd50ba57\") " pod="kube-system/kube-apiserver-srv-faxgs.gb1.brightbox.com" Mar 12 04:08:32.660460 kubelet[2487]: E0312 04:08:32.660413 2487 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.26.218:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-faxgs.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.26.218:6443: connect: connection 
refused" interval="400ms" Mar 12 04:08:32.849681 kubelet[2487]: I0312 04:08:32.849114 2487 kubelet_node_status.go:75] "Attempting to register node" node="srv-faxgs.gb1.brightbox.com" Mar 12 04:08:32.849681 kubelet[2487]: E0312 04:08:32.849548 2487 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.244.26.218:6443/api/v1/nodes\": dial tcp 10.244.26.218:6443: connect: connection refused" node="srv-faxgs.gb1.brightbox.com" Mar 12 04:08:32.925753 containerd[1621]: time="2026-03-12T04:08:32.925082681Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-faxgs.gb1.brightbox.com,Uid:d22b9ea7670b4cf71413c025a0ad58c7,Namespace:kube-system,Attempt:0,}" Mar 12 04:08:32.938191 containerd[1621]: time="2026-03-12T04:08:32.938079858Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-faxgs.gb1.brightbox.com,Uid:4ac8a32e1e464ea551d77795cd50ba57,Namespace:kube-system,Attempt:0,}" Mar 12 04:08:32.946598 containerd[1621]: time="2026-03-12T04:08:32.946303722Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-faxgs.gb1.brightbox.com,Uid:65b4b0e09bf51f09a93f1f1600f7a38f,Namespace:kube-system,Attempt:0,}" Mar 12 04:08:33.062035 kubelet[2487]: E0312 04:08:33.061979 2487 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.26.218:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-faxgs.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.26.218:6443: connect: connection refused" interval="800ms" Mar 12 04:08:33.258590 kubelet[2487]: I0312 04:08:33.257918 2487 kubelet_node_status.go:75] "Attempting to register node" node="srv-faxgs.gb1.brightbox.com" Mar 12 04:08:33.258590 kubelet[2487]: E0312 04:08:33.258401 2487 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.244.26.218:6443/api/v1/nodes\": dial tcp 10.244.26.218:6443: connect: connection refused" 
node="srv-faxgs.gb1.brightbox.com" Mar 12 04:08:33.431840 kubelet[2487]: E0312 04:08:33.431477 2487 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.244.26.218:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.244.26.218:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 12 04:08:33.506136 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2248308871.mount: Deactivated successfully. Mar 12 04:08:33.514061 containerd[1621]: time="2026-03-12T04:08:33.512728614Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 12 04:08:33.515447 containerd[1621]: time="2026-03-12T04:08:33.515403986Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Mar 12 04:08:33.520364 containerd[1621]: time="2026-03-12T04:08:33.520314297Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 12 04:08:33.521863 containerd[1621]: time="2026-03-12T04:08:33.521820590Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 12 04:08:33.522120 containerd[1621]: time="2026-03-12T04:08:33.522087895Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 12 04:08:33.523274 containerd[1621]: time="2026-03-12T04:08:33.523238152Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} 
labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 12 04:08:33.526316 containerd[1621]: time="2026-03-12T04:08:33.526274842Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 12 04:08:33.528191 containerd[1621]: time="2026-03-12T04:08:33.528140027Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 12 04:08:33.533762 containerd[1621]: time="2026-03-12T04:08:33.533721144Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 587.339357ms" Mar 12 04:08:33.537629 containerd[1621]: time="2026-03-12T04:08:33.537593494Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 599.342466ms" Mar 12 04:08:33.538066 containerd[1621]: time="2026-03-12T04:08:33.537889130Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 612.672801ms" Mar 12 04:08:33.567599 kubelet[2487]: E0312 04:08:33.567255 2487 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get 
\"https://10.244.26.218:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-faxgs.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.244.26.218:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 12 04:08:33.738960 containerd[1621]: time="2026-03-12T04:08:33.738417313Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 12 04:08:33.738960 containerd[1621]: time="2026-03-12T04:08:33.738489138Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 12 04:08:33.738960 containerd[1621]: time="2026-03-12T04:08:33.738508585Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 04:08:33.738960 containerd[1621]: time="2026-03-12T04:08:33.738683054Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 04:08:33.738960 containerd[1621]: time="2026-03-12T04:08:33.738372594Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 12 04:08:33.738960 containerd[1621]: time="2026-03-12T04:08:33.738482356Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 12 04:08:33.738960 containerd[1621]: time="2026-03-12T04:08:33.738505768Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 04:08:33.738960 containerd[1621]: time="2026-03-12T04:08:33.738676681Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 04:08:33.743767 containerd[1621]: time="2026-03-12T04:08:33.743345495Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 12 04:08:33.743767 containerd[1621]: time="2026-03-12T04:08:33.743436649Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 12 04:08:33.743767 containerd[1621]: time="2026-03-12T04:08:33.743458331Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 04:08:33.743767 containerd[1621]: time="2026-03-12T04:08:33.743614254Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 04:08:33.801047 kubelet[2487]: E0312 04:08:33.800753 2487 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.244.26.218:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.244.26.218:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 12 04:08:33.816455 kubelet[2487]: E0312 04:08:33.815784 2487 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.244.26.218:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.244.26.218:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 12 04:08:33.863643 kubelet[2487]: E0312 04:08:33.863537 2487 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.26.218:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-faxgs.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.26.218:6443: 
connect: connection refused" interval="1.6s" Mar 12 04:08:33.901965 containerd[1621]: time="2026-03-12T04:08:33.899311565Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-faxgs.gb1.brightbox.com,Uid:65b4b0e09bf51f09a93f1f1600f7a38f,Namespace:kube-system,Attempt:0,} returns sandbox id \"c994acf5b6542b8faa727fb457f6e52383b0bf7d7e433bcf64196bd79a54c7de\"" Mar 12 04:08:33.917259 containerd[1621]: time="2026-03-12T04:08:33.917099538Z" level=info msg="CreateContainer within sandbox \"c994acf5b6542b8faa727fb457f6e52383b0bf7d7e433bcf64196bd79a54c7de\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 12 04:08:33.936854 containerd[1621]: time="2026-03-12T04:08:33.936805535Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-faxgs.gb1.brightbox.com,Uid:d22b9ea7670b4cf71413c025a0ad58c7,Namespace:kube-system,Attempt:0,} returns sandbox id \"5841f162134439fff7f34650e5219b4e572e575c6d1c4843fc4b46c88865aa1c\"" Mar 12 04:08:33.937618 containerd[1621]: time="2026-03-12T04:08:33.936966404Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-faxgs.gb1.brightbox.com,Uid:4ac8a32e1e464ea551d77795cd50ba57,Namespace:kube-system,Attempt:0,} returns sandbox id \"a7b8e3f5c5a9c9cf4862f8cf5fa379512a0ab9269369c900f77775e604744a9f\"" Mar 12 04:08:33.945357 containerd[1621]: time="2026-03-12T04:08:33.945291695Z" level=info msg="CreateContainer within sandbox \"a7b8e3f5c5a9c9cf4862f8cf5fa379512a0ab9269369c900f77775e604744a9f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 12 04:08:33.946873 containerd[1621]: time="2026-03-12T04:08:33.946335112Z" level=info msg="CreateContainer within sandbox \"c994acf5b6542b8faa727fb457f6e52383b0bf7d7e433bcf64196bd79a54c7de\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"78e204e25c93ccc84790bdc0b9c32e371494e503892963ff87beeaff1a76479f\"" Mar 12 04:08:33.947690 containerd[1621]: 
time="2026-03-12T04:08:33.947643771Z" level=info msg="CreateContainer within sandbox \"5841f162134439fff7f34650e5219b4e572e575c6d1c4843fc4b46c88865aa1c\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 12 04:08:33.948022 containerd[1621]: time="2026-03-12T04:08:33.947992269Z" level=info msg="StartContainer for \"78e204e25c93ccc84790bdc0b9c32e371494e503892963ff87beeaff1a76479f\"" Mar 12 04:08:33.971408 containerd[1621]: time="2026-03-12T04:08:33.971222967Z" level=info msg="CreateContainer within sandbox \"a7b8e3f5c5a9c9cf4862f8cf5fa379512a0ab9269369c900f77775e604744a9f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"431085ffb4875c6fa48b0c2285e24d4a613d9d2a00a9f7370ca32cac1f8e3144\"" Mar 12 04:08:33.973963 containerd[1621]: time="2026-03-12T04:08:33.973916225Z" level=info msg="StartContainer for \"431085ffb4875c6fa48b0c2285e24d4a613d9d2a00a9f7370ca32cac1f8e3144\"" Mar 12 04:08:33.983796 containerd[1621]: time="2026-03-12T04:08:33.983723633Z" level=info msg="CreateContainer within sandbox \"5841f162134439fff7f34650e5219b4e572e575c6d1c4843fc4b46c88865aa1c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"c4a86062e54cff319d1ea9389b38c18f5b4fabfee21dd5fa5825a060866d3745\"" Mar 12 04:08:33.984694 containerd[1621]: time="2026-03-12T04:08:33.984267088Z" level=info msg="StartContainer for \"c4a86062e54cff319d1ea9389b38c18f5b4fabfee21dd5fa5825a060866d3745\"" Mar 12 04:08:34.070062 kubelet[2487]: I0312 04:08:34.069944 2487 kubelet_node_status.go:75] "Attempting to register node" node="srv-faxgs.gb1.brightbox.com" Mar 12 04:08:34.072583 kubelet[2487]: E0312 04:08:34.071082 2487 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.244.26.218:6443/api/v1/nodes\": dial tcp 10.244.26.218:6443: connect: connection refused" node="srv-faxgs.gb1.brightbox.com" Mar 12 04:08:34.102924 containerd[1621]: time="2026-03-12T04:08:34.102837389Z" level=info 
msg="StartContainer for \"78e204e25c93ccc84790bdc0b9c32e371494e503892963ff87beeaff1a76479f\" returns successfully" Mar 12 04:08:34.155379 containerd[1621]: time="2026-03-12T04:08:34.153752507Z" level=info msg="StartContainer for \"431085ffb4875c6fa48b0c2285e24d4a613d9d2a00a9f7370ca32cac1f8e3144\" returns successfully" Mar 12 04:08:34.169850 containerd[1621]: time="2026-03-12T04:08:34.169646529Z" level=info msg="StartContainer for \"c4a86062e54cff319d1ea9389b38c18f5b4fabfee21dd5fa5825a060866d3745\" returns successfully" Mar 12 04:08:34.406615 kubelet[2487]: E0312 04:08:34.406441 2487 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.244.26.218:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.244.26.218:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 12 04:08:34.533405 kubelet[2487]: E0312 04:08:34.533364 2487 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-faxgs.gb1.brightbox.com\" not found" node="srv-faxgs.gb1.brightbox.com" Mar 12 04:08:34.538207 kubelet[2487]: E0312 04:08:34.538172 2487 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-faxgs.gb1.brightbox.com\" not found" node="srv-faxgs.gb1.brightbox.com" Mar 12 04:08:34.547447 kubelet[2487]: E0312 04:08:34.547395 2487 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-faxgs.gb1.brightbox.com\" not found" node="srv-faxgs.gb1.brightbox.com" Mar 12 04:08:35.552072 kubelet[2487]: E0312 04:08:35.552029 2487 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-faxgs.gb1.brightbox.com\" not found" node="srv-faxgs.gb1.brightbox.com" Mar 12 04:08:35.556010 kubelet[2487]: 
E0312 04:08:35.554449 2487 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-faxgs.gb1.brightbox.com\" not found" node="srv-faxgs.gb1.brightbox.com" Mar 12 04:08:35.556010 kubelet[2487]: E0312 04:08:35.555760 2487 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-faxgs.gb1.brightbox.com\" not found" node="srv-faxgs.gb1.brightbox.com" Mar 12 04:08:35.678970 kubelet[2487]: I0312 04:08:35.678928 2487 kubelet_node_status.go:75] "Attempting to register node" node="srv-faxgs.gb1.brightbox.com" Mar 12 04:08:36.558104 kubelet[2487]: E0312 04:08:36.558064 2487 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-faxgs.gb1.brightbox.com\" not found" node="srv-faxgs.gb1.brightbox.com" Mar 12 04:08:37.613931 kubelet[2487]: E0312 04:08:37.613875 2487 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"srv-faxgs.gb1.brightbox.com\" not found" node="srv-faxgs.gb1.brightbox.com" Mar 12 04:08:37.704077 kubelet[2487]: I0312 04:08:37.704016 2487 kubelet_node_status.go:78] "Successfully registered node" node="srv-faxgs.gb1.brightbox.com" Mar 12 04:08:37.704077 kubelet[2487]: E0312 04:08:37.704070 2487 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"srv-faxgs.gb1.brightbox.com\": node \"srv-faxgs.gb1.brightbox.com\" not found" Mar 12 04:08:37.760119 kubelet[2487]: I0312 04:08:37.760057 2487 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-faxgs.gb1.brightbox.com" Mar 12 04:08:37.771021 kubelet[2487]: E0312 04:08:37.770933 2487 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-faxgs.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-srv-faxgs.gb1.brightbox.com" Mar 
12 04:08:37.771021 kubelet[2487]: I0312 04:08:37.770987 2487 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-faxgs.gb1.brightbox.com" Mar 12 04:08:37.775688 kubelet[2487]: E0312 04:08:37.774510 2487 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-faxgs.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-srv-faxgs.gb1.brightbox.com" Mar 12 04:08:37.775688 kubelet[2487]: I0312 04:08:37.774580 2487 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-faxgs.gb1.brightbox.com" Mar 12 04:08:37.776663 kubelet[2487]: E0312 04:08:37.776627 2487 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-srv-faxgs.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-srv-faxgs.gb1.brightbox.com" Mar 12 04:08:38.427646 kubelet[2487]: I0312 04:08:38.427553 2487 apiserver.go:52] "Watching apiserver" Mar 12 04:08:38.458540 kubelet[2487]: I0312 04:08:38.458292 2487 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Mar 12 04:08:40.061421 systemd[1]: Reloading requested from client PID 2775 ('systemctl') (unit session-11.scope)... Mar 12 04:08:40.061451 systemd[1]: Reloading... Mar 12 04:08:40.186739 zram_generator::config[2820]: No configuration found. Mar 12 04:08:40.378811 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 12 04:08:40.502374 systemd[1]: Reloading finished in 440 ms. 
Mar 12 04:08:40.557407 kubelet[2487]: I0312 04:08:40.557358 2487 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 12 04:08:40.558072 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 12 04:08:40.573447 systemd[1]: kubelet.service: Deactivated successfully. Mar 12 04:08:40.574149 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 12 04:08:40.586153 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 12 04:08:40.793857 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 12 04:08:40.813473 (kubelet)[2888]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 12 04:08:40.967991 kubelet[2888]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 12 04:08:40.967991 kubelet[2888]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 12 04:08:40.967991 kubelet[2888]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 12 04:08:40.968797 kubelet[2888]: I0312 04:08:40.968703 2888 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 12 04:08:40.980220 kubelet[2888]: I0312 04:08:40.980185 2888 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Mar 12 04:08:40.982179 kubelet[2888]: I0312 04:08:40.980338 2888 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 12 04:08:40.982179 kubelet[2888]: I0312 04:08:40.980676 2888 server.go:956] "Client rotation is on, will bootstrap in background" Mar 12 04:08:40.982477 kubelet[2888]: I0312 04:08:40.982443 2888 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Mar 12 04:08:40.991634 kubelet[2888]: I0312 04:08:40.991604 2888 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 12 04:08:40.998156 kubelet[2888]: E0312 04:08:40.998105 2888 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 12 04:08:40.998156 kubelet[2888]: I0312 04:08:40.998152 2888 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Mar 12 04:08:41.003645 kubelet[2888]: I0312 04:08:41.003596 2888 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 12 04:08:41.005574 kubelet[2888]: I0312 04:08:41.004193 2888 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 12 04:08:41.005574 kubelet[2888]: I0312 04:08:41.004250 2888 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-faxgs.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Mar 12 04:08:41.005574 kubelet[2888]: I0312 04:08:41.004600 2888 topology_manager.go:138] "Creating topology manager with none policy" Mar 
12 04:08:41.005574 kubelet[2888]: I0312 04:08:41.004618 2888 container_manager_linux.go:303] "Creating device plugin manager" Mar 12 04:08:41.005574 kubelet[2888]: I0312 04:08:41.004685 2888 state_mem.go:36] "Initialized new in-memory state store" Mar 12 04:08:41.005884 kubelet[2888]: I0312 04:08:41.004951 2888 kubelet.go:480] "Attempting to sync node with API server" Mar 12 04:08:41.005884 kubelet[2888]: I0312 04:08:41.004971 2888 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 12 04:08:41.005884 kubelet[2888]: I0312 04:08:41.005004 2888 kubelet.go:386] "Adding apiserver pod source" Mar 12 04:08:41.005884 kubelet[2888]: I0312 04:08:41.005031 2888 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 12 04:08:41.013579 kubelet[2888]: I0312 04:08:41.012287 2888 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 12 04:08:41.013579 kubelet[2888]: I0312 04:08:41.013222 2888 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 12 04:08:41.019159 kubelet[2888]: I0312 04:08:41.019131 2888 watchdog_linux.go:99] "Systemd watchdog is not enabled" Mar 12 04:08:41.019259 kubelet[2888]: I0312 04:08:41.019217 2888 server.go:1289] "Started kubelet" Mar 12 04:08:41.024819 kubelet[2888]: I0312 04:08:41.023749 2888 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 12 04:08:41.040550 kubelet[2888]: I0312 04:08:41.039969 2888 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 12 04:08:41.049906 kubelet[2888]: I0312 04:08:41.049783 2888 server.go:317] "Adding debug handlers to kubelet server" Mar 12 04:08:41.069840 kubelet[2888]: I0312 04:08:41.050347 2888 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 12 04:08:41.072711 kubelet[2888]: I0312 04:08:41.057697 2888 
dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 12 04:08:41.080986 kubelet[2888]: I0312 04:08:41.066688 2888 volume_manager.go:297] "Starting Kubelet Volume Manager" Mar 12 04:08:41.082876 kubelet[2888]: I0312 04:08:41.066716 2888 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Mar 12 04:08:41.083806 kubelet[2888]: I0312 04:08:41.083785 2888 reconciler.go:26] "Reconciler: start to sync state" Mar 12 04:08:41.086460 kubelet[2888]: I0312 04:08:41.086386 2888 factory.go:223] Registration of the systemd container factory successfully Mar 12 04:08:41.090337 kubelet[2888]: I0312 04:08:41.090122 2888 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 12 04:08:41.092110 kubelet[2888]: I0312 04:08:41.092077 2888 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 12 04:08:41.093317 kubelet[2888]: E0312 04:08:41.092337 2888 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 12 04:08:41.099779 kubelet[2888]: I0312 04:08:41.099748 2888 factory.go:223] Registration of the containerd container factory successfully Mar 12 04:08:41.107712 sudo[2907]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Mar 12 04:08:41.108337 sudo[2907]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Mar 12 04:08:41.193229 kubelet[2888]: I0312 04:08:41.193154 2888 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Mar 12 04:08:41.205530 kubelet[2888]: I0312 04:08:41.205378 2888 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Mar 12 04:08:41.205530 kubelet[2888]: I0312 04:08:41.205423 2888 status_manager.go:230] "Starting to sync pod status with apiserver" Mar 12 04:08:41.205530 kubelet[2888]: I0312 04:08:41.205470 2888 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 12 04:08:41.205530 kubelet[2888]: I0312 04:08:41.205517 2888 kubelet.go:2436] "Starting kubelet main sync loop" Mar 12 04:08:41.207022 kubelet[2888]: E0312 04:08:41.206391 2888 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 12 04:08:41.308088 kubelet[2888]: E0312 04:08:41.306688 2888 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 12 04:08:41.344729 kubelet[2888]: I0312 04:08:41.344697 2888 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 12 04:08:41.345053 kubelet[2888]: I0312 04:08:41.345026 2888 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 12 04:08:41.345314 kubelet[2888]: I0312 04:08:41.345286 2888 state_mem.go:36] "Initialized new in-memory state store" Mar 12 04:08:41.345664 kubelet[2888]: I0312 04:08:41.345640 2888 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 12 04:08:41.345818 kubelet[2888]: I0312 04:08:41.345780 2888 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 12 04:08:41.348037 kubelet[2888]: I0312 04:08:41.346342 2888 policy_none.go:49] "None policy: Start" Mar 12 04:08:41.348037 kubelet[2888]: I0312 04:08:41.346378 2888 memory_manager.go:186] "Starting memorymanager" policy="None" Mar 12 04:08:41.348037 kubelet[2888]: I0312 04:08:41.346404 2888 state_mem.go:35] "Initializing new in-memory state store" Mar 12 04:08:41.348037 kubelet[2888]: I0312 04:08:41.346601 2888 state_mem.go:75] "Updated machine memory state" Mar 12 04:08:41.352738 kubelet[2888]: E0312 04:08:41.352708 
2888 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 12 04:08:41.353997 kubelet[2888]: I0312 04:08:41.353975 2888 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 12 04:08:41.354661 kubelet[2888]: I0312 04:08:41.354595 2888 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 12 04:08:41.355316 kubelet[2888]: I0312 04:08:41.355295 2888 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 12 04:08:41.359360 kubelet[2888]: E0312 04:08:41.359331 2888 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 12 04:08:41.476358 kubelet[2888]: I0312 04:08:41.476319 2888 kubelet_node_status.go:75] "Attempting to register node" node="srv-faxgs.gb1.brightbox.com" Mar 12 04:08:41.492790 kubelet[2888]: I0312 04:08:41.492137 2888 kubelet_node_status.go:124] "Node was previously registered" node="srv-faxgs.gb1.brightbox.com" Mar 12 04:08:41.492790 kubelet[2888]: I0312 04:08:41.492401 2888 kubelet_node_status.go:78] "Successfully registered node" node="srv-faxgs.gb1.brightbox.com" Mar 12 04:08:41.509402 kubelet[2888]: I0312 04:08:41.509351 2888 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-faxgs.gb1.brightbox.com" Mar 12 04:08:41.516702 kubelet[2888]: I0312 04:08:41.515762 2888 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-faxgs.gb1.brightbox.com" Mar 12 04:08:41.520019 kubelet[2888]: I0312 04:08:41.517783 2888 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-faxgs.gb1.brightbox.com" Mar 12 04:08:41.533676 kubelet[2888]: I0312 04:08:41.533368 2888 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS 
label is recommended: [must not contain dots]" Mar 12 04:08:41.534100 kubelet[2888]: I0312 04:08:41.533976 2888 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 12 04:08:41.535057 kubelet[2888]: I0312 04:08:41.535027 2888 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 12 04:08:41.591674 kubelet[2888]: I0312 04:08:41.589316 2888 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4ac8a32e1e464ea551d77795cd50ba57-usr-share-ca-certificates\") pod \"kube-apiserver-srv-faxgs.gb1.brightbox.com\" (UID: \"4ac8a32e1e464ea551d77795cd50ba57\") " pod="kube-system/kube-apiserver-srv-faxgs.gb1.brightbox.com" Mar 12 04:08:41.591674 kubelet[2888]: I0312 04:08:41.589486 2888 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/65b4b0e09bf51f09a93f1f1600f7a38f-ca-certs\") pod \"kube-controller-manager-srv-faxgs.gb1.brightbox.com\" (UID: \"65b4b0e09bf51f09a93f1f1600f7a38f\") " pod="kube-system/kube-controller-manager-srv-faxgs.gb1.brightbox.com" Mar 12 04:08:41.591674 kubelet[2888]: I0312 04:08:41.589524 2888 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/65b4b0e09bf51f09a93f1f1600f7a38f-k8s-certs\") pod \"kube-controller-manager-srv-faxgs.gb1.brightbox.com\" (UID: \"65b4b0e09bf51f09a93f1f1600f7a38f\") " pod="kube-system/kube-controller-manager-srv-faxgs.gb1.brightbox.com" Mar 12 04:08:41.591674 kubelet[2888]: I0312 04:08:41.589570 2888 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d22b9ea7670b4cf71413c025a0ad58c7-kubeconfig\") pod \"kube-scheduler-srv-faxgs.gb1.brightbox.com\" (UID: \"d22b9ea7670b4cf71413c025a0ad58c7\") " pod="kube-system/kube-scheduler-srv-faxgs.gb1.brightbox.com" Mar 12 04:08:41.591674 kubelet[2888]: I0312 04:08:41.589605 2888 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4ac8a32e1e464ea551d77795cd50ba57-ca-certs\") pod \"kube-apiserver-srv-faxgs.gb1.brightbox.com\" (UID: \"4ac8a32e1e464ea551d77795cd50ba57\") " pod="kube-system/kube-apiserver-srv-faxgs.gb1.brightbox.com" Mar 12 04:08:41.592018 kubelet[2888]: I0312 04:08:41.589634 2888 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4ac8a32e1e464ea551d77795cd50ba57-k8s-certs\") pod \"kube-apiserver-srv-faxgs.gb1.brightbox.com\" (UID: \"4ac8a32e1e464ea551d77795cd50ba57\") " pod="kube-system/kube-apiserver-srv-faxgs.gb1.brightbox.com" Mar 12 04:08:41.592018 kubelet[2888]: I0312 04:08:41.589662 2888 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/65b4b0e09bf51f09a93f1f1600f7a38f-flexvolume-dir\") pod \"kube-controller-manager-srv-faxgs.gb1.brightbox.com\" (UID: \"65b4b0e09bf51f09a93f1f1600f7a38f\") " pod="kube-system/kube-controller-manager-srv-faxgs.gb1.brightbox.com" Mar 12 04:08:41.592018 kubelet[2888]: I0312 04:08:41.589697 2888 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/65b4b0e09bf51f09a93f1f1600f7a38f-kubeconfig\") pod \"kube-controller-manager-srv-faxgs.gb1.brightbox.com\" (UID: \"65b4b0e09bf51f09a93f1f1600f7a38f\") " pod="kube-system/kube-controller-manager-srv-faxgs.gb1.brightbox.com" Mar 12 04:08:41.592018 
kubelet[2888]: I0312 04:08:41.589729 2888 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/65b4b0e09bf51f09a93f1f1600f7a38f-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-faxgs.gb1.brightbox.com\" (UID: \"65b4b0e09bf51f09a93f1f1600f7a38f\") " pod="kube-system/kube-controller-manager-srv-faxgs.gb1.brightbox.com" Mar 12 04:08:41.985930 sudo[2907]: pam_unix(sudo:session): session closed for user root Mar 12 04:08:42.009616 kubelet[2888]: I0312 04:08:42.009136 2888 apiserver.go:52] "Watching apiserver" Mar 12 04:08:42.083909 kubelet[2888]: I0312 04:08:42.083825 2888 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Mar 12 04:08:42.312542 kubelet[2888]: I0312 04:08:42.312105 2888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-srv-faxgs.gb1.brightbox.com" podStartSLOduration=1.3120696139999999 podStartE2EDuration="1.312069614s" podCreationTimestamp="2026-03-12 04:08:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 04:08:42.302547363 +0000 UTC m=+1.418790673" watchObservedRunningTime="2026-03-12 04:08:42.312069614 +0000 UTC m=+1.428312932" Mar 12 04:08:42.314736 kubelet[2888]: I0312 04:08:42.313643 2888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-srv-faxgs.gb1.brightbox.com" podStartSLOduration=1.313634568 podStartE2EDuration="1.313634568s" podCreationTimestamp="2026-03-12 04:08:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 04:08:42.312578944 +0000 UTC m=+1.428822236" watchObservedRunningTime="2026-03-12 04:08:42.313634568 +0000 UTC m=+1.429877911" Mar 12 04:08:42.324603 
kubelet[2888]: I0312 04:08:42.323999 2888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-srv-faxgs.gb1.brightbox.com" podStartSLOduration=1.323983497 podStartE2EDuration="1.323983497s" podCreationTimestamp="2026-03-12 04:08:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 04:08:42.322706733 +0000 UTC m=+1.438950052" watchObservedRunningTime="2026-03-12 04:08:42.323983497 +0000 UTC m=+1.440226800" Mar 12 04:08:44.189719 sudo[1915]: pam_unix(sudo:session): session closed for user root Mar 12 04:08:44.281717 sshd[1911]: pam_unix(sshd:session): session closed for user core Mar 12 04:08:44.288460 systemd[1]: sshd@8-10.244.26.218:22-20.161.92.111:56344.service: Deactivated successfully. Mar 12 04:08:44.292181 systemd-logind[1599]: Session 11 logged out. Waiting for processes to exit. Mar 12 04:08:44.293343 systemd[1]: session-11.scope: Deactivated successfully. Mar 12 04:08:44.296796 systemd-logind[1599]: Removed session 11. Mar 12 04:08:45.843522 kubelet[2888]: I0312 04:08:45.843210 2888 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 12 04:08:45.844936 kubelet[2888]: I0312 04:08:45.844402 2888 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 12 04:08:45.844996 containerd[1621]: time="2026-03-12T04:08:45.843879937Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Mar 12 04:08:46.625770 kubelet[2888]: I0312 04:08:46.625469 2888 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/707e1c09-aad2-486b-b633-b701820513f1-lib-modules\") pod \"kube-proxy-znbmq\" (UID: \"707e1c09-aad2-486b-b633-b701820513f1\") " pod="kube-system/kube-proxy-znbmq" Mar 12 04:08:46.625770 kubelet[2888]: I0312 04:08:46.625534 2888 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d22f7\" (UniqueName: \"kubernetes.io/projected/707e1c09-aad2-486b-b633-b701820513f1-kube-api-access-d22f7\") pod \"kube-proxy-znbmq\" (UID: \"707e1c09-aad2-486b-b633-b701820513f1\") " pod="kube-system/kube-proxy-znbmq" Mar 12 04:08:46.625770 kubelet[2888]: I0312 04:08:46.625580 2888 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2091e2aa-66d7-4a1d-806f-d6cc78c18cc4-hostproc\") pod \"cilium-p27cv\" (UID: \"2091e2aa-66d7-4a1d-806f-d6cc78c18cc4\") " pod="kube-system/cilium-p27cv" Mar 12 04:08:46.625770 kubelet[2888]: I0312 04:08:46.625610 2888 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2091e2aa-66d7-4a1d-806f-d6cc78c18cc4-cni-path\") pod \"cilium-p27cv\" (UID: \"2091e2aa-66d7-4a1d-806f-d6cc78c18cc4\") " pod="kube-system/cilium-p27cv" Mar 12 04:08:46.625770 kubelet[2888]: I0312 04:08:46.625641 2888 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2091e2aa-66d7-4a1d-806f-d6cc78c18cc4-etc-cni-netd\") pod \"cilium-p27cv\" (UID: \"2091e2aa-66d7-4a1d-806f-d6cc78c18cc4\") " pod="kube-system/cilium-p27cv" Mar 12 04:08:46.625770 kubelet[2888]: I0312 04:08:46.625667 2888 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2091e2aa-66d7-4a1d-806f-d6cc78c18cc4-cilium-config-path\") pod \"cilium-p27cv\" (UID: \"2091e2aa-66d7-4a1d-806f-d6cc78c18cc4\") " pod="kube-system/cilium-p27cv" Mar 12 04:08:46.627829 kubelet[2888]: I0312 04:08:46.625714 2888 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2091e2aa-66d7-4a1d-806f-d6cc78c18cc4-host-proc-sys-net\") pod \"cilium-p27cv\" (UID: \"2091e2aa-66d7-4a1d-806f-d6cc78c18cc4\") " pod="kube-system/cilium-p27cv" Mar 12 04:08:46.627829 kubelet[2888]: I0312 04:08:46.625768 2888 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/707e1c09-aad2-486b-b633-b701820513f1-xtables-lock\") pod \"kube-proxy-znbmq\" (UID: \"707e1c09-aad2-486b-b633-b701820513f1\") " pod="kube-system/kube-proxy-znbmq" Mar 12 04:08:46.627829 kubelet[2888]: I0312 04:08:46.625816 2888 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2091e2aa-66d7-4a1d-806f-d6cc78c18cc4-cilium-run\") pod \"cilium-p27cv\" (UID: \"2091e2aa-66d7-4a1d-806f-d6cc78c18cc4\") " pod="kube-system/cilium-p27cv" Mar 12 04:08:46.627829 kubelet[2888]: I0312 04:08:46.625847 2888 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2091e2aa-66d7-4a1d-806f-d6cc78c18cc4-bpf-maps\") pod \"cilium-p27cv\" (UID: \"2091e2aa-66d7-4a1d-806f-d6cc78c18cc4\") " pod="kube-system/cilium-p27cv" Mar 12 04:08:46.627829 kubelet[2888]: I0312 04:08:46.625884 2888 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/2091e2aa-66d7-4a1d-806f-d6cc78c18cc4-cilium-cgroup\") pod \"cilium-p27cv\" (UID: \"2091e2aa-66d7-4a1d-806f-d6cc78c18cc4\") " pod="kube-system/cilium-p27cv" Mar 12 04:08:46.627829 kubelet[2888]: I0312 04:08:46.625917 2888 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2091e2aa-66d7-4a1d-806f-d6cc78c18cc4-xtables-lock\") pod \"cilium-p27cv\" (UID: \"2091e2aa-66d7-4a1d-806f-d6cc78c18cc4\") " pod="kube-system/cilium-p27cv" Mar 12 04:08:46.628097 kubelet[2888]: I0312 04:08:46.625983 2888 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/707e1c09-aad2-486b-b633-b701820513f1-kube-proxy\") pod \"kube-proxy-znbmq\" (UID: \"707e1c09-aad2-486b-b633-b701820513f1\") " pod="kube-system/kube-proxy-znbmq" Mar 12 04:08:46.628097 kubelet[2888]: I0312 04:08:46.626019 2888 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2091e2aa-66d7-4a1d-806f-d6cc78c18cc4-lib-modules\") pod \"cilium-p27cv\" (UID: \"2091e2aa-66d7-4a1d-806f-d6cc78c18cc4\") " pod="kube-system/cilium-p27cv" Mar 12 04:08:46.628097 kubelet[2888]: I0312 04:08:46.626081 2888 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2091e2aa-66d7-4a1d-806f-d6cc78c18cc4-host-proc-sys-kernel\") pod \"cilium-p27cv\" (UID: \"2091e2aa-66d7-4a1d-806f-d6cc78c18cc4\") " pod="kube-system/cilium-p27cv" Mar 12 04:08:46.628097 kubelet[2888]: I0312 04:08:46.626120 2888 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2091e2aa-66d7-4a1d-806f-d6cc78c18cc4-clustermesh-secrets\") pod \"cilium-p27cv\" (UID: 
\"2091e2aa-66d7-4a1d-806f-d6cc78c18cc4\") " pod="kube-system/cilium-p27cv" Mar 12 04:08:46.628097 kubelet[2888]: I0312 04:08:46.626158 2888 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2091e2aa-66d7-4a1d-806f-d6cc78c18cc4-hubble-tls\") pod \"cilium-p27cv\" (UID: \"2091e2aa-66d7-4a1d-806f-d6cc78c18cc4\") " pod="kube-system/cilium-p27cv" Mar 12 04:08:46.628319 kubelet[2888]: I0312 04:08:46.626189 2888 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9llj\" (UniqueName: \"kubernetes.io/projected/2091e2aa-66d7-4a1d-806f-d6cc78c18cc4-kube-api-access-d9llj\") pod \"cilium-p27cv\" (UID: \"2091e2aa-66d7-4a1d-806f-d6cc78c18cc4\") " pod="kube-system/cilium-p27cv" Mar 12 04:08:46.837531 containerd[1621]: time="2026-03-12T04:08:46.837457078Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-p27cv,Uid:2091e2aa-66d7-4a1d-806f-d6cc78c18cc4,Namespace:kube-system,Attempt:0,}" Mar 12 04:08:46.841706 containerd[1621]: time="2026-03-12T04:08:46.841618691Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-znbmq,Uid:707e1c09-aad2-486b-b633-b701820513f1,Namespace:kube-system,Attempt:0,}" Mar 12 04:08:46.907844 containerd[1621]: time="2026-03-12T04:08:46.906611742Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 12 04:08:46.912213 containerd[1621]: time="2026-03-12T04:08:46.912033094Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 12 04:08:46.912213 containerd[1621]: time="2026-03-12T04:08:46.912167172Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 12 04:08:46.912911 containerd[1621]: time="2026-03-12T04:08:46.912779587Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 12 04:08:46.956584 containerd[1621]: time="2026-03-12T04:08:46.955222749Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 12 04:08:46.956584 containerd[1621]: time="2026-03-12T04:08:46.955356577Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 12 04:08:46.956584 containerd[1621]: time="2026-03-12T04:08:46.955399151Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 12 04:08:46.956584 containerd[1621]: time="2026-03-12T04:08:46.955697207Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 12 04:08:46.994617 containerd[1621]: time="2026-03-12T04:08:46.994486055Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-p27cv,Uid:2091e2aa-66d7-4a1d-806f-d6cc78c18cc4,Namespace:kube-system,Attempt:0,} returns sandbox id \"304b179e1afdadacc42fef490fbc35d224fd7b3f9e9dfd4e6367df67bca3688f\""
Mar 12 04:08:46.999391 containerd[1621]: time="2026-03-12T04:08:46.999234529Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Mar 12 04:08:47.074408 containerd[1621]: time="2026-03-12T04:08:47.074148200Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-znbmq,Uid:707e1c09-aad2-486b-b633-b701820513f1,Namespace:kube-system,Attempt:0,} returns sandbox id \"d56967e519bbfd1efb676ea7cc60ef55983f89218c44853e51be3dbba8998bd4\""
Mar 12 04:08:47.090615 containerd[1621]: time="2026-03-12T04:08:47.089858034Z" level=info msg="CreateContainer within sandbox \"d56967e519bbfd1efb676ea7cc60ef55983f89218c44853e51be3dbba8998bd4\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Mar 12 04:08:47.119940 containerd[1621]: time="2026-03-12T04:08:47.119874692Z" level=info msg="CreateContainer within sandbox \"d56967e519bbfd1efb676ea7cc60ef55983f89218c44853e51be3dbba8998bd4\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"0e3ff1c00b5f5c2d5b26e75295f8852ff9487d98c01ba6595299b16bb84b13bc\""
Mar 12 04:08:47.121104 containerd[1621]: time="2026-03-12T04:08:47.121060957Z" level=info msg="StartContainer for \"0e3ff1c00b5f5c2d5b26e75295f8852ff9487d98c01ba6595299b16bb84b13bc\""
Mar 12 04:08:47.132259 kubelet[2888]: I0312 04:08:47.132099 2888 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e3f0be40-3113-494c-a695-6ee69107bb34-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-n79rg\" (UID: \"e3f0be40-3113-494c-a695-6ee69107bb34\") " pod="kube-system/cilium-operator-6c4d7847fc-n79rg"
Mar 12 04:08:47.132259 kubelet[2888]: I0312 04:08:47.132177 2888 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9tfw\" (UniqueName: \"kubernetes.io/projected/e3f0be40-3113-494c-a695-6ee69107bb34-kube-api-access-j9tfw\") pod \"cilium-operator-6c4d7847fc-n79rg\" (UID: \"e3f0be40-3113-494c-a695-6ee69107bb34\") " pod="kube-system/cilium-operator-6c4d7847fc-n79rg"
Mar 12 04:08:47.207928 containerd[1621]: time="2026-03-12T04:08:47.207482527Z" level=info msg="StartContainer for \"0e3ff1c00b5f5c2d5b26e75295f8852ff9487d98c01ba6595299b16bb84b13bc\" returns successfully"
Mar 12 04:08:47.337023 kubelet[2888]: I0312 04:08:47.336935 2888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-znbmq" podStartSLOduration=1.336915429 podStartE2EDuration="1.336915429s" podCreationTimestamp="2026-03-12 04:08:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 04:08:47.322495346 +0000 UTC m=+6.438738656" watchObservedRunningTime="2026-03-12 04:08:47.336915429 +0000 UTC m=+6.453158734"
Mar 12 04:08:47.414932 containerd[1621]: time="2026-03-12T04:08:47.414853749Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-n79rg,Uid:e3f0be40-3113-494c-a695-6ee69107bb34,Namespace:kube-system,Attempt:0,}"
Mar 12 04:08:47.455947 containerd[1621]: time="2026-03-12T04:08:47.455058454Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 12 04:08:47.455947 containerd[1621]: time="2026-03-12T04:08:47.455281341Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 12 04:08:47.455947 containerd[1621]: time="2026-03-12T04:08:47.455313225Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 12 04:08:47.456751 containerd[1621]: time="2026-03-12T04:08:47.456221090Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 12 04:08:47.574781 containerd[1621]: time="2026-03-12T04:08:47.574703874Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-n79rg,Uid:e3f0be40-3113-494c-a695-6ee69107bb34,Namespace:kube-system,Attempt:0,} returns sandbox id \"b4442caf03f5634a5f20ecaa6b4c61e7572261d3e6ffde324a9a0912314a5d2e\""
Mar 12 04:08:53.450246 systemd-journald[1178]: Under memory pressure, flushing caches.
Mar 12 04:08:53.442707 systemd-resolved[1518]: Under memory pressure, flushing caches.
Mar 12 04:08:53.442775 systemd-resolved[1518]: Flushed all caches.
Mar 12 04:08:54.046316 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount394916906.mount: Deactivated successfully.
Mar 12 04:08:57.334401 containerd[1621]: time="2026-03-12T04:08:57.334187115Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 04:08:57.336589 containerd[1621]: time="2026-03-12T04:08:57.336268399Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503"
Mar 12 04:08:57.336685 containerd[1621]: time="2026-03-12T04:08:57.336630687Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 04:08:57.340378 containerd[1621]: time="2026-03-12T04:08:57.340243746Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 10.340921831s"
Mar 12 04:08:57.340378 containerd[1621]: time="2026-03-12T04:08:57.340294282Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Mar 12 04:08:57.342083 containerd[1621]: time="2026-03-12T04:08:57.342017492Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Mar 12 04:08:57.358403 containerd[1621]: time="2026-03-12T04:08:57.358355444Z" level=info msg="CreateContainer within sandbox \"304b179e1afdadacc42fef490fbc35d224fd7b3f9e9dfd4e6367df67bca3688f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Mar 12 04:08:57.458438 containerd[1621]: time="2026-03-12T04:08:57.458265927Z" level=info msg="CreateContainer within sandbox \"304b179e1afdadacc42fef490fbc35d224fd7b3f9e9dfd4e6367df67bca3688f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"6774863910a881baee677143bca98697f46e05d5ebcb08e9eaa3f519c23b6945\""
Mar 12 04:08:57.460501 containerd[1621]: time="2026-03-12T04:08:57.459325786Z" level=info msg="StartContainer for \"6774863910a881baee677143bca98697f46e05d5ebcb08e9eaa3f519c23b6945\""
Mar 12 04:08:57.643958 containerd[1621]: time="2026-03-12T04:08:57.642876178Z" level=info msg="StartContainer for \"6774863910a881baee677143bca98697f46e05d5ebcb08e9eaa3f519c23b6945\" returns successfully"
Mar 12 04:08:57.909529 containerd[1621]: time="2026-03-12T04:08:57.882811330Z" level=info msg="shim disconnected" id=6774863910a881baee677143bca98697f46e05d5ebcb08e9eaa3f519c23b6945 namespace=k8s.io
Mar 12 04:08:57.909529 containerd[1621]: time="2026-03-12T04:08:57.909138077Z" level=warning msg="cleaning up after shim disconnected" id=6774863910a881baee677143bca98697f46e05d5ebcb08e9eaa3f519c23b6945 namespace=k8s.io
Mar 12 04:08:57.909529 containerd[1621]: time="2026-03-12T04:08:57.909182879Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 12 04:08:58.359271 containerd[1621]: time="2026-03-12T04:08:58.358478078Z" level=info msg="CreateContainer within sandbox \"304b179e1afdadacc42fef490fbc35d224fd7b3f9e9dfd4e6367df67bca3688f\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Mar 12 04:08:58.380554 containerd[1621]: time="2026-03-12T04:08:58.380358093Z" level=info msg="CreateContainer within sandbox \"304b179e1afdadacc42fef490fbc35d224fd7b3f9e9dfd4e6367df67bca3688f\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"302e5b6e4e7249aa3dfd5279927fe0b9aed27edeb7fc072526334d07465e4336\""
Mar 12 04:08:58.382885 containerd[1621]: time="2026-03-12T04:08:58.382836711Z" level=info msg="StartContainer for \"302e5b6e4e7249aa3dfd5279927fe0b9aed27edeb7fc072526334d07465e4336\""
Mar 12 04:08:58.454925 containerd[1621]: time="2026-03-12T04:08:58.454877724Z" level=info msg="StartContainer for \"302e5b6e4e7249aa3dfd5279927fe0b9aed27edeb7fc072526334d07465e4336\" returns successfully"
Mar 12 04:08:58.458295 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6774863910a881baee677143bca98697f46e05d5ebcb08e9eaa3f519c23b6945-rootfs.mount: Deactivated successfully.
Mar 12 04:08:58.479633 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 12 04:08:58.480902 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 12 04:08:58.481142 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Mar 12 04:08:58.489995 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 12 04:08:58.516587 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-302e5b6e4e7249aa3dfd5279927fe0b9aed27edeb7fc072526334d07465e4336-rootfs.mount: Deactivated successfully.
Mar 12 04:08:58.518340 containerd[1621]: time="2026-03-12T04:08:58.518269301Z" level=info msg="shim disconnected" id=302e5b6e4e7249aa3dfd5279927fe0b9aed27edeb7fc072526334d07465e4336 namespace=k8s.io
Mar 12 04:08:58.518511 containerd[1621]: time="2026-03-12T04:08:58.518473294Z" level=warning msg="cleaning up after shim disconnected" id=302e5b6e4e7249aa3dfd5279927fe0b9aed27edeb7fc072526334d07465e4336 namespace=k8s.io
Mar 12 04:08:58.518891 containerd[1621]: time="2026-03-12T04:08:58.518861476Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 12 04:08:58.533082 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 12 04:08:59.184250 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2424650590.mount: Deactivated successfully.
Mar 12 04:08:59.382220 containerd[1621]: time="2026-03-12T04:08:59.381441864Z" level=info msg="CreateContainer within sandbox \"304b179e1afdadacc42fef490fbc35d224fd7b3f9e9dfd4e6367df67bca3688f\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Mar 12 04:08:59.431284 containerd[1621]: time="2026-03-12T04:08:59.431217994Z" level=info msg="CreateContainer within sandbox \"304b179e1afdadacc42fef490fbc35d224fd7b3f9e9dfd4e6367df67bca3688f\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"aab4c0e1ca4509cf182bec9afa8c1d0770c2ff44341bf8371cd11e0022c4a13c\""
Mar 12 04:08:59.433768 containerd[1621]: time="2026-03-12T04:08:59.433651259Z" level=info msg="StartContainer for \"aab4c0e1ca4509cf182bec9afa8c1d0770c2ff44341bf8371cd11e0022c4a13c\""
Mar 12 04:08:59.622712 containerd[1621]: time="2026-03-12T04:08:59.622665358Z" level=info msg="StartContainer for \"aab4c0e1ca4509cf182bec9afa8c1d0770c2ff44341bf8371cd11e0022c4a13c\" returns successfully"
Mar 12 04:08:59.670965 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-aab4c0e1ca4509cf182bec9afa8c1d0770c2ff44341bf8371cd11e0022c4a13c-rootfs.mount: Deactivated successfully.
Mar 12 04:08:59.706899 containerd[1621]: time="2026-03-12T04:08:59.706825929Z" level=info msg="shim disconnected" id=aab4c0e1ca4509cf182bec9afa8c1d0770c2ff44341bf8371cd11e0022c4a13c namespace=k8s.io
Mar 12 04:08:59.707440 containerd[1621]: time="2026-03-12T04:08:59.707182510Z" level=warning msg="cleaning up after shim disconnected" id=aab4c0e1ca4509cf182bec9afa8c1d0770c2ff44341bf8371cd11e0022c4a13c namespace=k8s.io
Mar 12 04:08:59.707440 containerd[1621]: time="2026-03-12T04:08:59.707210068Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 12 04:09:00.173657 containerd[1621]: time="2026-03-12T04:09:00.173597442Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 04:09:00.175806 containerd[1621]: time="2026-03-12T04:09:00.175755802Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
Mar 12 04:09:00.179202 containerd[1621]: time="2026-03-12T04:09:00.178743132Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 04:09:00.182088 containerd[1621]: time="2026-03-12T04:09:00.182051685Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.839987592s"
Mar 12 04:09:00.182207 containerd[1621]: time="2026-03-12T04:09:00.182099262Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Mar 12 04:09:00.186775 containerd[1621]: time="2026-03-12T04:09:00.186734927Z" level=info msg="CreateContainer within sandbox \"b4442caf03f5634a5f20ecaa6b4c61e7572261d3e6ffde324a9a0912314a5d2e\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Mar 12 04:09:00.202358 containerd[1621]: time="2026-03-12T04:09:00.202077809Z" level=info msg="CreateContainer within sandbox \"b4442caf03f5634a5f20ecaa6b4c61e7572261d3e6ffde324a9a0912314a5d2e\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"1c243fed214260e1730542f68b61666a825921c924e68b8e4aa9decfa5498c76\""
Mar 12 04:09:00.204267 containerd[1621]: time="2026-03-12T04:09:00.203493255Z" level=info msg="StartContainer for \"1c243fed214260e1730542f68b61666a825921c924e68b8e4aa9decfa5498c76\""
Mar 12 04:09:00.378020 containerd[1621]: time="2026-03-12T04:09:00.377954586Z" level=info msg="StartContainer for \"1c243fed214260e1730542f68b61666a825921c924e68b8e4aa9decfa5498c76\" returns successfully"
Mar 12 04:09:00.401529 containerd[1621]: time="2026-03-12T04:09:00.400851969Z" level=info msg="CreateContainer within sandbox \"304b179e1afdadacc42fef490fbc35d224fd7b3f9e9dfd4e6367df67bca3688f\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Mar 12 04:09:00.422597 containerd[1621]: time="2026-03-12T04:09:00.421621956Z" level=info msg="CreateContainer within sandbox \"304b179e1afdadacc42fef490fbc35d224fd7b3f9e9dfd4e6367df67bca3688f\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"c6ffae743a3a99223b57d115821300040b4effcbe93aec4f1877fd7d7cae17d5\""
Mar 12 04:09:00.422597 containerd[1621]: time="2026-03-12T04:09:00.422379327Z" level=info msg="StartContainer for \"c6ffae743a3a99223b57d115821300040b4effcbe93aec4f1877fd7d7cae17d5\""
Mar 12 04:09:00.454682 kubelet[2888]: I0312 04:09:00.446686 2888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-n79rg" podStartSLOduration=0.841061701 podStartE2EDuration="13.446662498s" podCreationTimestamp="2026-03-12 04:08:47 +0000 UTC" firstStartedPulling="2026-03-12 04:08:47.577331375 +0000 UTC m=+6.693574666" lastFinishedPulling="2026-03-12 04:09:00.18293216 +0000 UTC m=+19.299175463" observedRunningTime="2026-03-12 04:09:00.444753346 +0000 UTC m=+19.560996679" watchObservedRunningTime="2026-03-12 04:09:00.446662498 +0000 UTC m=+19.562905802"
Mar 12 04:09:00.578616 containerd[1621]: time="2026-03-12T04:09:00.575237344Z" level=info msg="StartContainer for \"c6ffae743a3a99223b57d115821300040b4effcbe93aec4f1877fd7d7cae17d5\" returns successfully"
Mar 12 04:09:00.661810 containerd[1621]: time="2026-03-12T04:09:00.661688233Z" level=info msg="shim disconnected" id=c6ffae743a3a99223b57d115821300040b4effcbe93aec4f1877fd7d7cae17d5 namespace=k8s.io
Mar 12 04:09:00.662324 containerd[1621]: time="2026-03-12T04:09:00.662089328Z" level=warning msg="cleaning up after shim disconnected" id=c6ffae743a3a99223b57d115821300040b4effcbe93aec4f1877fd7d7cae17d5 namespace=k8s.io
Mar 12 04:09:00.662324 containerd[1621]: time="2026-03-12T04:09:00.662116066Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 12 04:09:00.663494 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c6ffae743a3a99223b57d115821300040b4effcbe93aec4f1877fd7d7cae17d5-rootfs.mount: Deactivated successfully.
Mar 12 04:09:01.434139 containerd[1621]: time="2026-03-12T04:09:01.433941827Z" level=info msg="CreateContainer within sandbox \"304b179e1afdadacc42fef490fbc35d224fd7b3f9e9dfd4e6367df67bca3688f\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 12 04:09:01.475074 containerd[1621]: time="2026-03-12T04:09:01.474800687Z" level=info msg="CreateContainer within sandbox \"304b179e1afdadacc42fef490fbc35d224fd7b3f9e9dfd4e6367df67bca3688f\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"caedbc29638c1d0d58a6287b8afd6ff6803f4c2323db9bf6becba6a19407b118\""
Mar 12 04:09:01.477488 containerd[1621]: time="2026-03-12T04:09:01.477275048Z" level=info msg="StartContainer for \"caedbc29638c1d0d58a6287b8afd6ff6803f4c2323db9bf6becba6a19407b118\""
Mar 12 04:09:01.632600 containerd[1621]: time="2026-03-12T04:09:01.632380048Z" level=info msg="StartContainer for \"caedbc29638c1d0d58a6287b8afd6ff6803f4c2323db9bf6becba6a19407b118\" returns successfully"
Mar 12 04:09:01.862414 kubelet[2888]: I0312 04:09:01.862367 2888 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Mar 12 04:09:01.995689 kubelet[2888]: I0312 04:09:01.995408 2888 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bk8f4\" (UniqueName: \"kubernetes.io/projected/2c280d7f-b575-4f0d-a02d-c9776d9b274a-kube-api-access-bk8f4\") pod \"coredns-674b8bbfcf-gnqm2\" (UID: \"2c280d7f-b575-4f0d-a02d-c9776d9b274a\") " pod="kube-system/coredns-674b8bbfcf-gnqm2"
Mar 12 04:09:01.995689 kubelet[2888]: I0312 04:09:01.995500 2888 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2c280d7f-b575-4f0d-a02d-c9776d9b274a-config-volume\") pod \"coredns-674b8bbfcf-gnqm2\" (UID: \"2c280d7f-b575-4f0d-a02d-c9776d9b274a\") " pod="kube-system/coredns-674b8bbfcf-gnqm2"
Mar 12 04:09:02.096071 kubelet[2888]: I0312 04:09:02.096018 2888 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vn2mj\" (UniqueName: \"kubernetes.io/projected/9d7abd16-3cdd-4deb-b9d3-763143d211a7-kube-api-access-vn2mj\") pod \"coredns-674b8bbfcf-vtgfx\" (UID: \"9d7abd16-3cdd-4deb-b9d3-763143d211a7\") " pod="kube-system/coredns-674b8bbfcf-vtgfx"
Mar 12 04:09:02.098150 kubelet[2888]: I0312 04:09:02.096105 2888 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9d7abd16-3cdd-4deb-b9d3-763143d211a7-config-volume\") pod \"coredns-674b8bbfcf-vtgfx\" (UID: \"9d7abd16-3cdd-4deb-b9d3-763143d211a7\") " pod="kube-system/coredns-674b8bbfcf-vtgfx"
Mar 12 04:09:02.239475 containerd[1621]: time="2026-03-12T04:09:02.239158144Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-gnqm2,Uid:2c280d7f-b575-4f0d-a02d-c9776d9b274a,Namespace:kube-system,Attempt:0,}"
Mar 12 04:09:02.250418 containerd[1621]: time="2026-03-12T04:09:02.250371052Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-vtgfx,Uid:9d7abd16-3cdd-4deb-b9d3-763143d211a7,Namespace:kube-system,Attempt:0,}"
Mar 12 04:09:02.485486 systemd[1]: run-containerd-runc-k8s.io-caedbc29638c1d0d58a6287b8afd6ff6803f4c2323db9bf6becba6a19407b118-runc.PyCcQC.mount: Deactivated successfully.
Mar 12 04:09:04.431460 systemd-networkd[1260]: cilium_host: Link UP
Mar 12 04:09:04.433961 systemd-networkd[1260]: cilium_net: Link UP
Mar 12 04:09:04.433973 systemd-networkd[1260]: cilium_net: Gained carrier
Mar 12 04:09:04.434382 systemd-networkd[1260]: cilium_host: Gained carrier
Mar 12 04:09:04.446055 systemd-networkd[1260]: cilium_host: Gained IPv6LL
Mar 12 04:09:04.602691 systemd-networkd[1260]: cilium_vxlan: Link UP
Mar 12 04:09:04.602703 systemd-networkd[1260]: cilium_vxlan: Gained carrier
Mar 12 04:09:04.619030 systemd-networkd[1260]: cilium_net: Gained IPv6LL
Mar 12 04:09:05.173803 kernel: NET: Registered PF_ALG protocol family
Mar 12 04:09:06.308847 systemd-networkd[1260]: lxc_health: Link UP
Mar 12 04:09:06.327797 systemd-networkd[1260]: lxc_health: Gained carrier
Mar 12 04:09:06.498930 systemd-networkd[1260]: cilium_vxlan: Gained IPv6LL
Mar 12 04:09:06.873592 kubelet[2888]: I0312 04:09:06.871010 2888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-p27cv" podStartSLOduration=10.528038968 podStartE2EDuration="20.870976922s" podCreationTimestamp="2026-03-12 04:08:46 +0000 UTC" firstStartedPulling="2026-03-12 04:08:46.998792637 +0000 UTC m=+6.115035935" lastFinishedPulling="2026-03-12 04:08:57.341730596 +0000 UTC m=+16.457973889" observedRunningTime="2026-03-12 04:09:02.477162349 +0000 UTC m=+21.593405682" watchObservedRunningTime="2026-03-12 04:09:06.870976922 +0000 UTC m=+25.987220227"
Mar 12 04:09:06.948592 kernel: eth0: renamed from tmpa0165
Mar 12 04:09:06.953252 systemd-networkd[1260]: lxc8293d7c0bc6e: Link UP
Mar 12 04:09:06.966781 systemd-networkd[1260]: lxc8293d7c0bc6e: Gained carrier
Mar 12 04:09:07.087166 systemd-networkd[1260]: lxcfceeaa64a94a: Link UP
Mar 12 04:09:07.109732 kernel: eth0: renamed from tmp2681a
Mar 12 04:09:07.121123 systemd-networkd[1260]: lxcfceeaa64a94a: Gained carrier
Mar 12 04:09:08.098887 systemd-networkd[1260]: lxc_health: Gained IPv6LL
Mar 12 04:09:08.547761 systemd-networkd[1260]: lxc8293d7c0bc6e: Gained IPv6LL
Mar 12 04:09:09.058749 systemd-networkd[1260]: lxcfceeaa64a94a: Gained IPv6LL
Mar 12 04:09:12.797608 containerd[1621]: time="2026-03-12T04:09:12.788229503Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 12 04:09:12.797608 containerd[1621]: time="2026-03-12T04:09:12.788377703Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 12 04:09:12.797608 containerd[1621]: time="2026-03-12T04:09:12.788402248Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 12 04:09:12.797608 containerd[1621]: time="2026-03-12T04:09:12.788693206Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 12 04:09:12.835632 containerd[1621]: time="2026-03-12T04:09:12.826538141Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 12 04:09:12.835632 containerd[1621]: time="2026-03-12T04:09:12.826716273Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 12 04:09:12.835632 containerd[1621]: time="2026-03-12T04:09:12.826748345Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 12 04:09:12.835632 containerd[1621]: time="2026-03-12T04:09:12.826938748Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 12 04:09:12.976960 containerd[1621]: time="2026-03-12T04:09:12.976847759Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-gnqm2,Uid:2c280d7f-b575-4f0d-a02d-c9776d9b274a,Namespace:kube-system,Attempt:0,} returns sandbox id \"a01654a68b57b537deddcbf70de7a45b60808543684b8a7f69242be99b89dc01\""
Mar 12 04:09:12.994303 containerd[1621]: time="2026-03-12T04:09:12.994009520Z" level=info msg="CreateContainer within sandbox \"a01654a68b57b537deddcbf70de7a45b60808543684b8a7f69242be99b89dc01\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 12 04:09:13.004737 containerd[1621]: time="2026-03-12T04:09:13.004573671Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-vtgfx,Uid:9d7abd16-3cdd-4deb-b9d3-763143d211a7,Namespace:kube-system,Attempt:0,} returns sandbox id \"2681a88d42f6fab3fcedf1651c3f4156107b45855d740b238e3c55076627b7a9\""
Mar 12 04:09:13.018947 containerd[1621]: time="2026-03-12T04:09:13.018863810Z" level=info msg="CreateContainer within sandbox \"2681a88d42f6fab3fcedf1651c3f4156107b45855d740b238e3c55076627b7a9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 12 04:09:13.045891 containerd[1621]: time="2026-03-12T04:09:13.045832560Z" level=info msg="CreateContainer within sandbox \"a01654a68b57b537deddcbf70de7a45b60808543684b8a7f69242be99b89dc01\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f31945fcb07f55f56b2315ad31fdf0858d3b184ff464bf1a6a744c51a800126c\""
Mar 12 04:09:13.048719 containerd[1621]: time="2026-03-12T04:09:13.048297850Z" level=info msg="StartContainer for \"f31945fcb07f55f56b2315ad31fdf0858d3b184ff464bf1a6a744c51a800126c\""
Mar 12 04:09:13.049624 containerd[1621]: time="2026-03-12T04:09:13.049398218Z" level=info msg="CreateContainer within sandbox \"2681a88d42f6fab3fcedf1651c3f4156107b45855d740b238e3c55076627b7a9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"fcddfe43bcad0e2d1f76d281c3db1c5ca670acdf2ed9b946d2018fc017b42d4d\""
Mar 12 04:09:13.050833 containerd[1621]: time="2026-03-12T04:09:13.050782384Z" level=info msg="StartContainer for \"fcddfe43bcad0e2d1f76d281c3db1c5ca670acdf2ed9b946d2018fc017b42d4d\""
Mar 12 04:09:13.172630 containerd[1621]: time="2026-03-12T04:09:13.172491341Z" level=info msg="StartContainer for \"fcddfe43bcad0e2d1f76d281c3db1c5ca670acdf2ed9b946d2018fc017b42d4d\" returns successfully"
Mar 12 04:09:13.199334 containerd[1621]: time="2026-03-12T04:09:13.199154357Z" level=info msg="StartContainer for \"f31945fcb07f55f56b2315ad31fdf0858d3b184ff464bf1a6a744c51a800126c\" returns successfully"
Mar 12 04:09:13.498457 kubelet[2888]: I0312 04:09:13.498187 2888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-vtgfx" podStartSLOduration=26.498087335 podStartE2EDuration="26.498087335s" podCreationTimestamp="2026-03-12 04:08:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 04:09:13.491646818 +0000 UTC m=+32.607890128" watchObservedRunningTime="2026-03-12 04:09:13.498087335 +0000 UTC m=+32.614330645"
Mar 12 04:09:13.797829 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2640447293.mount: Deactivated successfully.
Mar 12 04:09:46.272800 systemd[1]: Started sshd@9-10.244.26.218:22-20.161.92.111:44498.service - OpenSSH per-connection server daemon (20.161.92.111:44498).
Mar 12 04:09:46.877512 sshd[4265]: Accepted publickey for core from 20.161.92.111 port 44498 ssh2: RSA SHA256:Og1sBJQhpCaSrAUaqgWUKLRz71/5xOULak8g1URRdac
Mar 12 04:09:46.878994 sshd[4265]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 04:09:46.896812 systemd-logind[1599]: New session 12 of user core.
Mar 12 04:09:46.901089 systemd[1]: Started session-12.scope - Session 12 of User core.
Mar 12 04:09:47.863165 sshd[4265]: pam_unix(sshd:session): session closed for user core
Mar 12 04:09:47.869293 systemd[1]: sshd@9-10.244.26.218:22-20.161.92.111:44498.service: Deactivated successfully.
Mar 12 04:09:47.873032 systemd[1]: session-12.scope: Deactivated successfully.
Mar 12 04:09:47.873225 systemd-logind[1599]: Session 12 logged out. Waiting for processes to exit.
Mar 12 04:09:47.875777 systemd-logind[1599]: Removed session 12.
Mar 12 04:09:52.964606 systemd[1]: Started sshd@10-10.244.26.218:22-20.161.92.111:60166.service - OpenSSH per-connection server daemon (20.161.92.111:60166).
Mar 12 04:09:53.532741 sshd[4282]: Accepted publickey for core from 20.161.92.111 port 60166 ssh2: RSA SHA256:Og1sBJQhpCaSrAUaqgWUKLRz71/5xOULak8g1URRdac
Mar 12 04:09:53.536464 sshd[4282]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 04:09:53.545077 systemd-logind[1599]: New session 13 of user core.
Mar 12 04:09:53.553596 systemd[1]: Started session-13.scope - Session 13 of User core.
Mar 12 04:09:54.050033 sshd[4282]: pam_unix(sshd:session): session closed for user core
Mar 12 04:09:54.055911 systemd[1]: sshd@10-10.244.26.218:22-20.161.92.111:60166.service: Deactivated successfully.
Mar 12 04:09:54.061777 systemd-logind[1599]: Session 13 logged out. Waiting for processes to exit.
Mar 12 04:09:54.062851 systemd[1]: session-13.scope: Deactivated successfully.
Mar 12 04:09:54.065367 systemd-logind[1599]: Removed session 13.
Mar 12 04:09:57.712696 update_engine[1609]: I20260312 04:09:57.712116 1609 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Mar 12 04:09:57.712696 update_engine[1609]: I20260312 04:09:57.712517 1609 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Mar 12 04:09:57.718138 update_engine[1609]: I20260312 04:09:57.716302 1609 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Mar 12 04:09:57.718138 update_engine[1609]: I20260312 04:09:57.717121 1609 omaha_request_params.cc:62] Current group set to lts
Mar 12 04:09:57.718138 update_engine[1609]: I20260312 04:09:57.717365 1609 update_attempter.cc:499] Already updated boot flags. Skipping.
Mar 12 04:09:57.718138 update_engine[1609]: I20260312 04:09:57.717386 1609 update_attempter.cc:643] Scheduling an action processor start.
Mar 12 04:09:57.718138 update_engine[1609]: I20260312 04:09:57.717424 1609 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Mar 12 04:09:57.718138 update_engine[1609]: I20260312 04:09:57.717506 1609 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Mar 12 04:09:57.718138 update_engine[1609]: I20260312 04:09:57.717630 1609 omaha_request_action.cc:271] Posting an Omaha request to disabled
Mar 12 04:09:57.718138 update_engine[1609]: I20260312 04:09:57.717653 1609 omaha_request_action.cc:272] Request:
Mar 12 04:09:57.718138 update_engine[1609]:
Mar 12 04:09:57.718138 update_engine[1609]:
Mar 12 04:09:57.718138 update_engine[1609]:
Mar 12 04:09:57.718138 update_engine[1609]:
Mar 12 04:09:57.718138 update_engine[1609]:
Mar 12 04:09:57.718138 update_engine[1609]:
Mar 12 04:09:57.718138 update_engine[1609]:
Mar 12 04:09:57.718138 update_engine[1609]:
Mar 12 04:09:57.718138 update_engine[1609]: I20260312 04:09:57.717664 1609 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Mar 12 04:09:57.727646 update_engine[1609]: I20260312 04:09:57.727468 1609 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Mar 12 04:09:57.731287 update_engine[1609]: I20260312 04:09:57.729997 1609 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Mar 12 04:09:57.738596 update_engine[1609]: E20260312 04:09:57.737697 1609 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Mar 12 04:09:57.738596 update_engine[1609]: I20260312 04:09:57.737814 1609 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Mar 12 04:09:57.741289 locksmithd[1635]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Mar 12 04:09:59.147948 systemd[1]: Started sshd@11-10.244.26.218:22-20.161.92.111:60176.service - OpenSSH per-connection server daemon (20.161.92.111:60176).
Mar 12 04:09:59.771012 sshd[4297]: Accepted publickey for core from 20.161.92.111 port 60176 ssh2: RSA SHA256:Og1sBJQhpCaSrAUaqgWUKLRz71/5xOULak8g1URRdac
Mar 12 04:09:59.772605 sshd[4297]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 04:09:59.779726 systemd-logind[1599]: New session 14 of user core.
Mar 12 04:09:59.782977 systemd[1]: Started session-14.scope - Session 14 of User core.
Mar 12 04:10:00.273947 sshd[4297]: pam_unix(sshd:session): session closed for user core
Mar 12 04:10:00.280024 systemd[1]: sshd@11-10.244.26.218:22-20.161.92.111:60176.service: Deactivated successfully.
Mar 12 04:10:00.283340 systemd-logind[1599]: Session 14 logged out. Waiting for processes to exit.
Mar 12 04:10:00.283981 systemd[1]: session-14.scope: Deactivated successfully.
Mar 12 04:10:00.286806 systemd-logind[1599]: Removed session 14.
Mar 12 04:10:05.371005 systemd[1]: Started sshd@12-10.244.26.218:22-20.161.92.111:53720.service - OpenSSH per-connection server daemon (20.161.92.111:53720).
Mar 12 04:10:05.974368 sshd[4311]: Accepted publickey for core from 20.161.92.111 port 53720 ssh2: RSA SHA256:Og1sBJQhpCaSrAUaqgWUKLRz71/5xOULak8g1URRdac
Mar 12 04:10:05.978142 sshd[4311]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 04:10:05.985701 systemd-logind[1599]: New session 15 of user core.
Mar 12 04:10:05.996983 systemd[1]: Started session-15.scope - Session 15 of User core.
Mar 12 04:10:06.524349 sshd[4311]: pam_unix(sshd:session): session closed for user core
Mar 12 04:10:06.533819 systemd[1]: sshd@12-10.244.26.218:22-20.161.92.111:53720.service: Deactivated successfully.
Mar 12 04:10:06.540431 systemd[1]: session-15.scope: Deactivated successfully.
Mar 12 04:10:06.543442 systemd-logind[1599]: Session 15 logged out. Waiting for processes to exit.
Mar 12 04:10:06.545692 systemd-logind[1599]: Removed session 15.
Mar 12 04:10:06.618050 systemd[1]: Started sshd@13-10.244.26.218:22-20.161.92.111:53732.service - OpenSSH per-connection server daemon (20.161.92.111:53732).
Mar 12 04:10:07.191837 sshd[4325]: Accepted publickey for core from 20.161.92.111 port 53732 ssh2: RSA SHA256:Og1sBJQhpCaSrAUaqgWUKLRz71/5xOULak8g1URRdac
Mar 12 04:10:07.195001 sshd[4325]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 04:10:07.202280 systemd-logind[1599]: New session 16 of user core.
Mar 12 04:10:07.208027 systemd[1]: Started session-16.scope - Session 16 of User core.
Mar 12 04:10:07.667852 update_engine[1609]: I20260312 04:10:07.667279 1609 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Mar 12 04:10:07.667852 update_engine[1609]: I20260312 04:10:07.667838 1609 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Mar 12 04:10:07.669113 update_engine[1609]: I20260312 04:10:07.668219 1609 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Mar 12 04:10:07.669216 update_engine[1609]: E20260312 04:10:07.669178 1609 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Mar 12 04:10:07.669316 update_engine[1609]: I20260312 04:10:07.669284 1609 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Mar 12 04:10:07.785813 sshd[4325]: pam_unix(sshd:session): session closed for user core
Mar 12 04:10:07.793158 systemd[1]: sshd@13-10.244.26.218:22-20.161.92.111:53732.service: Deactivated successfully.
Mar 12 04:10:07.798070 systemd[1]: session-16.scope: Deactivated successfully.
Mar 12 04:10:07.798596 systemd-logind[1599]: Session 16 logged out. Waiting for processes to exit.
Mar 12 04:10:07.801234 systemd-logind[1599]: Removed session 16.
Mar 12 04:10:07.881994 systemd[1]: Started sshd@14-10.244.26.218:22-20.161.92.111:53738.service - OpenSSH per-connection server daemon (20.161.92.111:53738).
Mar 12 04:10:08.452233 sshd[4337]: Accepted publickey for core from 20.161.92.111 port 53738 ssh2: RSA SHA256:Og1sBJQhpCaSrAUaqgWUKLRz71/5xOULak8g1URRdac
Mar 12 04:10:08.454862 sshd[4337]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 04:10:08.462986 systemd-logind[1599]: New session 17 of user core.
Mar 12 04:10:08.468981 systemd[1]: Started session-17.scope - Session 17 of User core.
Mar 12 04:10:08.952456 sshd[4337]: pam_unix(sshd:session): session closed for user core
Mar 12 04:10:08.960220 systemd[1]: sshd@14-10.244.26.218:22-20.161.92.111:53738.service: Deactivated successfully.
Mar 12 04:10:08.966215 systemd-logind[1599]: Session 17 logged out. Waiting for processes to exit.
Mar 12 04:10:08.967466 systemd[1]: session-17.scope: Deactivated successfully.
Mar 12 04:10:08.970209 systemd-logind[1599]: Removed session 17.
Mar 12 04:10:14.048942 systemd[1]: Started sshd@15-10.244.26.218:22-20.161.92.111:56480.service - OpenSSH per-connection server daemon (20.161.92.111:56480).
Mar 12 04:10:14.620065 sshd[4351]: Accepted publickey for core from 20.161.92.111 port 56480 ssh2: RSA SHA256:Og1sBJQhpCaSrAUaqgWUKLRz71/5xOULak8g1URRdac
Mar 12 04:10:14.623047 sshd[4351]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 04:10:14.631699 systemd-logind[1599]: New session 18 of user core.
Mar 12 04:10:14.645148 systemd[1]: Started session-18.scope - Session 18 of User core.
Mar 12 04:10:15.125430 sshd[4351]: pam_unix(sshd:session): session closed for user core
Mar 12 04:10:15.134252 systemd[1]: sshd@15-10.244.26.218:22-20.161.92.111:56480.service: Deactivated successfully.
Mar 12 04:10:15.139185 systemd[1]: session-18.scope: Deactivated successfully.
Mar 12 04:10:15.140111 systemd-logind[1599]: Session 18 logged out. Waiting for processes to exit.
Mar 12 04:10:15.142802 systemd-logind[1599]: Removed session 18.
Mar 12 04:10:17.670122 update_engine[1609]: I20260312 04:10:17.670007 1609 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Mar 12 04:10:17.670787 update_engine[1609]: I20260312 04:10:17.670431 1609 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Mar 12 04:10:17.670787 update_engine[1609]: I20260312 04:10:17.670758 1609 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Mar 12 04:10:17.671497 update_engine[1609]: E20260312 04:10:17.671452 1609 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Mar 12 04:10:17.671584 update_engine[1609]: I20260312 04:10:17.671528 1609 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Mar 12 04:10:20.221373 systemd[1]: Started sshd@16-10.244.26.218:22-20.161.92.111:53586.service - OpenSSH per-connection server daemon (20.161.92.111:53586).
Mar 12 04:10:20.789623 sshd[4366]: Accepted publickey for core from 20.161.92.111 port 53586 ssh2: RSA SHA256:Og1sBJQhpCaSrAUaqgWUKLRz71/5xOULak8g1URRdac
Mar 12 04:10:20.791311 sshd[4366]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 04:10:20.806386 systemd-logind[1599]: New session 19 of user core.
Mar 12 04:10:20.809174 systemd[1]: Started session-19.scope - Session 19 of User core.
Mar 12 04:10:21.290345 sshd[4366]: pam_unix(sshd:session): session closed for user core
Mar 12 04:10:21.299479 systemd[1]: sshd@16-10.244.26.218:22-20.161.92.111:53586.service: Deactivated successfully.
Mar 12 04:10:21.310532 systemd[1]: session-19.scope: Deactivated successfully.
Mar 12 04:10:21.312996 systemd-logind[1599]: Session 19 logged out. Waiting for processes to exit.
Mar 12 04:10:21.316213 systemd-logind[1599]: Removed session 19.
Mar 12 04:10:21.386284 systemd[1]: Started sshd@17-10.244.26.218:22-20.161.92.111:53588.service - OpenSSH per-connection server daemon (20.161.92.111:53588).
Mar 12 04:10:21.962116 sshd[4379]: Accepted publickey for core from 20.161.92.111 port 53588 ssh2: RSA SHA256:Og1sBJQhpCaSrAUaqgWUKLRz71/5xOULak8g1URRdac
Mar 12 04:10:21.965310 sshd[4379]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 04:10:21.973492 systemd-logind[1599]: New session 20 of user core.
Mar 12 04:10:21.979055 systemd[1]: Started session-20.scope - Session 20 of User core.
Mar 12 04:10:22.871466 sshd[4379]: pam_unix(sshd:session): session closed for user core
Mar 12 04:10:22.878105 systemd[1]: sshd@17-10.244.26.218:22-20.161.92.111:53588.service: Deactivated successfully.
Mar 12 04:10:22.883102 systemd-logind[1599]: Session 20 logged out. Waiting for processes to exit.
Mar 12 04:10:22.884078 systemd[1]: session-20.scope: Deactivated successfully.
Mar 12 04:10:22.886202 systemd-logind[1599]: Removed session 20.
Mar 12 04:10:22.967077 systemd[1]: Started sshd@18-10.244.26.218:22-20.161.92.111:53592.service - OpenSSH per-connection server daemon (20.161.92.111:53592).
Mar 12 04:10:23.544038 sshd[4391]: Accepted publickey for core from 20.161.92.111 port 53592 ssh2: RSA SHA256:Og1sBJQhpCaSrAUaqgWUKLRz71/5xOULak8g1URRdac
Mar 12 04:10:23.544921 sshd[4391]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 04:10:23.552495 systemd-logind[1599]: New session 21 of user core.
Mar 12 04:10:23.558976 systemd[1]: Started session-21.scope - Session 21 of User core.
Mar 12 04:10:24.751086 sshd[4391]: pam_unix(sshd:session): session closed for user core
Mar 12 04:10:24.755162 systemd[1]: sshd@18-10.244.26.218:22-20.161.92.111:53592.service: Deactivated successfully.
Mar 12 04:10:24.761069 systemd-logind[1599]: Session 21 logged out. Waiting for processes to exit.
Mar 12 04:10:24.761505 systemd[1]: session-21.scope: Deactivated successfully.
Mar 12 04:10:24.764127 systemd-logind[1599]: Removed session 21.
Mar 12 04:10:24.848283 systemd[1]: Started sshd@19-10.244.26.218:22-20.161.92.111:53594.service - OpenSSH per-connection server daemon (20.161.92.111:53594).
Mar 12 04:10:25.401610 sshd[4411]: Accepted publickey for core from 20.161.92.111 port 53594 ssh2: RSA SHA256:Og1sBJQhpCaSrAUaqgWUKLRz71/5xOULak8g1URRdac
Mar 12 04:10:25.403666 sshd[4411]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 04:10:25.410554 systemd-logind[1599]: New session 22 of user core.
Mar 12 04:10:25.418021 systemd[1]: Started session-22.scope - Session 22 of User core.
Mar 12 04:10:26.069758 sshd[4411]: pam_unix(sshd:session): session closed for user core
Mar 12 04:10:26.073361 systemd[1]: sshd@19-10.244.26.218:22-20.161.92.111:53594.service: Deactivated successfully.
Mar 12 04:10:26.079018 systemd-logind[1599]: Session 22 logged out. Waiting for processes to exit.
Mar 12 04:10:26.079930 systemd[1]: session-22.scope: Deactivated successfully.
Mar 12 04:10:26.082404 systemd-logind[1599]: Removed session 22.
Mar 12 04:10:26.165937 systemd[1]: Started sshd@20-10.244.26.218:22-20.161.92.111:53606.service - OpenSSH per-connection server daemon (20.161.92.111:53606).
Mar 12 04:10:26.718601 sshd[4423]: Accepted publickey for core from 20.161.92.111 port 53606 ssh2: RSA SHA256:Og1sBJQhpCaSrAUaqgWUKLRz71/5xOULak8g1URRdac
Mar 12 04:10:26.729081 sshd[4423]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 04:10:26.739172 systemd-logind[1599]: New session 23 of user core.
Mar 12 04:10:26.745106 systemd[1]: Started session-23.scope - Session 23 of User core.
Mar 12 04:10:27.198917 sshd[4423]: pam_unix(sshd:session): session closed for user core
Mar 12 04:10:27.204944 systemd[1]: sshd@20-10.244.26.218:22-20.161.92.111:53606.service: Deactivated successfully.
Mar 12 04:10:27.209469 systemd[1]: session-23.scope: Deactivated successfully.
Mar 12 04:10:27.212694 systemd-logind[1599]: Session 23 logged out. Waiting for processes to exit.
Mar 12 04:10:27.214396 systemd-logind[1599]: Removed session 23.
Mar 12 04:10:27.667624 update_engine[1609]: I20260312 04:10:27.667418 1609 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Mar 12 04:10:27.668265 update_engine[1609]: I20260312 04:10:27.668000 1609 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Mar 12 04:10:27.668435 update_engine[1609]: I20260312 04:10:27.668298 1609 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Mar 12 04:10:27.669081 update_engine[1609]: E20260312 04:10:27.668980 1609 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Mar 12 04:10:27.669081 update_engine[1609]: I20260312 04:10:27.669061 1609 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Mar 12 04:10:27.669298 update_engine[1609]: I20260312 04:10:27.669081 1609 omaha_request_action.cc:617] Omaha request response:
Mar 12 04:10:27.669298 update_engine[1609]: E20260312 04:10:27.669200 1609 omaha_request_action.cc:636] Omaha request network transfer failed.
Mar 12 04:10:27.676611 update_engine[1609]: I20260312 04:10:27.675683 1609 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Mar 12 04:10:27.676611 update_engine[1609]: I20260312 04:10:27.675732 1609 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Mar 12 04:10:27.676611 update_engine[1609]: I20260312 04:10:27.675747 1609 update_attempter.cc:306] Processing Done.
Mar 12 04:10:27.676611 update_engine[1609]: E20260312 04:10:27.675781 1609 update_attempter.cc:619] Update failed.
Mar 12 04:10:27.676611 update_engine[1609]: I20260312 04:10:27.675814 1609 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Mar 12 04:10:27.676611 update_engine[1609]: I20260312 04:10:27.675829 1609 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Mar 12 04:10:27.676611 update_engine[1609]: I20260312 04:10:27.675843 1609 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Mar 12 04:10:27.676611 update_engine[1609]: I20260312 04:10:27.675945 1609 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Mar 12 04:10:27.676611 update_engine[1609]: I20260312 04:10:27.675988 1609 omaha_request_action.cc:271] Posting an Omaha request to disabled
Mar 12 04:10:27.676611 update_engine[1609]: I20260312 04:10:27.676003 1609 omaha_request_action.cc:272] Request:
Mar 12 04:10:27.676611 update_engine[1609]:
Mar 12 04:10:27.676611 update_engine[1609]:
Mar 12 04:10:27.676611 update_engine[1609]:
Mar 12 04:10:27.676611 update_engine[1609]:
Mar 12 04:10:27.676611 update_engine[1609]:
Mar 12 04:10:27.676611 update_engine[1609]:
Mar 12 04:10:27.676611 update_engine[1609]: I20260312 04:10:27.676016 1609 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Mar 12 04:10:27.676611 update_engine[1609]: I20260312 04:10:27.676335 1609 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Mar 12 04:10:27.678214 update_engine[1609]: I20260312 04:10:27.677686 1609 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Mar 12 04:10:27.678585 update_engine[1609]: E20260312 04:10:27.678444 1609 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Mar 12 04:10:27.678585 update_engine[1609]: I20260312 04:10:27.678512 1609 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Mar 12 04:10:27.678585 update_engine[1609]: I20260312 04:10:27.678531 1609 omaha_request_action.cc:617] Omaha request response:
Mar 12 04:10:27.678585 update_engine[1609]: I20260312 04:10:27.678545 1609 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Mar 12 04:10:27.679241 update_engine[1609]: I20260312 04:10:27.678694 1609 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Mar 12 04:10:27.679241 update_engine[1609]: I20260312 04:10:27.678716 1609 update_attempter.cc:306] Processing Done.
Mar 12 04:10:27.679241 update_engine[1609]: I20260312 04:10:27.678730 1609 update_attempter.cc:310] Error event sent.
Mar 12 04:10:27.679241 update_engine[1609]: I20260312 04:10:27.678802 1609 update_check_scheduler.cc:74] Next update check in 41m28s
Mar 12 04:10:27.680135 locksmithd[1635]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Mar 12 04:10:27.681205 locksmithd[1635]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Mar 12 04:10:32.293908 systemd[1]: Started sshd@21-10.244.26.218:22-20.161.92.111:45612.service - OpenSSH per-connection server daemon (20.161.92.111:45612).
Mar 12 04:10:32.860494 sshd[4437]: Accepted publickey for core from 20.161.92.111 port 45612 ssh2: RSA SHA256:Og1sBJQhpCaSrAUaqgWUKLRz71/5xOULak8g1URRdac
Mar 12 04:10:32.862616 sshd[4437]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 04:10:32.869537 systemd-logind[1599]: New session 24 of user core.
Mar 12 04:10:32.876153 systemd[1]: Started session-24.scope - Session 24 of User core.
Mar 12 04:10:33.350553 sshd[4437]: pam_unix(sshd:session): session closed for user core
Mar 12 04:10:33.356004 systemd[1]: sshd@21-10.244.26.218:22-20.161.92.111:45612.service: Deactivated successfully.
Mar 12 04:10:33.360424 systemd-logind[1599]: Session 24 logged out. Waiting for processes to exit.
Mar 12 04:10:33.360856 systemd[1]: session-24.scope: Deactivated successfully.
Mar 12 04:10:33.363536 systemd-logind[1599]: Removed session 24.
Mar 12 04:10:38.448081 systemd[1]: Started sshd@22-10.244.26.218:22-20.161.92.111:45616.service - OpenSSH per-connection server daemon (20.161.92.111:45616).
Mar 12 04:10:39.036305 sshd[4452]: Accepted publickey for core from 20.161.92.111 port 45616 ssh2: RSA SHA256:Og1sBJQhpCaSrAUaqgWUKLRz71/5xOULak8g1URRdac
Mar 12 04:10:39.039185 sshd[4452]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 04:10:39.046655 systemd-logind[1599]: New session 25 of user core.
Mar 12 04:10:39.057815 systemd[1]: Started session-25.scope - Session 25 of User core.
Mar 12 04:10:39.520835 sshd[4452]: pam_unix(sshd:session): session closed for user core
Mar 12 04:10:39.525697 systemd[1]: sshd@22-10.244.26.218:22-20.161.92.111:45616.service: Deactivated successfully.
Mar 12 04:10:39.530951 systemd-logind[1599]: Session 25 logged out. Waiting for processes to exit.
Mar 12 04:10:39.532847 systemd[1]: session-25.scope: Deactivated successfully.
Mar 12 04:10:39.535871 systemd-logind[1599]: Removed session 25.
Mar 12 04:10:44.615866 systemd[1]: Started sshd@23-10.244.26.218:22-20.161.92.111:40124.service - OpenSSH per-connection server daemon (20.161.92.111:40124).
Mar 12 04:10:45.191866 sshd[4467]: Accepted publickey for core from 20.161.92.111 port 40124 ssh2: RSA SHA256:Og1sBJQhpCaSrAUaqgWUKLRz71/5xOULak8g1URRdac
Mar 12 04:10:45.193813 sshd[4467]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 04:10:45.201826 systemd-logind[1599]: New session 26 of user core.
Mar 12 04:10:45.206233 systemd[1]: Started session-26.scope - Session 26 of User core.
Mar 12 04:10:45.684804 sshd[4467]: pam_unix(sshd:session): session closed for user core
Mar 12 04:10:45.689129 systemd-logind[1599]: Session 26 logged out. Waiting for processes to exit.
Mar 12 04:10:45.689641 systemd[1]: sshd@23-10.244.26.218:22-20.161.92.111:40124.service: Deactivated successfully.
Mar 12 04:10:45.694480 systemd[1]: session-26.scope: Deactivated successfully.
Mar 12 04:10:45.696971 systemd-logind[1599]: Removed session 26.
Mar 12 04:10:45.780998 systemd[1]: Started sshd@24-10.244.26.218:22-20.161.92.111:40134.service - OpenSSH per-connection server daemon (20.161.92.111:40134).
Mar 12 04:10:46.336603 sshd[4481]: Accepted publickey for core from 20.161.92.111 port 40134 ssh2: RSA SHA256:Og1sBJQhpCaSrAUaqgWUKLRz71/5xOULak8g1URRdac
Mar 12 04:10:46.338381 sshd[4481]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 04:10:46.345631 systemd-logind[1599]: New session 27 of user core.
Mar 12 04:10:46.354168 systemd[1]: Started session-27.scope - Session 27 of User core.
Mar 12 04:10:48.874417 kubelet[2888]: I0312 04:10:48.874323 2888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-gnqm2" podStartSLOduration=121.874298284 podStartE2EDuration="2m1.874298284s" podCreationTimestamp="2026-03-12 04:08:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 04:09:13.543168587 +0000 UTC m=+32.659411904" watchObservedRunningTime="2026-03-12 04:10:48.874298284 +0000 UTC m=+127.990541603"
Mar 12 04:10:48.896618 containerd[1621]: time="2026-03-12T04:10:48.896285394Z" level=info msg="StopContainer for \"1c243fed214260e1730542f68b61666a825921c924e68b8e4aa9decfa5498c76\" with timeout 30 (s)"
Mar 12 04:10:48.904147 containerd[1621]: time="2026-03-12T04:10:48.902962261Z" level=info msg="Stop container \"1c243fed214260e1730542f68b61666a825921c924e68b8e4aa9decfa5498c76\" with signal terminated"
Mar 12 04:10:48.944755 systemd[1]: run-containerd-runc-k8s.io-caedbc29638c1d0d58a6287b8afd6ff6803f4c2323db9bf6becba6a19407b118-runc.EtanKH.mount: Deactivated successfully.
Mar 12 04:10:48.965488 containerd[1621]: time="2026-03-12T04:10:48.965407365Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 12 04:10:48.976822 containerd[1621]: time="2026-03-12T04:10:48.976652875Z" level=info msg="StopContainer for \"caedbc29638c1d0d58a6287b8afd6ff6803f4c2323db9bf6becba6a19407b118\" with timeout 2 (s)"
Mar 12 04:10:48.977043 containerd[1621]: time="2026-03-12T04:10:48.977021868Z" level=info msg="Stop container \"caedbc29638c1d0d58a6287b8afd6ff6803f4c2323db9bf6becba6a19407b118\" with signal terminated"
Mar 12 04:10:48.987438 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1c243fed214260e1730542f68b61666a825921c924e68b8e4aa9decfa5498c76-rootfs.mount: Deactivated successfully.
Mar 12 04:10:48.994167 systemd-networkd[1260]: lxc_health: Link DOWN
Mar 12 04:10:48.994183 systemd-networkd[1260]: lxc_health: Lost carrier
Mar 12 04:10:49.001262 containerd[1621]: time="2026-03-12T04:10:49.000834783Z" level=info msg="shim disconnected" id=1c243fed214260e1730542f68b61666a825921c924e68b8e4aa9decfa5498c76 namespace=k8s.io
Mar 12 04:10:49.001887 containerd[1621]: time="2026-03-12T04:10:49.001545898Z" level=warning msg="cleaning up after shim disconnected" id=1c243fed214260e1730542f68b61666a825921c924e68b8e4aa9decfa5498c76 namespace=k8s.io
Mar 12 04:10:49.001887 containerd[1621]: time="2026-03-12T04:10:49.001742057Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 12 04:10:49.052472 containerd[1621]: time="2026-03-12T04:10:49.052405202Z" level=info msg="StopContainer for \"1c243fed214260e1730542f68b61666a825921c924e68b8e4aa9decfa5498c76\" returns successfully"
Mar 12 04:10:49.056730 containerd[1621]: time="2026-03-12T04:10:49.056335302Z" level=info msg="StopPodSandbox for \"b4442caf03f5634a5f20ecaa6b4c61e7572261d3e6ffde324a9a0912314a5d2e\""
Mar 12 04:10:49.056730 containerd[1621]: time="2026-03-12T04:10:49.056404517Z" level=info msg="Container to stop \"1c243fed214260e1730542f68b61666a825921c924e68b8e4aa9decfa5498c76\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 12 04:10:49.061149 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b4442caf03f5634a5f20ecaa6b4c61e7572261d3e6ffde324a9a0912314a5d2e-shm.mount: Deactivated successfully.
Mar 12 04:10:49.069315 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-caedbc29638c1d0d58a6287b8afd6ff6803f4c2323db9bf6becba6a19407b118-rootfs.mount: Deactivated successfully.
Mar 12 04:10:49.081598 containerd[1621]: time="2026-03-12T04:10:49.080632920Z" level=info msg="shim disconnected" id=caedbc29638c1d0d58a6287b8afd6ff6803f4c2323db9bf6becba6a19407b118 namespace=k8s.io
Mar 12 04:10:49.081598 containerd[1621]: time="2026-03-12T04:10:49.080752664Z" level=warning msg="cleaning up after shim disconnected" id=caedbc29638c1d0d58a6287b8afd6ff6803f4c2323db9bf6becba6a19407b118 namespace=k8s.io
Mar 12 04:10:49.081598 containerd[1621]: time="2026-03-12T04:10:49.080768282Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 12 04:10:49.109399 containerd[1621]: time="2026-03-12T04:10:49.109317314Z" level=warning msg="cleanup warnings time=\"2026-03-12T04:10:49Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Mar 12 04:10:49.116119 containerd[1621]: time="2026-03-12T04:10:49.115961021Z" level=info msg="StopContainer for \"caedbc29638c1d0d58a6287b8afd6ff6803f4c2323db9bf6becba6a19407b118\" returns successfully"
Mar 12 04:10:49.117295 containerd[1621]: time="2026-03-12T04:10:49.116934366Z" level=info msg="StopPodSandbox for \"304b179e1afdadacc42fef490fbc35d224fd7b3f9e9dfd4e6367df67bca3688f\""
Mar 12 04:10:49.117295 containerd[1621]: time="2026-03-12T04:10:49.116999987Z" level=info msg="Container to stop \"6774863910a881baee677143bca98697f46e05d5ebcb08e9eaa3f519c23b6945\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 12 04:10:49.117295 containerd[1621]: time="2026-03-12T04:10:49.117021372Z" level=info msg="Container to stop \"302e5b6e4e7249aa3dfd5279927fe0b9aed27edeb7fc072526334d07465e4336\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 12 04:10:49.117295 containerd[1621]: time="2026-03-12T04:10:49.117037283Z" level=info msg="Container to stop \"aab4c0e1ca4509cf182bec9afa8c1d0770c2ff44341bf8371cd11e0022c4a13c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 12 04:10:49.117295 containerd[1621]: time="2026-03-12T04:10:49.117053871Z" level=info msg="Container to stop \"c6ffae743a3a99223b57d115821300040b4effcbe93aec4f1877fd7d7cae17d5\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 12 04:10:49.117295 containerd[1621]: time="2026-03-12T04:10:49.117068649Z" level=info msg="Container to stop \"caedbc29638c1d0d58a6287b8afd6ff6803f4c2323db9bf6becba6a19407b118\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 12 04:10:49.143006 containerd[1621]: time="2026-03-12T04:10:49.142849714Z" level=info msg="shim disconnected" id=b4442caf03f5634a5f20ecaa6b4c61e7572261d3e6ffde324a9a0912314a5d2e namespace=k8s.io
Mar 12 04:10:49.143468 containerd[1621]: time="2026-03-12T04:10:49.143132060Z" level=warning msg="cleaning up after shim disconnected" id=b4442caf03f5634a5f20ecaa6b4c61e7572261d3e6ffde324a9a0912314a5d2e namespace=k8s.io
Mar 12 04:10:49.143468 containerd[1621]: time="2026-03-12T04:10:49.143154383Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 12 04:10:49.169757 containerd[1621]: time="2026-03-12T04:10:49.165750987Z" level=warning msg="cleanup warnings time=\"2026-03-12T04:10:49Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Mar 12 04:10:49.169757 containerd[1621]: time="2026-03-12T04:10:49.167252449Z" level=info msg="TearDown network for sandbox \"b4442caf03f5634a5f20ecaa6b4c61e7572261d3e6ffde324a9a0912314a5d2e\" successfully"
Mar 12 04:10:49.169757 containerd[1621]: time="2026-03-12T04:10:49.167278550Z" level=info msg="StopPodSandbox for \"b4442caf03f5634a5f20ecaa6b4c61e7572261d3e6ffde324a9a0912314a5d2e\" returns successfully"
Mar 12 04:10:49.182795 containerd[1621]: time="2026-03-12T04:10:49.182608450Z" level=info msg="shim disconnected" id=304b179e1afdadacc42fef490fbc35d224fd7b3f9e9dfd4e6367df67bca3688f namespace=k8s.io
Mar 12 04:10:49.182795 containerd[1621]: time="2026-03-12T04:10:49.182791525Z" level=warning msg="cleaning up after shim disconnected" id=304b179e1afdadacc42fef490fbc35d224fd7b3f9e9dfd4e6367df67bca3688f namespace=k8s.io
Mar 12 04:10:49.183343 containerd[1621]: time="2026-03-12T04:10:49.182809597Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 12 04:10:49.212152 kubelet[2888]: I0312 04:10:49.211221 2888 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e3f0be40-3113-494c-a695-6ee69107bb34-cilium-config-path\") pod \"e3f0be40-3113-494c-a695-6ee69107bb34\" (UID: \"e3f0be40-3113-494c-a695-6ee69107bb34\") "
Mar 12 04:10:49.212152 kubelet[2888]: I0312 04:10:49.211312 2888 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j9tfw\" (UniqueName: \"kubernetes.io/projected/e3f0be40-3113-494c-a695-6ee69107bb34-kube-api-access-j9tfw\") pod \"e3f0be40-3113-494c-a695-6ee69107bb34\" (UID: \"e3f0be40-3113-494c-a695-6ee69107bb34\") "
Mar 12 04:10:49.218888 containerd[1621]: time="2026-03-12T04:10:49.218401343Z" level=info msg="TearDown network for sandbox \"304b179e1afdadacc42fef490fbc35d224fd7b3f9e9dfd4e6367df67bca3688f\" successfully"
Mar 12 04:10:49.218888 containerd[1621]: time="2026-03-12T04:10:49.218460063Z" level=info msg="StopPodSandbox for \"304b179e1afdadacc42fef490fbc35d224fd7b3f9e9dfd4e6367df67bca3688f\" returns successfully"
Mar 12 04:10:49.225030 kubelet[2888]: I0312 04:10:49.224176 2888 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e3f0be40-3113-494c-a695-6ee69107bb34-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e3f0be40-3113-494c-a695-6ee69107bb34" (UID: "e3f0be40-3113-494c-a695-6ee69107bb34"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Mar 12 04:10:49.226073 kubelet[2888]: I0312 04:10:49.223831 2888 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3f0be40-3113-494c-a695-6ee69107bb34-kube-api-access-j9tfw" (OuterVolumeSpecName: "kube-api-access-j9tfw") pod "e3f0be40-3113-494c-a695-6ee69107bb34" (UID: "e3f0be40-3113-494c-a695-6ee69107bb34"). InnerVolumeSpecName "kube-api-access-j9tfw". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 12 04:10:49.312365 kubelet[2888]: I0312 04:10:49.312102 2888 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2091e2aa-66d7-4a1d-806f-d6cc78c18cc4-cilium-run\") pod \"2091e2aa-66d7-4a1d-806f-d6cc78c18cc4\" (UID: \"2091e2aa-66d7-4a1d-806f-d6cc78c18cc4\") "
Mar 12 04:10:49.312365 kubelet[2888]: I0312 04:10:49.312228 2888 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2091e2aa-66d7-4a1d-806f-d6cc78c18cc4-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "2091e2aa-66d7-4a1d-806f-d6cc78c18cc4" (UID: "2091e2aa-66d7-4a1d-806f-d6cc78c18cc4"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 12 04:10:49.312668 kubelet[2888]: I0312 04:10:49.312420 2888 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2091e2aa-66d7-4a1d-806f-d6cc78c18cc4-host-proc-sys-kernel\") pod \"2091e2aa-66d7-4a1d-806f-d6cc78c18cc4\" (UID: \"2091e2aa-66d7-4a1d-806f-d6cc78c18cc4\") "
Mar 12 04:10:49.312668 kubelet[2888]: I0312 04:10:49.312487 2888 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2091e2aa-66d7-4a1d-806f-d6cc78c18cc4-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "2091e2aa-66d7-4a1d-806f-d6cc78c18cc4" (UID: "2091e2aa-66d7-4a1d-806f-d6cc78c18cc4"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 12 04:10:49.312668 kubelet[2888]: I0312 04:10:49.312540 2888 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2091e2aa-66d7-4a1d-806f-d6cc78c18cc4-etc-cni-netd\") pod \"2091e2aa-66d7-4a1d-806f-d6cc78c18cc4\" (UID: \"2091e2aa-66d7-4a1d-806f-d6cc78c18cc4\") "
Mar 12 04:10:49.312668 kubelet[2888]: I0312 04:10:49.312603 2888 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2091e2aa-66d7-4a1d-806f-d6cc78c18cc4-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "2091e2aa-66d7-4a1d-806f-d6cc78c18cc4" (UID: "2091e2aa-66d7-4a1d-806f-d6cc78c18cc4"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 12 04:10:49.312881 kubelet[2888]: I0312 04:10:49.312690 2888 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2091e2aa-66d7-4a1d-806f-d6cc78c18cc4-xtables-lock\") pod \"2091e2aa-66d7-4a1d-806f-d6cc78c18cc4\" (UID: \"2091e2aa-66d7-4a1d-806f-d6cc78c18cc4\") "
Mar 12 04:10:49.312881 kubelet[2888]: I0312 04:10:49.312757 2888 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2091e2aa-66d7-4a1d-806f-d6cc78c18cc4-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "2091e2aa-66d7-4a1d-806f-d6cc78c18cc4" (UID: "2091e2aa-66d7-4a1d-806f-d6cc78c18cc4"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 12 04:10:49.312881 kubelet[2888]: I0312 04:10:49.312793 2888 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d9llj\" (UniqueName: \"kubernetes.io/projected/2091e2aa-66d7-4a1d-806f-d6cc78c18cc4-kube-api-access-d9llj\") pod \"2091e2aa-66d7-4a1d-806f-d6cc78c18cc4\" (UID: \"2091e2aa-66d7-4a1d-806f-d6cc78c18cc4\") "
Mar 12 04:10:49.313349 kubelet[2888]: I0312 04:10:49.312826 2888 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2091e2aa-66d7-4a1d-806f-d6cc78c18cc4-clustermesh-secrets\") pod \"2091e2aa-66d7-4a1d-806f-d6cc78c18cc4\" (UID: \"2091e2aa-66d7-4a1d-806f-d6cc78c18cc4\") "
Mar 12 04:10:49.313349 kubelet[2888]: I0312 04:10:49.313341 2888 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2091e2aa-66d7-4a1d-806f-d6cc78c18cc4-hostproc\") pod \"2091e2aa-66d7-4a1d-806f-d6cc78c18cc4\" (UID: \"2091e2aa-66d7-4a1d-806f-d6cc78c18cc4\") "
Mar 12 04:10:49.313493 kubelet[2888]: I0312 04:10:49.313369 2888 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2091e2aa-66d7-4a1d-806f-d6cc78c18cc4-host-proc-sys-net\") pod \"2091e2aa-66d7-4a1d-806f-d6cc78c18cc4\" (UID: \"2091e2aa-66d7-4a1d-806f-d6cc78c18cc4\") "
Mar 12 04:10:49.313493 kubelet[2888]: I0312 04:10:49.313395 2888 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2091e2aa-66d7-4a1d-806f-d6cc78c18cc4-cilium-cgroup\") pod \"2091e2aa-66d7-4a1d-806f-d6cc78c18cc4\" (UID: \"2091e2aa-66d7-4a1d-806f-d6cc78c18cc4\") "
Mar 12 04:10:49.313493 kubelet[2888]: I0312 04:10:49.313423 2888 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2091e2aa-66d7-4a1d-806f-d6cc78c18cc4-cni-path\") pod \"2091e2aa-66d7-4a1d-806f-d6cc78c18cc4\" (UID: \"2091e2aa-66d7-4a1d-806f-d6cc78c18cc4\") "
Mar 12 04:10:49.313493 kubelet[2888]: I0312 04:10:49.313445 2888 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2091e2aa-66d7-4a1d-806f-d6cc78c18cc4-bpf-maps\") pod \"2091e2aa-66d7-4a1d-806f-d6cc78c18cc4\" (UID: \"2091e2aa-66d7-4a1d-806f-d6cc78c18cc4\") "
Mar 12 04:10:49.313722 kubelet[2888]: I0312 04:10:49.313497 2888 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2091e2aa-66d7-4a1d-806f-d6cc78c18cc4-cilium-config-path\") pod \"2091e2aa-66d7-4a1d-806f-d6cc78c18cc4\" (UID: \"2091e2aa-66d7-4a1d-806f-d6cc78c18cc4\") "
Mar 12 04:10:49.313722 kubelet[2888]: I0312 04:10:49.313527 2888 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2091e2aa-66d7-4a1d-806f-d6cc78c18cc4-hubble-tls\") pod \"2091e2aa-66d7-4a1d-806f-d6cc78c18cc4\" (UID: \"2091e2aa-66d7-4a1d-806f-d6cc78c18cc4\") "
Mar
12 04:10:49.313722 kubelet[2888]: I0312 04:10:49.313553 2888 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2091e2aa-66d7-4a1d-806f-d6cc78c18cc4-lib-modules\") pod \"2091e2aa-66d7-4a1d-806f-d6cc78c18cc4\" (UID: \"2091e2aa-66d7-4a1d-806f-d6cc78c18cc4\") " Mar 12 04:10:49.313722 kubelet[2888]: I0312 04:10:49.313652 2888 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e3f0be40-3113-494c-a695-6ee69107bb34-cilium-config-path\") on node \"srv-faxgs.gb1.brightbox.com\" DevicePath \"\"" Mar 12 04:10:49.313722 kubelet[2888]: I0312 04:10:49.313677 2888 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-j9tfw\" (UniqueName: \"kubernetes.io/projected/e3f0be40-3113-494c-a695-6ee69107bb34-kube-api-access-j9tfw\") on node \"srv-faxgs.gb1.brightbox.com\" DevicePath \"\"" Mar 12 04:10:49.313722 kubelet[2888]: I0312 04:10:49.313697 2888 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2091e2aa-66d7-4a1d-806f-d6cc78c18cc4-cilium-run\") on node \"srv-faxgs.gb1.brightbox.com\" DevicePath \"\"" Mar 12 04:10:49.313722 kubelet[2888]: I0312 04:10:49.313713 2888 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2091e2aa-66d7-4a1d-806f-d6cc78c18cc4-host-proc-sys-kernel\") on node \"srv-faxgs.gb1.brightbox.com\" DevicePath \"\"" Mar 12 04:10:49.314033 kubelet[2888]: I0312 04:10:49.313731 2888 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2091e2aa-66d7-4a1d-806f-d6cc78c18cc4-etc-cni-netd\") on node \"srv-faxgs.gb1.brightbox.com\" DevicePath \"\"" Mar 12 04:10:49.314033 kubelet[2888]: I0312 04:10:49.313748 2888 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/2091e2aa-66d7-4a1d-806f-d6cc78c18cc4-xtables-lock\") on node \"srv-faxgs.gb1.brightbox.com\" DevicePath \"\"" Mar 12 04:10:49.314033 kubelet[2888]: I0312 04:10:49.313779 2888 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2091e2aa-66d7-4a1d-806f-d6cc78c18cc4-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "2091e2aa-66d7-4a1d-806f-d6cc78c18cc4" (UID: "2091e2aa-66d7-4a1d-806f-d6cc78c18cc4"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 12 04:10:49.317387 kubelet[2888]: I0312 04:10:49.317328 2888 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2091e2aa-66d7-4a1d-806f-d6cc78c18cc4-kube-api-access-d9llj" (OuterVolumeSpecName: "kube-api-access-d9llj") pod "2091e2aa-66d7-4a1d-806f-d6cc78c18cc4" (UID: "2091e2aa-66d7-4a1d-806f-d6cc78c18cc4"). InnerVolumeSpecName "kube-api-access-d9llj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 12 04:10:49.317499 kubelet[2888]: I0312 04:10:49.317405 2888 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2091e2aa-66d7-4a1d-806f-d6cc78c18cc4-cni-path" (OuterVolumeSpecName: "cni-path") pod "2091e2aa-66d7-4a1d-806f-d6cc78c18cc4" (UID: "2091e2aa-66d7-4a1d-806f-d6cc78c18cc4"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 12 04:10:49.317499 kubelet[2888]: I0312 04:10:49.317440 2888 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2091e2aa-66d7-4a1d-806f-d6cc78c18cc4-hostproc" (OuterVolumeSpecName: "hostproc") pod "2091e2aa-66d7-4a1d-806f-d6cc78c18cc4" (UID: "2091e2aa-66d7-4a1d-806f-d6cc78c18cc4"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 12 04:10:49.317499 kubelet[2888]: I0312 04:10:49.317487 2888 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2091e2aa-66d7-4a1d-806f-d6cc78c18cc4-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "2091e2aa-66d7-4a1d-806f-d6cc78c18cc4" (UID: "2091e2aa-66d7-4a1d-806f-d6cc78c18cc4"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 12 04:10:49.317704 kubelet[2888]: I0312 04:10:49.317518 2888 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2091e2aa-66d7-4a1d-806f-d6cc78c18cc4-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "2091e2aa-66d7-4a1d-806f-d6cc78c18cc4" (UID: "2091e2aa-66d7-4a1d-806f-d6cc78c18cc4"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 12 04:10:49.319177 kubelet[2888]: I0312 04:10:49.318797 2888 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2091e2aa-66d7-4a1d-806f-d6cc78c18cc4-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "2091e2aa-66d7-4a1d-806f-d6cc78c18cc4" (UID: "2091e2aa-66d7-4a1d-806f-d6cc78c18cc4"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 12 04:10:49.321605 kubelet[2888]: I0312 04:10:49.321540 2888 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2091e2aa-66d7-4a1d-806f-d6cc78c18cc4-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "2091e2aa-66d7-4a1d-806f-d6cc78c18cc4" (UID: "2091e2aa-66d7-4a1d-806f-d6cc78c18cc4"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 12 04:10:49.321708 kubelet[2888]: I0312 04:10:49.321631 2888 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2091e2aa-66d7-4a1d-806f-d6cc78c18cc4-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "2091e2aa-66d7-4a1d-806f-d6cc78c18cc4" (UID: "2091e2aa-66d7-4a1d-806f-d6cc78c18cc4"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 12 04:10:49.322734 kubelet[2888]: I0312 04:10:49.322688 2888 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2091e2aa-66d7-4a1d-806f-d6cc78c18cc4-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "2091e2aa-66d7-4a1d-806f-d6cc78c18cc4" (UID: "2091e2aa-66d7-4a1d-806f-d6cc78c18cc4"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 12 04:10:49.414149 kubelet[2888]: I0312 04:10:49.413998 2888 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2091e2aa-66d7-4a1d-806f-d6cc78c18cc4-bpf-maps\") on node \"srv-faxgs.gb1.brightbox.com\" DevicePath \"\"" Mar 12 04:10:49.414149 kubelet[2888]: I0312 04:10:49.414057 2888 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2091e2aa-66d7-4a1d-806f-d6cc78c18cc4-cilium-config-path\") on node \"srv-faxgs.gb1.brightbox.com\" DevicePath \"\"" Mar 12 04:10:49.414149 kubelet[2888]: I0312 04:10:49.414079 2888 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2091e2aa-66d7-4a1d-806f-d6cc78c18cc4-hubble-tls\") on node \"srv-faxgs.gb1.brightbox.com\" DevicePath \"\"" Mar 12 04:10:49.414149 kubelet[2888]: I0312 04:10:49.414097 2888 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2091e2aa-66d7-4a1d-806f-d6cc78c18cc4-lib-modules\") on node 
\"srv-faxgs.gb1.brightbox.com\" DevicePath \"\"" Mar 12 04:10:49.414149 kubelet[2888]: I0312 04:10:49.414113 2888 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d9llj\" (UniqueName: \"kubernetes.io/projected/2091e2aa-66d7-4a1d-806f-d6cc78c18cc4-kube-api-access-d9llj\") on node \"srv-faxgs.gb1.brightbox.com\" DevicePath \"\"" Mar 12 04:10:49.414149 kubelet[2888]: I0312 04:10:49.414138 2888 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2091e2aa-66d7-4a1d-806f-d6cc78c18cc4-clustermesh-secrets\") on node \"srv-faxgs.gb1.brightbox.com\" DevicePath \"\"" Mar 12 04:10:49.414149 kubelet[2888]: I0312 04:10:49.414155 2888 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2091e2aa-66d7-4a1d-806f-d6cc78c18cc4-hostproc\") on node \"srv-faxgs.gb1.brightbox.com\" DevicePath \"\"" Mar 12 04:10:49.414606 kubelet[2888]: I0312 04:10:49.414172 2888 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2091e2aa-66d7-4a1d-806f-d6cc78c18cc4-host-proc-sys-net\") on node \"srv-faxgs.gb1.brightbox.com\" DevicePath \"\"" Mar 12 04:10:49.414606 kubelet[2888]: I0312 04:10:49.414187 2888 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2091e2aa-66d7-4a1d-806f-d6cc78c18cc4-cilium-cgroup\") on node \"srv-faxgs.gb1.brightbox.com\" DevicePath \"\"" Mar 12 04:10:49.414606 kubelet[2888]: I0312 04:10:49.414202 2888 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2091e2aa-66d7-4a1d-806f-d6cc78c18cc4-cni-path\") on node \"srv-faxgs.gb1.brightbox.com\" DevicePath \"\"" Mar 12 04:10:49.784232 kubelet[2888]: I0312 04:10:49.782601 2888 scope.go:117] "RemoveContainer" containerID="caedbc29638c1d0d58a6287b8afd6ff6803f4c2323db9bf6becba6a19407b118" Mar 12 04:10:49.791346 
containerd[1621]: time="2026-03-12T04:10:49.791303017Z" level=info msg="RemoveContainer for \"caedbc29638c1d0d58a6287b8afd6ff6803f4c2323db9bf6becba6a19407b118\"" Mar 12 04:10:49.799203 containerd[1621]: time="2026-03-12T04:10:49.799146174Z" level=info msg="RemoveContainer for \"caedbc29638c1d0d58a6287b8afd6ff6803f4c2323db9bf6becba6a19407b118\" returns successfully" Mar 12 04:10:49.801686 kubelet[2888]: I0312 04:10:49.801611 2888 scope.go:117] "RemoveContainer" containerID="c6ffae743a3a99223b57d115821300040b4effcbe93aec4f1877fd7d7cae17d5" Mar 12 04:10:49.807316 containerd[1621]: time="2026-03-12T04:10:49.807245296Z" level=info msg="RemoveContainer for \"c6ffae743a3a99223b57d115821300040b4effcbe93aec4f1877fd7d7cae17d5\"" Mar 12 04:10:49.813610 containerd[1621]: time="2026-03-12T04:10:49.813174998Z" level=info msg="RemoveContainer for \"c6ffae743a3a99223b57d115821300040b4effcbe93aec4f1877fd7d7cae17d5\" returns successfully" Mar 12 04:10:49.814991 kubelet[2888]: I0312 04:10:49.814165 2888 scope.go:117] "RemoveContainer" containerID="aab4c0e1ca4509cf182bec9afa8c1d0770c2ff44341bf8371cd11e0022c4a13c" Mar 12 04:10:49.817265 containerd[1621]: time="2026-03-12T04:10:49.817131437Z" level=info msg="RemoveContainer for \"aab4c0e1ca4509cf182bec9afa8c1d0770c2ff44341bf8371cd11e0022c4a13c\"" Mar 12 04:10:49.821246 containerd[1621]: time="2026-03-12T04:10:49.821202014Z" level=info msg="RemoveContainer for \"aab4c0e1ca4509cf182bec9afa8c1d0770c2ff44341bf8371cd11e0022c4a13c\" returns successfully" Mar 12 04:10:49.821533 kubelet[2888]: I0312 04:10:49.821420 2888 scope.go:117] "RemoveContainer" containerID="302e5b6e4e7249aa3dfd5279927fe0b9aed27edeb7fc072526334d07465e4336" Mar 12 04:10:49.824895 containerd[1621]: time="2026-03-12T04:10:49.823897049Z" level=info msg="RemoveContainer for \"302e5b6e4e7249aa3dfd5279927fe0b9aed27edeb7fc072526334d07465e4336\"" Mar 12 04:10:49.828776 containerd[1621]: time="2026-03-12T04:10:49.828215673Z" level=info msg="RemoveContainer for 
\"302e5b6e4e7249aa3dfd5279927fe0b9aed27edeb7fc072526334d07465e4336\" returns successfully" Mar 12 04:10:49.829218 kubelet[2888]: I0312 04:10:49.829081 2888 scope.go:117] "RemoveContainer" containerID="6774863910a881baee677143bca98697f46e05d5ebcb08e9eaa3f519c23b6945" Mar 12 04:10:49.831216 containerd[1621]: time="2026-03-12T04:10:49.831184581Z" level=info msg="RemoveContainer for \"6774863910a881baee677143bca98697f46e05d5ebcb08e9eaa3f519c23b6945\"" Mar 12 04:10:49.834873 containerd[1621]: time="2026-03-12T04:10:49.834776163Z" level=info msg="RemoveContainer for \"6774863910a881baee677143bca98697f46e05d5ebcb08e9eaa3f519c23b6945\" returns successfully" Mar 12 04:10:49.835053 kubelet[2888]: I0312 04:10:49.834983 2888 scope.go:117] "RemoveContainer" containerID="caedbc29638c1d0d58a6287b8afd6ff6803f4c2323db9bf6becba6a19407b118" Mar 12 04:10:49.850276 containerd[1621]: time="2026-03-12T04:10:49.838020810Z" level=error msg="ContainerStatus for \"caedbc29638c1d0d58a6287b8afd6ff6803f4c2323db9bf6becba6a19407b118\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"caedbc29638c1d0d58a6287b8afd6ff6803f4c2323db9bf6becba6a19407b118\": not found" Mar 12 04:10:49.857996 kubelet[2888]: E0312 04:10:49.856990 2888 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"caedbc29638c1d0d58a6287b8afd6ff6803f4c2323db9bf6becba6a19407b118\": not found" containerID="caedbc29638c1d0d58a6287b8afd6ff6803f4c2323db9bf6becba6a19407b118" Mar 12 04:10:49.882590 kubelet[2888]: I0312 04:10:49.857068 2888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"caedbc29638c1d0d58a6287b8afd6ff6803f4c2323db9bf6becba6a19407b118"} err="failed to get container status \"caedbc29638c1d0d58a6287b8afd6ff6803f4c2323db9bf6becba6a19407b118\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"caedbc29638c1d0d58a6287b8afd6ff6803f4c2323db9bf6becba6a19407b118\": not found" Mar 12 04:10:49.883210 kubelet[2888]: I0312 04:10:49.882612 2888 scope.go:117] "RemoveContainer" containerID="c6ffae743a3a99223b57d115821300040b4effcbe93aec4f1877fd7d7cae17d5" Mar 12 04:10:49.883640 containerd[1621]: time="2026-03-12T04:10:49.883490443Z" level=error msg="ContainerStatus for \"c6ffae743a3a99223b57d115821300040b4effcbe93aec4f1877fd7d7cae17d5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c6ffae743a3a99223b57d115821300040b4effcbe93aec4f1877fd7d7cae17d5\": not found" Mar 12 04:10:49.883806 kubelet[2888]: E0312 04:10:49.883764 2888 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c6ffae743a3a99223b57d115821300040b4effcbe93aec4f1877fd7d7cae17d5\": not found" containerID="c6ffae743a3a99223b57d115821300040b4effcbe93aec4f1877fd7d7cae17d5" Mar 12 04:10:49.883864 kubelet[2888]: I0312 04:10:49.883813 2888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c6ffae743a3a99223b57d115821300040b4effcbe93aec4f1877fd7d7cae17d5"} err="failed to get container status \"c6ffae743a3a99223b57d115821300040b4effcbe93aec4f1877fd7d7cae17d5\": rpc error: code = NotFound desc = an error occurred when try to find container \"c6ffae743a3a99223b57d115821300040b4effcbe93aec4f1877fd7d7cae17d5\": not found" Mar 12 04:10:49.883864 kubelet[2888]: I0312 04:10:49.883841 2888 scope.go:117] "RemoveContainer" containerID="aab4c0e1ca4509cf182bec9afa8c1d0770c2ff44341bf8371cd11e0022c4a13c" Mar 12 04:10:49.884242 containerd[1621]: time="2026-03-12T04:10:49.884053365Z" level=error msg="ContainerStatus for \"aab4c0e1ca4509cf182bec9afa8c1d0770c2ff44341bf8371cd11e0022c4a13c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"aab4c0e1ca4509cf182bec9afa8c1d0770c2ff44341bf8371cd11e0022c4a13c\": not found" Mar 12 04:10:49.884299 kubelet[2888]: E0312 04:10:49.884256 2888 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"aab4c0e1ca4509cf182bec9afa8c1d0770c2ff44341bf8371cd11e0022c4a13c\": not found" containerID="aab4c0e1ca4509cf182bec9afa8c1d0770c2ff44341bf8371cd11e0022c4a13c" Mar 12 04:10:49.884354 kubelet[2888]: I0312 04:10:49.884298 2888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"aab4c0e1ca4509cf182bec9afa8c1d0770c2ff44341bf8371cd11e0022c4a13c"} err="failed to get container status \"aab4c0e1ca4509cf182bec9afa8c1d0770c2ff44341bf8371cd11e0022c4a13c\": rpc error: code = NotFound desc = an error occurred when try to find container \"aab4c0e1ca4509cf182bec9afa8c1d0770c2ff44341bf8371cd11e0022c4a13c\": not found" Mar 12 04:10:49.884354 kubelet[2888]: I0312 04:10:49.884324 2888 scope.go:117] "RemoveContainer" containerID="302e5b6e4e7249aa3dfd5279927fe0b9aed27edeb7fc072526334d07465e4336" Mar 12 04:10:49.884773 containerd[1621]: time="2026-03-12T04:10:49.884717658Z" level=error msg="ContainerStatus for \"302e5b6e4e7249aa3dfd5279927fe0b9aed27edeb7fc072526334d07465e4336\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"302e5b6e4e7249aa3dfd5279927fe0b9aed27edeb7fc072526334d07465e4336\": not found" Mar 12 04:10:49.884908 kubelet[2888]: E0312 04:10:49.884878 2888 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"302e5b6e4e7249aa3dfd5279927fe0b9aed27edeb7fc072526334d07465e4336\": not found" containerID="302e5b6e4e7249aa3dfd5279927fe0b9aed27edeb7fc072526334d07465e4336" Mar 12 04:10:49.884982 kubelet[2888]: I0312 04:10:49.884915 2888 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"302e5b6e4e7249aa3dfd5279927fe0b9aed27edeb7fc072526334d07465e4336"} err="failed to get container status \"302e5b6e4e7249aa3dfd5279927fe0b9aed27edeb7fc072526334d07465e4336\": rpc error: code = NotFound desc = an error occurred when try to find container \"302e5b6e4e7249aa3dfd5279927fe0b9aed27edeb7fc072526334d07465e4336\": not found" Mar 12 04:10:49.884982 kubelet[2888]: I0312 04:10:49.884941 2888 scope.go:117] "RemoveContainer" containerID="6774863910a881baee677143bca98697f46e05d5ebcb08e9eaa3f519c23b6945" Mar 12 04:10:49.885315 kubelet[2888]: E0312 04:10:49.885277 2888 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6774863910a881baee677143bca98697f46e05d5ebcb08e9eaa3f519c23b6945\": not found" containerID="6774863910a881baee677143bca98697f46e05d5ebcb08e9eaa3f519c23b6945" Mar 12 04:10:49.885370 containerd[1621]: time="2026-03-12T04:10:49.885133139Z" level=error msg="ContainerStatus for \"6774863910a881baee677143bca98697f46e05d5ebcb08e9eaa3f519c23b6945\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6774863910a881baee677143bca98697f46e05d5ebcb08e9eaa3f519c23b6945\": not found" Mar 12 04:10:49.885423 kubelet[2888]: I0312 04:10:49.885312 2888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6774863910a881baee677143bca98697f46e05d5ebcb08e9eaa3f519c23b6945"} err="failed to get container status \"6774863910a881baee677143bca98697f46e05d5ebcb08e9eaa3f519c23b6945\": rpc error: code = NotFound desc = an error occurred when try to find container \"6774863910a881baee677143bca98697f46e05d5ebcb08e9eaa3f519c23b6945\": not found" Mar 12 04:10:49.885423 kubelet[2888]: I0312 04:10:49.885335 2888 scope.go:117] "RemoveContainer" containerID="1c243fed214260e1730542f68b61666a825921c924e68b8e4aa9decfa5498c76" Mar 12 04:10:49.887131 containerd[1621]: 
time="2026-03-12T04:10:49.887099565Z" level=info msg="RemoveContainer for \"1c243fed214260e1730542f68b61666a825921c924e68b8e4aa9decfa5498c76\"" Mar 12 04:10:49.891073 containerd[1621]: time="2026-03-12T04:10:49.890958749Z" level=info msg="RemoveContainer for \"1c243fed214260e1730542f68b61666a825921c924e68b8e4aa9decfa5498c76\" returns successfully" Mar 12 04:10:49.891231 kubelet[2888]: I0312 04:10:49.891201 2888 scope.go:117] "RemoveContainer" containerID="1c243fed214260e1730542f68b61666a825921c924e68b8e4aa9decfa5498c76" Mar 12 04:10:49.891478 containerd[1621]: time="2026-03-12T04:10:49.891436720Z" level=error msg="ContainerStatus for \"1c243fed214260e1730542f68b61666a825921c924e68b8e4aa9decfa5498c76\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1c243fed214260e1730542f68b61666a825921c924e68b8e4aa9decfa5498c76\": not found" Mar 12 04:10:49.891665 kubelet[2888]: E0312 04:10:49.891620 2888 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1c243fed214260e1730542f68b61666a825921c924e68b8e4aa9decfa5498c76\": not found" containerID="1c243fed214260e1730542f68b61666a825921c924e68b8e4aa9decfa5498c76" Mar 12 04:10:49.891758 kubelet[2888]: I0312 04:10:49.891671 2888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1c243fed214260e1730542f68b61666a825921c924e68b8e4aa9decfa5498c76"} err="failed to get container status \"1c243fed214260e1730542f68b61666a825921c924e68b8e4aa9decfa5498c76\": rpc error: code = NotFound desc = an error occurred when try to find container \"1c243fed214260e1730542f68b61666a825921c924e68b8e4aa9decfa5498c76\": not found" Mar 12 04:10:49.931186 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b4442caf03f5634a5f20ecaa6b4c61e7572261d3e6ffde324a9a0912314a5d2e-rootfs.mount: Deactivated successfully. 
Mar 12 04:10:49.931425 systemd[1]: var-lib-kubelet-pods-e3f0be40\x2d3113\x2d494c\x2da695\x2d6ee69107bb34-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dj9tfw.mount: Deactivated successfully. Mar 12 04:10:49.931647 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-304b179e1afdadacc42fef490fbc35d224fd7b3f9e9dfd4e6367df67bca3688f-rootfs.mount: Deactivated successfully. Mar 12 04:10:49.931846 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-304b179e1afdadacc42fef490fbc35d224fd7b3f9e9dfd4e6367df67bca3688f-shm.mount: Deactivated successfully. Mar 12 04:10:49.932018 systemd[1]: var-lib-kubelet-pods-2091e2aa\x2d66d7\x2d4a1d\x2d806f\x2dd6cc78c18cc4-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dd9llj.mount: Deactivated successfully. Mar 12 04:10:49.932192 systemd[1]: var-lib-kubelet-pods-2091e2aa\x2d66d7\x2d4a1d\x2d806f\x2dd6cc78c18cc4-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Mar 12 04:10:49.932368 systemd[1]: var-lib-kubelet-pods-2091e2aa\x2d66d7\x2d4a1d\x2d806f\x2dd6cc78c18cc4-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Mar 12 04:10:50.908661 sshd[4481]: pam_unix(sshd:session): session closed for user core Mar 12 04:10:50.915487 systemd[1]: sshd@24-10.244.26.218:22-20.161.92.111:40134.service: Deactivated successfully. Mar 12 04:10:50.916039 systemd-logind[1599]: Session 27 logged out. Waiting for processes to exit. Mar 12 04:10:50.920488 systemd[1]: session-27.scope: Deactivated successfully. Mar 12 04:10:50.922989 systemd-logind[1599]: Removed session 27. Mar 12 04:10:51.003940 systemd[1]: Started sshd@25-10.244.26.218:22-20.161.92.111:46382.service - OpenSSH per-connection server daemon (20.161.92.111:46382). 
Mar 12 04:10:51.209969 kubelet[2888]: I0312 04:10:51.209812 2888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2091e2aa-66d7-4a1d-806f-d6cc78c18cc4" path="/var/lib/kubelet/pods/2091e2aa-66d7-4a1d-806f-d6cc78c18cc4/volumes" Mar 12 04:10:51.212735 kubelet[2888]: I0312 04:10:51.212195 2888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3f0be40-3113-494c-a695-6ee69107bb34" path="/var/lib/kubelet/pods/e3f0be40-3113-494c-a695-6ee69107bb34/volumes" Mar 12 04:10:51.431000 kubelet[2888]: E0312 04:10:51.430808 2888 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 12 04:10:51.568599 sshd[4649]: Accepted publickey for core from 20.161.92.111 port 46382 ssh2: RSA SHA256:Og1sBJQhpCaSrAUaqgWUKLRz71/5xOULak8g1URRdac Mar 12 04:10:51.570093 sshd[4649]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 04:10:51.577703 systemd-logind[1599]: New session 28 of user core. Mar 12 04:10:51.581984 systemd[1]: Started session-28.scope - Session 28 of User core. Mar 12 04:10:53.185213 sshd[4649]: pam_unix(sshd:session): session closed for user core Mar 12 04:10:53.195466 systemd-logind[1599]: Session 28 logged out. Waiting for processes to exit. Mar 12 04:10:53.199468 systemd[1]: sshd@25-10.244.26.218:22-20.161.92.111:46382.service: Deactivated successfully. Mar 12 04:10:53.212857 systemd[1]: session-28.scope: Deactivated successfully. Mar 12 04:10:53.216697 systemd-logind[1599]: Removed session 28. 
Mar 12 04:10:53.262833 kubelet[2888]: I0312 04:10:53.262779 2888 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5ce2ead4-00a9-4ec6-af2e-d27b1c298e14-xtables-lock\") pod \"cilium-2wv24\" (UID: \"5ce2ead4-00a9-4ec6-af2e-d27b1c298e14\") " pod="kube-system/cilium-2wv24" Mar 12 04:10:53.266305 kubelet[2888]: I0312 04:10:53.265691 2888 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5ce2ead4-00a9-4ec6-af2e-d27b1c298e14-cilium-config-path\") pod \"cilium-2wv24\" (UID: \"5ce2ead4-00a9-4ec6-af2e-d27b1c298e14\") " pod="kube-system/cilium-2wv24" Mar 12 04:10:53.266305 kubelet[2888]: I0312 04:10:53.265756 2888 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5ce2ead4-00a9-4ec6-af2e-d27b1c298e14-hostproc\") pod \"cilium-2wv24\" (UID: \"5ce2ead4-00a9-4ec6-af2e-d27b1c298e14\") " pod="kube-system/cilium-2wv24" Mar 12 04:10:53.266305 kubelet[2888]: I0312 04:10:53.265788 2888 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5ce2ead4-00a9-4ec6-af2e-d27b1c298e14-cilium-cgroup\") pod \"cilium-2wv24\" (UID: \"5ce2ead4-00a9-4ec6-af2e-d27b1c298e14\") " pod="kube-system/cilium-2wv24" Mar 12 04:10:53.266305 kubelet[2888]: I0312 04:10:53.265820 2888 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5ce2ead4-00a9-4ec6-af2e-d27b1c298e14-hubble-tls\") pod \"cilium-2wv24\" (UID: \"5ce2ead4-00a9-4ec6-af2e-d27b1c298e14\") " pod="kube-system/cilium-2wv24" Mar 12 04:10:53.266305 kubelet[2888]: I0312 04:10:53.265854 2888 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5ce2ead4-00a9-4ec6-af2e-d27b1c298e14-bpf-maps\") pod \"cilium-2wv24\" (UID: \"5ce2ead4-00a9-4ec6-af2e-d27b1c298e14\") " pod="kube-system/cilium-2wv24" Mar 12 04:10:53.266305 kubelet[2888]: I0312 04:10:53.265887 2888 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/5ce2ead4-00a9-4ec6-af2e-d27b1c298e14-cilium-ipsec-secrets\") pod \"cilium-2wv24\" (UID: \"5ce2ead4-00a9-4ec6-af2e-d27b1c298e14\") " pod="kube-system/cilium-2wv24" Mar 12 04:10:53.266722 kubelet[2888]: I0312 04:10:53.265917 2888 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5ce2ead4-00a9-4ec6-af2e-d27b1c298e14-host-proc-sys-kernel\") pod \"cilium-2wv24\" (UID: \"5ce2ead4-00a9-4ec6-af2e-d27b1c298e14\") " pod="kube-system/cilium-2wv24" Mar 12 04:10:53.266722 kubelet[2888]: I0312 04:10:53.265944 2888 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnhlt\" (UniqueName: \"kubernetes.io/projected/5ce2ead4-00a9-4ec6-af2e-d27b1c298e14-kube-api-access-pnhlt\") pod \"cilium-2wv24\" (UID: \"5ce2ead4-00a9-4ec6-af2e-d27b1c298e14\") " pod="kube-system/cilium-2wv24" Mar 12 04:10:53.266722 kubelet[2888]: I0312 04:10:53.265995 2888 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5ce2ead4-00a9-4ec6-af2e-d27b1c298e14-clustermesh-secrets\") pod \"cilium-2wv24\" (UID: \"5ce2ead4-00a9-4ec6-af2e-d27b1c298e14\") " pod="kube-system/cilium-2wv24" Mar 12 04:10:53.266722 kubelet[2888]: I0312 04:10:53.266025 2888 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/5ce2ead4-00a9-4ec6-af2e-d27b1c298e14-cilium-run\") pod \"cilium-2wv24\" (UID: \"5ce2ead4-00a9-4ec6-af2e-d27b1c298e14\") " pod="kube-system/cilium-2wv24" Mar 12 04:10:53.266722 kubelet[2888]: I0312 04:10:53.266057 2888 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5ce2ead4-00a9-4ec6-af2e-d27b1c298e14-etc-cni-netd\") pod \"cilium-2wv24\" (UID: \"5ce2ead4-00a9-4ec6-af2e-d27b1c298e14\") " pod="kube-system/cilium-2wv24" Mar 12 04:10:53.266722 kubelet[2888]: I0312 04:10:53.266102 2888 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5ce2ead4-00a9-4ec6-af2e-d27b1c298e14-lib-modules\") pod \"cilium-2wv24\" (UID: \"5ce2ead4-00a9-4ec6-af2e-d27b1c298e14\") " pod="kube-system/cilium-2wv24" Mar 12 04:10:53.266983 kubelet[2888]: I0312 04:10:53.266138 2888 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5ce2ead4-00a9-4ec6-af2e-d27b1c298e14-host-proc-sys-net\") pod \"cilium-2wv24\" (UID: \"5ce2ead4-00a9-4ec6-af2e-d27b1c298e14\") " pod="kube-system/cilium-2wv24" Mar 12 04:10:53.266983 kubelet[2888]: I0312 04:10:53.266166 2888 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5ce2ead4-00a9-4ec6-af2e-d27b1c298e14-cni-path\") pod \"cilium-2wv24\" (UID: \"5ce2ead4-00a9-4ec6-af2e-d27b1c298e14\") " pod="kube-system/cilium-2wv24" Mar 12 04:10:53.285081 systemd[1]: Started sshd@26-10.244.26.218:22-20.161.92.111:46392.service - OpenSSH per-connection server daemon (20.161.92.111:46392). 
Mar 12 04:10:53.465068 containerd[1621]: time="2026-03-12T04:10:53.464210787Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2wv24,Uid:5ce2ead4-00a9-4ec6-af2e-d27b1c298e14,Namespace:kube-system,Attempt:0,}"
Mar 12 04:10:53.498653 containerd[1621]: time="2026-03-12T04:10:53.498501949Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 12 04:10:53.498880 containerd[1621]: time="2026-03-12T04:10:53.498685149Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 12 04:10:53.498880 containerd[1621]: time="2026-03-12T04:10:53.498720552Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 12 04:10:53.499193 containerd[1621]: time="2026-03-12T04:10:53.498870260Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 12 04:10:53.555375 containerd[1621]: time="2026-03-12T04:10:53.555303641Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2wv24,Uid:5ce2ead4-00a9-4ec6-af2e-d27b1c298e14,Namespace:kube-system,Attempt:0,} returns sandbox id \"86f95a73333f9694f73f2a8d54b965c82cd21665e8d137252fed10d6fbee1d63\""
Mar 12 04:10:53.563339 containerd[1621]: time="2026-03-12T04:10:53.563290211Z" level=info msg="CreateContainer within sandbox \"86f95a73333f9694f73f2a8d54b965c82cd21665e8d137252fed10d6fbee1d63\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Mar 12 04:10:53.575737 containerd[1621]: time="2026-03-12T04:10:53.575670187Z" level=info msg="CreateContainer within sandbox \"86f95a73333f9694f73f2a8d54b965c82cd21665e8d137252fed10d6fbee1d63\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"26106e29823a6bbc88d11d8d6732f55b28fcb80df7684a6512596d1a7d0bdfa7\""
Mar 12 04:10:53.577408 containerd[1621]: time="2026-03-12T04:10:53.577358281Z" level=info msg="StartContainer for \"26106e29823a6bbc88d11d8d6732f55b28fcb80df7684a6512596d1a7d0bdfa7\""
Mar 12 04:10:53.655722 containerd[1621]: time="2026-03-12T04:10:53.655655429Z" level=info msg="StartContainer for \"26106e29823a6bbc88d11d8d6732f55b28fcb80df7684a6512596d1a7d0bdfa7\" returns successfully"
Mar 12 04:10:53.718205 containerd[1621]: time="2026-03-12T04:10:53.717825118Z" level=info msg="shim disconnected" id=26106e29823a6bbc88d11d8d6732f55b28fcb80df7684a6512596d1a7d0bdfa7 namespace=k8s.io
Mar 12 04:10:53.718205 containerd[1621]: time="2026-03-12T04:10:53.717895905Z" level=warning msg="cleaning up after shim disconnected" id=26106e29823a6bbc88d11d8d6732f55b28fcb80df7684a6512596d1a7d0bdfa7 namespace=k8s.io
Mar 12 04:10:53.718205 containerd[1621]: time="2026-03-12T04:10:53.717926510Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 12 04:10:53.819062 containerd[1621]: time="2026-03-12T04:10:53.818927501Z" level=info msg="CreateContainer within sandbox \"86f95a73333f9694f73f2a8d54b965c82cd21665e8d137252fed10d6fbee1d63\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Mar 12 04:10:53.859311 containerd[1621]: time="2026-03-12T04:10:53.859213546Z" level=info msg="CreateContainer within sandbox \"86f95a73333f9694f73f2a8d54b965c82cd21665e8d137252fed10d6fbee1d63\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"561c5d8a2b5f95890bf4dc9e9fe8a2266a31542e5285943ee8400d683d63b78f\""
Mar 12 04:10:53.860592 containerd[1621]: time="2026-03-12T04:10:53.860533044Z" level=info msg="StartContainer for \"561c5d8a2b5f95890bf4dc9e9fe8a2266a31542e5285943ee8400d683d63b78f\""
Mar 12 04:10:53.873591 sshd[4663]: Accepted publickey for core from 20.161.92.111 port 46392 ssh2: RSA SHA256:Og1sBJQhpCaSrAUaqgWUKLRz71/5xOULak8g1URRdac
Mar 12 04:10:53.876652 sshd[4663]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 04:10:53.896725 systemd-logind[1599]: New session 29 of user core.
Mar 12 04:10:53.903539 systemd[1]: Started session-29.scope - Session 29 of User core.
Mar 12 04:10:53.965242 containerd[1621]: time="2026-03-12T04:10:53.965164008Z" level=info msg="StartContainer for \"561c5d8a2b5f95890bf4dc9e9fe8a2266a31542e5285943ee8400d683d63b78f\" returns successfully"
Mar 12 04:10:54.001905 containerd[1621]: time="2026-03-12T04:10:54.001634706Z" level=info msg="shim disconnected" id=561c5d8a2b5f95890bf4dc9e9fe8a2266a31542e5285943ee8400d683d63b78f namespace=k8s.io
Mar 12 04:10:54.001905 containerd[1621]: time="2026-03-12T04:10:54.001799326Z" level=warning msg="cleaning up after shim disconnected" id=561c5d8a2b5f95890bf4dc9e9fe8a2266a31542e5285943ee8400d683d63b78f namespace=k8s.io
Mar 12 04:10:54.001905 containerd[1621]: time="2026-03-12T04:10:54.001820987Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 12 04:10:54.267016 sshd[4663]: pam_unix(sshd:session): session closed for user core
Mar 12 04:10:54.273400 systemd[1]: sshd@26-10.244.26.218:22-20.161.92.111:46392.service: Deactivated successfully.
Mar 12 04:10:54.278482 systemd[1]: session-29.scope: Deactivated successfully.
Mar 12 04:10:54.280201 systemd-logind[1599]: Session 29 logged out. Waiting for processes to exit.
Mar 12 04:10:54.282358 systemd-logind[1599]: Removed session 29.
Mar 12 04:10:54.362864 systemd[1]: Started sshd@27-10.244.26.218:22-20.161.92.111:46400.service - OpenSSH per-connection server daemon (20.161.92.111:46400).
Mar 12 04:10:54.471351 kubelet[2888]: I0312 04:10:54.471280 2888 setters.go:618] "Node became not ready" node="srv-faxgs.gb1.brightbox.com" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T04:10:54Z","lastTransitionTime":"2026-03-12T04:10:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Mar 12 04:10:54.822848 containerd[1621]: time="2026-03-12T04:10:54.822781093Z" level=info msg="CreateContainer within sandbox \"86f95a73333f9694f73f2a8d54b965c82cd21665e8d137252fed10d6fbee1d63\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Mar 12 04:10:54.848661 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1089688463.mount: Deactivated successfully.
Mar 12 04:10:54.853027 containerd[1621]: time="2026-03-12T04:10:54.850549112Z" level=info msg="CreateContainer within sandbox \"86f95a73333f9694f73f2a8d54b965c82cd21665e8d137252fed10d6fbee1d63\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"339c77367b0122d32231f9db46f8a2612e6c5c3587bd40336d4ac836970f6e9a\""
Mar 12 04:10:54.854220 containerd[1621]: time="2026-03-12T04:10:54.854172061Z" level=info msg="StartContainer for \"339c77367b0122d32231f9db46f8a2612e6c5c3587bd40336d4ac836970f6e9a\""
Mar 12 04:10:54.933613 sshd[4839]: Accepted publickey for core from 20.161.92.111 port 46400 ssh2: RSA SHA256:Og1sBJQhpCaSrAUaqgWUKLRz71/5xOULak8g1URRdac
Mar 12 04:10:54.935683 sshd[4839]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 04:10:54.949379 containerd[1621]: time="2026-03-12T04:10:54.949193775Z" level=info msg="StartContainer for \"339c77367b0122d32231f9db46f8a2612e6c5c3587bd40336d4ac836970f6e9a\" returns successfully"
Mar 12 04:10:54.952404 systemd-logind[1599]: New session 30 of user core.
Mar 12 04:10:54.958051 systemd[1]: Started session-30.scope - Session 30 of User core.
Mar 12 04:10:55.000009 containerd[1621]: time="2026-03-12T04:10:54.999897030Z" level=info msg="shim disconnected" id=339c77367b0122d32231f9db46f8a2612e6c5c3587bd40336d4ac836970f6e9a namespace=k8s.io
Mar 12 04:10:55.000009 containerd[1621]: time="2026-03-12T04:10:54.999995906Z" level=warning msg="cleaning up after shim disconnected" id=339c77367b0122d32231f9db46f8a2612e6c5c3587bd40336d4ac836970f6e9a namespace=k8s.io
Mar 12 04:10:55.000009 containerd[1621]: time="2026-03-12T04:10:55.000016356Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 12 04:10:55.385433 systemd[1]: run-containerd-runc-k8s.io-339c77367b0122d32231f9db46f8a2612e6c5c3587bd40336d4ac836970f6e9a-runc.gqFotu.mount: Deactivated successfully.
Mar 12 04:10:55.385712 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-339c77367b0122d32231f9db46f8a2612e6c5c3587bd40336d4ac836970f6e9a-rootfs.mount: Deactivated successfully.
Mar 12 04:10:55.824350 containerd[1621]: time="2026-03-12T04:10:55.824294472Z" level=info msg="CreateContainer within sandbox \"86f95a73333f9694f73f2a8d54b965c82cd21665e8d137252fed10d6fbee1d63\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Mar 12 04:10:55.848803 containerd[1621]: time="2026-03-12T04:10:55.848748878Z" level=info msg="CreateContainer within sandbox \"86f95a73333f9694f73f2a8d54b965c82cd21665e8d137252fed10d6fbee1d63\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"6b74734d0f20c70627cca05488ffde29f748576d0b97654f4c701439721ae4b1\""
Mar 12 04:10:55.850659 containerd[1621]: time="2026-03-12T04:10:55.849771330Z" level=info msg="StartContainer for \"6b74734d0f20c70627cca05488ffde29f748576d0b97654f4c701439721ae4b1\""
Mar 12 04:10:55.935903 containerd[1621]: time="2026-03-12T04:10:55.935853575Z" level=info msg="StartContainer for \"6b74734d0f20c70627cca05488ffde29f748576d0b97654f4c701439721ae4b1\" returns successfully"
Mar 12 04:10:55.962048 containerd[1621]: time="2026-03-12T04:10:55.961749057Z" level=info msg="shim disconnected" id=6b74734d0f20c70627cca05488ffde29f748576d0b97654f4c701439721ae4b1 namespace=k8s.io
Mar 12 04:10:55.962048 containerd[1621]: time="2026-03-12T04:10:55.961831894Z" level=warning msg="cleaning up after shim disconnected" id=6b74734d0f20c70627cca05488ffde29f748576d0b97654f4c701439721ae4b1 namespace=k8s.io
Mar 12 04:10:55.962048 containerd[1621]: time="2026-03-12T04:10:55.961862779Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 12 04:10:56.383433 systemd[1]: run-containerd-runc-k8s.io-6b74734d0f20c70627cca05488ffde29f748576d0b97654f4c701439721ae4b1-runc.K3DAVs.mount: Deactivated successfully.
Mar 12 04:10:56.383712 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6b74734d0f20c70627cca05488ffde29f748576d0b97654f4c701439721ae4b1-rootfs.mount: Deactivated successfully.
Mar 12 04:10:56.431915 kubelet[2888]: E0312 04:10:56.431821 2888 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Mar 12 04:10:56.833995 containerd[1621]: time="2026-03-12T04:10:56.833909349Z" level=info msg="CreateContainer within sandbox \"86f95a73333f9694f73f2a8d54b965c82cd21665e8d137252fed10d6fbee1d63\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 12 04:10:56.858868 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount227372244.mount: Deactivated successfully.
Mar 12 04:10:56.860519 containerd[1621]: time="2026-03-12T04:10:56.860470766Z" level=info msg="CreateContainer within sandbox \"86f95a73333f9694f73f2a8d54b965c82cd21665e8d137252fed10d6fbee1d63\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"f10901b9add2c33b843e7614bdc0ad8b92057420d570e65687aa7352e46f770e\""
Mar 12 04:10:56.861446 containerd[1621]: time="2026-03-12T04:10:56.861402332Z" level=info msg="StartContainer for \"f10901b9add2c33b843e7614bdc0ad8b92057420d570e65687aa7352e46f770e\""
Mar 12 04:10:56.948100 containerd[1621]: time="2026-03-12T04:10:56.948052116Z" level=info msg="StartContainer for \"f10901b9add2c33b843e7614bdc0ad8b92057420d570e65687aa7352e46f770e\" returns successfully"
Mar 12 04:10:57.386110 systemd[1]: run-containerd-runc-k8s.io-f10901b9add2c33b843e7614bdc0ad8b92057420d570e65687aa7352e46f770e-runc.bw5Urf.mount: Deactivated successfully.
Mar 12 04:10:57.715962 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Mar 12 04:11:00.018386 systemd[1]: run-containerd-runc-k8s.io-f10901b9add2c33b843e7614bdc0ad8b92057420d570e65687aa7352e46f770e-runc.ObsGcj.mount: Deactivated successfully.
Mar 12 04:11:01.671269 systemd-networkd[1260]: lxc_health: Link UP
Mar 12 04:11:01.685213 systemd-networkd[1260]: lxc_health: Gained carrier
Mar 12 04:11:03.043838 systemd-networkd[1260]: lxc_health: Gained IPv6LL
Mar 12 04:11:03.521794 kubelet[2888]: I0312 04:11:03.521603 2888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-2wv24" podStartSLOduration=10.521578293 podStartE2EDuration="10.521578293s" podCreationTimestamp="2026-03-12 04:10:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 04:10:57.864026626 +0000 UTC m=+136.980269937" watchObservedRunningTime="2026-03-12 04:11:03.521578293 +0000 UTC m=+142.637821595"
Mar 12 04:11:06.762743 systemd[1]: run-containerd-runc-k8s.io-f10901b9add2c33b843e7614bdc0ad8b92057420d570e65687aa7352e46f770e-runc.fmPwPX.mount: Deactivated successfully.
Mar 12 04:11:09.182757 sshd[4839]: pam_unix(sshd:session): session closed for user core
Mar 12 04:11:09.192197 systemd[1]: sshd@27-10.244.26.218:22-20.161.92.111:46400.service: Deactivated successfully.
Mar 12 04:11:09.202065 systemd[1]: session-30.scope: Deactivated successfully.
Mar 12 04:11:09.204195 systemd-logind[1599]: Session 30 logged out. Waiting for processes to exit.
Mar 12 04:11:09.208243 systemd-logind[1599]: Removed session 30.