Apr 13 20:11:12.037268 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Apr 13 18:40:27 -00 2026
Apr 13 20:11:12.037302 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=c1ba97db2f6278922cfc5bd0ca74b4bb573fca2c3aed19c121a34271e693e156
Apr 13 20:11:12.037316 kernel: BIOS-provided physical RAM map:
Apr 13 20:11:12.037355 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Apr 13 20:11:12.037365 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Apr 13 20:11:12.037386 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Apr 13 20:11:12.037398 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable
Apr 13 20:11:12.037409 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved
Apr 13 20:11:12.037419 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Apr 13 20:11:12.037429 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Apr 13 20:11:12.037439 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Apr 13 20:11:12.037449 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Apr 13 20:11:12.037465 kernel: NX (Execute Disable) protection: active
Apr 13 20:11:12.037476 kernel: APIC: Static calls initialized
Apr 13 20:11:12.037488 kernel: SMBIOS 2.8 present.
Apr 13 20:11:12.037500 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014
Apr 13 20:11:12.037511 kernel: Hypervisor detected: KVM
Apr 13 20:11:12.037527 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Apr 13 20:11:12.037539 kernel: kvm-clock: using sched offset of 4681568956 cycles
Apr 13 20:11:12.037551 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Apr 13 20:11:12.037562 kernel: tsc: Detected 2499.998 MHz processor
Apr 13 20:11:12.037574 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 13 20:11:12.037586 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 13 20:11:12.037597 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000
Apr 13 20:11:12.037608 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Apr 13 20:11:12.037619 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 13 20:11:12.037636 kernel: Using GB pages for direct mapping
Apr 13 20:11:12.037647 kernel: ACPI: Early table checksum verification disabled
Apr 13 20:11:12.037659 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Apr 13 20:11:12.037670 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 13 20:11:12.037681 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 13 20:11:12.037693 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 13 20:11:12.037704 kernel: ACPI: FACS 0x000000007FFDFD40 000040
Apr 13 20:11:12.037715 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 13 20:11:12.037726 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 13 20:11:12.037742 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 13 20:11:12.037754 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 13 20:11:12.037765 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480]
Apr 13 20:11:12.037777 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c]
Apr 13 20:11:12.037788 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f]
Apr 13 20:11:12.037806 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570]
Apr 13 20:11:12.037818 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740]
Apr 13 20:11:12.037835 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c]
Apr 13 20:11:12.037847 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4]
Apr 13 20:11:12.037858 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Apr 13 20:11:12.037870 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Apr 13 20:11:12.037882 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0
Apr 13 20:11:12.037894 kernel: SRAT: PXM 0 -> APIC 0x03 -> Node 0
Apr 13 20:11:12.037905 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0
Apr 13 20:11:12.037917 kernel: SRAT: PXM 0 -> APIC 0x05 -> Node 0
Apr 13 20:11:12.037934 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0
Apr 13 20:11:12.037946 kernel: SRAT: PXM 0 -> APIC 0x07 -> Node 0
Apr 13 20:11:12.037957 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0
Apr 13 20:11:12.037969 kernel: SRAT: PXM 0 -> APIC 0x09 -> Node 0
Apr 13 20:11:12.037981 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0
Apr 13 20:11:12.037992 kernel: SRAT: PXM 0 -> APIC 0x0b -> Node 0
Apr 13 20:11:12.038004 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0
Apr 13 20:11:12.038015 kernel: SRAT: PXM 0 -> APIC 0x0d -> Node 0
Apr 13 20:11:12.038027 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0
Apr 13 20:11:12.038044 kernel: SRAT: PXM 0 -> APIC 0x0f -> Node 0
Apr 13 20:11:12.038056 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Apr 13 20:11:12.038068 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Apr 13 20:11:12.038079 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug
Apr 13 20:11:12.038091 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00000000-0x7ffdbfff]
Apr 13 20:11:12.038103 kernel: NODE_DATA(0) allocated [mem 0x7ffd6000-0x7ffdbfff]
Apr 13 20:11:12.038115 kernel: Zone ranges:
Apr 13 20:11:12.038127 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 13 20:11:12.038139 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff]
Apr 13 20:11:12.038156 kernel: Normal empty
Apr 13 20:11:12.038168 kernel: Movable zone start for each node
Apr 13 20:11:12.038179 kernel: Early memory node ranges
Apr 13 20:11:12.038191 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Apr 13 20:11:12.038203 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff]
Apr 13 20:11:12.038215 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff]
Apr 13 20:11:12.038226 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 13 20:11:12.038238 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Apr 13 20:11:12.038250 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges
Apr 13 20:11:12.038262 kernel: ACPI: PM-Timer IO Port: 0x608
Apr 13 20:11:12.038278 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Apr 13 20:11:12.038290 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Apr 13 20:11:12.038302 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Apr 13 20:11:12.038314 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Apr 13 20:11:12.041361 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 13 20:11:12.041391 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Apr 13 20:11:12.041404 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Apr 13 20:11:12.041416 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 13 20:11:12.041428 kernel: TSC deadline timer available
Apr 13 20:11:12.041449 kernel: smpboot: Allowing 16 CPUs, 14 hotplug CPUs
Apr 13 20:11:12.041461 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Apr 13 20:11:12.041473 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Apr 13 20:11:12.041485 kernel: Booting paravirtualized kernel on KVM
Apr 13 20:11:12.041497 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 13 20:11:12.041510 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1
Apr 13 20:11:12.041522 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u262144
Apr 13 20:11:12.041533 kernel: pcpu-alloc: s196328 r8192 d28952 u262144 alloc=1*2097152
Apr 13 20:11:12.041545 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Apr 13 20:11:12.041563 kernel: kvm-guest: PV spinlocks enabled
Apr 13 20:11:12.041575 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Apr 13 20:11:12.041589 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=c1ba97db2f6278922cfc5bd0ca74b4bb573fca2c3aed19c121a34271e693e156
Apr 13 20:11:12.041601 kernel: random: crng init done
Apr 13 20:11:12.041613 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 13 20:11:12.041625 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Apr 13 20:11:12.041637 kernel: Fallback order for Node 0: 0
Apr 13 20:11:12.041649 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515804
Apr 13 20:11:12.041666 kernel: Policy zone: DMA32
Apr 13 20:11:12.041678 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 13 20:11:12.041690 kernel: software IO TLB: area num 16.
Apr 13 20:11:12.041702 kernel: Memory: 1901596K/2096616K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42896K init, 2300K bss, 194760K reserved, 0K cma-reserved)
Apr 13 20:11:12.041714 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Apr 13 20:11:12.041726 kernel: Kernel/User page tables isolation: enabled
Apr 13 20:11:12.041738 kernel: ftrace: allocating 37996 entries in 149 pages
Apr 13 20:11:12.041750 kernel: ftrace: allocated 149 pages with 4 groups
Apr 13 20:11:12.041761 kernel: Dynamic Preempt: voluntary
Apr 13 20:11:12.041778 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 13 20:11:12.041791 kernel: rcu: RCU event tracing is enabled.
Apr 13 20:11:12.041804 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Apr 13 20:11:12.041816 kernel: Trampoline variant of Tasks RCU enabled.
Apr 13 20:11:12.041828 kernel: Rude variant of Tasks RCU enabled.
Apr 13 20:11:12.041853 kernel: Tracing variant of Tasks RCU enabled.
Apr 13 20:11:12.041870 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 13 20:11:12.041883 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Apr 13 20:11:12.041895 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16
Apr 13 20:11:12.041908 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 13 20:11:12.041920 kernel: Console: colour VGA+ 80x25
Apr 13 20:11:12.041933 kernel: printk: console [tty0] enabled
Apr 13 20:11:12.041950 kernel: printk: console [ttyS0] enabled
Apr 13 20:11:12.041963 kernel: ACPI: Core revision 20230628
Apr 13 20:11:12.041976 kernel: APIC: Switch to symmetric I/O mode setup
Apr 13 20:11:12.041988 kernel: x2apic enabled
Apr 13 20:11:12.042001 kernel: APIC: Switched APIC routing to: physical x2apic
Apr 13 20:11:12.042018 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Apr 13 20:11:12.042032 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998)
Apr 13 20:11:12.042044 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Apr 13 20:11:12.042057 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Apr 13 20:11:12.042069 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Apr 13 20:11:12.042082 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 13 20:11:12.042094 kernel: Spectre V2 : Mitigation: Retpolines
Apr 13 20:11:12.042106 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Apr 13 20:11:12.042118 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Apr 13 20:11:12.042131 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Apr 13 20:11:12.042148 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Apr 13 20:11:12.042161 kernel: MDS: Mitigation: Clear CPU buffers
Apr 13 20:11:12.042173 kernel: MMIO Stale Data: Unknown: No mitigations
Apr 13 20:11:12.042185 kernel: SRBDS: Unknown: Dependent on hypervisor status
Apr 13 20:11:12.042197 kernel: active return thunk: its_return_thunk
Apr 13 20:11:12.042210 kernel: ITS: Mitigation: Aligned branch/return thunks
Apr 13 20:11:12.042222 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 13 20:11:12.042234 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 13 20:11:12.042247 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 13 20:11:12.042259 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 13 20:11:12.042272 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Apr 13 20:11:12.042289 kernel: Freeing SMP alternatives memory: 32K
Apr 13 20:11:12.042301 kernel: pid_max: default: 32768 minimum: 301
Apr 13 20:11:12.042314 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 13 20:11:12.042345 kernel: landlock: Up and running.
Apr 13 20:11:12.042358 kernel: SELinux: Initializing.
Apr 13 20:11:12.042380 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Apr 13 20:11:12.042395 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Apr 13 20:11:12.042407 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9)
Apr 13 20:11:12.042420 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Apr 13 20:11:12.042433 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Apr 13 20:11:12.042445 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Apr 13 20:11:12.042465 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only.
Apr 13 20:11:12.042478 kernel: signal: max sigframe size: 1776
Apr 13 20:11:12.042491 kernel: rcu: Hierarchical SRCU implementation.
Apr 13 20:11:12.042503 kernel: rcu: Max phase no-delay instances is 400.
Apr 13 20:11:12.042516 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Apr 13 20:11:12.042529 kernel: smp: Bringing up secondary CPUs ...
Apr 13 20:11:12.042541 kernel: smpboot: x86: Booting SMP configuration:
Apr 13 20:11:12.042554 kernel: .... node #0, CPUs: #1
Apr 13 20:11:12.042566 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1
Apr 13 20:11:12.042584 kernel: smp: Brought up 1 node, 2 CPUs
Apr 13 20:11:12.042597 kernel: smpboot: Max logical packages: 16
Apr 13 20:11:12.042609 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS)
Apr 13 20:11:12.042622 kernel: devtmpfs: initialized
Apr 13 20:11:12.042634 kernel: x86/mm: Memory block size: 128MB
Apr 13 20:11:12.042647 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 13 20:11:12.042660 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Apr 13 20:11:12.042672 kernel: pinctrl core: initialized pinctrl subsystem
Apr 13 20:11:12.042685 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 13 20:11:12.042702 kernel: audit: initializing netlink subsys (disabled)
Apr 13 20:11:12.042715 kernel: audit: type=2000 audit(1776111071.215:1): state=initialized audit_enabled=0 res=1
Apr 13 20:11:12.042727 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 13 20:11:12.042740 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 13 20:11:12.042752 kernel: cpuidle: using governor menu
Apr 13 20:11:12.042764 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 13 20:11:12.042777 kernel: dca service started, version 1.12.1
Apr 13 20:11:12.042789 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Apr 13 20:11:12.042802 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Apr 13 20:11:12.042819 kernel: PCI: Using configuration type 1 for base access
Apr 13 20:11:12.042832 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 13 20:11:12.042845 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 13 20:11:12.042857 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 13 20:11:12.042870 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 13 20:11:12.042883 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 13 20:11:12.042895 kernel: ACPI: Added _OSI(Module Device)
Apr 13 20:11:12.042907 kernel: ACPI: Added _OSI(Processor Device)
Apr 13 20:11:12.042925 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 13 20:11:12.042938 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 13 20:11:12.042950 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Apr 13 20:11:12.042963 kernel: ACPI: Interpreter enabled
Apr 13 20:11:12.042975 kernel: ACPI: PM: (supports S0 S5)
Apr 13 20:11:12.042988 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 13 20:11:12.043001 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 13 20:11:12.043013 kernel: PCI: Using E820 reservations for host bridge windows
Apr 13 20:11:12.043026 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Apr 13 20:11:12.043039 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 13 20:11:12.045369 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 13 20:11:12.045620 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Apr 13 20:11:12.045804 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Apr 13 20:11:12.045825 kernel: PCI host bridge to bus 0000:00
Apr 13 20:11:12.046009 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Apr 13 20:11:12.046172 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Apr 13 20:11:12.046362 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 13 20:11:12.046539 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Apr 13 20:11:12.046697 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Apr 13 20:11:12.046854 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window]
Apr 13 20:11:12.047010 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 13 20:11:12.047216 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Apr 13 20:11:12.049554 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000
Apr 13 20:11:12.049771 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfa000000-0xfbffffff pref]
Apr 13 20:11:12.049952 kernel: pci 0000:00:01.0: reg 0x14: [mem 0xfea50000-0xfea50fff]
Apr 13 20:11:12.050141 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea40000-0xfea4ffff pref]
Apr 13 20:11:12.050392 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Apr 13 20:11:12.050614 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Apr 13 20:11:12.050792 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea51000-0xfea51fff]
Apr 13 20:11:12.051002 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Apr 13 20:11:12.051179 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea52000-0xfea52fff]
Apr 13 20:11:12.053049 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Apr 13 20:11:12.053252 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea53000-0xfea53fff]
Apr 13 20:11:12.054546 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Apr 13 20:11:12.054729 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea54000-0xfea54fff]
Apr 13 20:11:12.054927 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Apr 13 20:11:12.055107 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea55000-0xfea55fff]
Apr 13 20:11:12.055295 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Apr 13 20:11:12.055506 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea56000-0xfea56fff]
Apr 13 20:11:12.055693 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Apr 13 20:11:12.055870 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea57000-0xfea57fff]
Apr 13 20:11:12.056064 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Apr 13 20:11:12.056241 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea58000-0xfea58fff]
Apr 13 20:11:12.058499 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Apr 13 20:11:12.058679 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0c0-0xc0df]
Apr 13 20:11:12.058854 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfea59000-0xfea59fff]
Apr 13 20:11:12.059028 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref]
Apr 13 20:11:12.059203 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfea00000-0xfea3ffff pref]
Apr 13 20:11:12.060449 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Apr 13 20:11:12.060627 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Apr 13 20:11:12.060798 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfea5a000-0xfea5afff]
Apr 13 20:11:12.060969 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfd004000-0xfd007fff 64bit pref]
Apr 13 20:11:12.061152 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Apr 13 20:11:12.063353 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Apr 13 20:11:12.063560 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Apr 13 20:11:12.063742 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0e0-0xc0ff]
Apr 13 20:11:12.063912 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea5b000-0xfea5bfff]
Apr 13 20:11:12.064093 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Apr 13 20:11:12.064264 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Apr 13 20:11:12.064490 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400
Apr 13 20:11:12.064671 kernel: pci 0000:01:00.0: reg 0x10: [mem 0xfda00000-0xfda000ff 64bit]
Apr 13 20:11:12.064858 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Apr 13 20:11:12.065032 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Apr 13 20:11:12.065206 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Apr 13 20:11:12.067465 kernel: pci_bus 0000:02: extended config space not accessible
Apr 13 20:11:12.067670 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000
Apr 13 20:11:12.067856 kernel: pci 0000:02:01.0: reg 0x10: [mem 0xfd800000-0xfd80000f]
Apr 13 20:11:12.068044 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Apr 13 20:11:12.068221 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Apr 13 20:11:12.068488 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330
Apr 13 20:11:12.068668 kernel: pci 0000:03:00.0: reg 0x10: [mem 0xfe800000-0xfe803fff 64bit]
Apr 13 20:11:12.068842 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Apr 13 20:11:12.069011 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Apr 13 20:11:12.069182 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Apr 13 20:11:12.070426 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00
Apr 13 20:11:12.070622 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref]
Apr 13 20:11:12.070801 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Apr 13 20:11:12.070974 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Apr 13 20:11:12.071145 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Apr 13 20:11:12.071341 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Apr 13 20:11:12.071530 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Apr 13 20:11:12.071700 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Apr 13 20:11:12.071882 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Apr 13 20:11:12.072051 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Apr 13 20:11:12.072220 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Apr 13 20:11:12.075437 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Apr 13 20:11:12.075611 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Apr 13 20:11:12.075782 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Apr 13 20:11:12.075978 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Apr 13 20:11:12.076147 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Apr 13 20:11:12.076328 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Apr 13 20:11:12.076553 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Apr 13 20:11:12.076739 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Apr 13 20:11:12.076946 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Apr 13 20:11:12.076965 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Apr 13 20:11:12.076978 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Apr 13 20:11:12.077001 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Apr 13 20:11:12.077014 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Apr 13 20:11:12.077027 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Apr 13 20:11:12.077048 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Apr 13 20:11:12.077061 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Apr 13 20:11:12.077074 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Apr 13 20:11:12.077086 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Apr 13 20:11:12.077099 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Apr 13 20:11:12.077112 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Apr 13 20:11:12.077125 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Apr 13 20:11:12.077137 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Apr 13 20:11:12.077150 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Apr 13 20:11:12.077168 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Apr 13 20:11:12.077181 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Apr 13 20:11:12.077194 kernel: iommu: Default domain type: Translated
Apr 13 20:11:12.077206 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 13 20:11:12.077219 kernel: PCI: Using ACPI for IRQ routing
Apr 13 20:11:12.077232 kernel: PCI: pci_cache_line_size set to 64 bytes
Apr 13 20:11:12.077244 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Apr 13 20:11:12.077257 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff]
Apr 13 20:11:12.079493 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Apr 13 20:11:12.079770 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Apr 13 20:11:12.080017 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Apr 13 20:11:12.080038 kernel: vgaarb: loaded
Apr 13 20:11:12.080051 kernel: clocksource: Switched to clocksource kvm-clock
Apr 13 20:11:12.080063 kernel: VFS: Disk quotas dquot_6.6.0
Apr 13 20:11:12.080089 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 13 20:11:12.080109 kernel: pnp: PnP ACPI init
Apr 13 20:11:12.080351 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Apr 13 20:11:12.080394 kernel: pnp: PnP ACPI: found 5 devices
Apr 13 20:11:12.080408 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 13 20:11:12.080421 kernel: NET: Registered PF_INET protocol family
Apr 13 20:11:12.080434 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 13 20:11:12.080447 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Apr 13 20:11:12.080460 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 13 20:11:12.080473 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Apr 13 20:11:12.080485 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Apr 13 20:11:12.080504 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Apr 13 20:11:12.080517 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Apr 13 20:11:12.080530 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Apr 13 20:11:12.080543 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 13 20:11:12.080556 kernel: NET: Registered PF_XDP protocol family
Apr 13 20:11:12.080744 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000
Apr 13 20:11:12.080921 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Apr 13 20:11:12.081108 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Apr 13 20:11:12.081303 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Apr 13 20:11:12.083546 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Apr 13 20:11:12.083721 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Apr 13 20:11:12.083891 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Apr 13 20:11:12.084062 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Apr 13 20:11:12.084232 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Apr 13 20:11:12.084491 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Apr 13 20:11:12.084663 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Apr 13 20:11:12.084833 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Apr 13 20:11:12.085001 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Apr 13 20:11:12.085169 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Apr 13 20:11:12.087365 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Apr 13 20:11:12.087555 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Apr 13 20:11:12.087735 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Apr 13 20:11:12.087941 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Apr 13 20:11:12.088112 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Apr 13 20:11:12.088283 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Apr 13 20:11:12.088487 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Apr 13 20:11:12.088661 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Apr 13 20:11:12.088833 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Apr 13 20:11:12.089004 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Apr 13 20:11:12.089176 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Apr 13 20:11:12.089452 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Apr 13 20:11:12.089664 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Apr 13 20:11:12.089843 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Apr 13 20:11:12.090019 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Apr 13 20:11:12.090204 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Apr 13 20:11:12.090432 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Apr 13 20:11:12.090617 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Apr 13 20:11:12.090789 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Apr 13 20:11:12.090961 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Apr 13 20:11:12.091135 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Apr 13 20:11:12.091308 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Apr 13 20:11:12.091543 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Apr 13 20:11:12.091714 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Apr 13 20:11:12.091884 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Apr 13 20:11:12.092056 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Apr 13 20:11:12.092257 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Apr 13 20:11:12.092465 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Apr 13 20:11:12.092658 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Apr 13 20:11:12.092841 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Apr 13 20:11:12.093053 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Apr 13 20:11:12.093236 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Apr 13 20:11:12.093453 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Apr 13 20:11:12.093623 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Apr 13 20:11:12.093793 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Apr 13 20:11:12.093963 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Apr 13 20:11:12.094123 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Apr 13 20:11:12.094278 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Apr 13 20:11:12.094468 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Apr 13 20:11:12.094630 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Apr 13 20:11:12.094792 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Apr 13 20:11:12.094945 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window]
Apr 13 20:11:12.095130 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Apr 13 20:11:12.095293 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff]
Apr 13 20:11:12.095535 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Apr 13 20:11:12.095710 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff]
Apr 13 20:11:12.095883 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff]
Apr 13 20:11:12.096054 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff]
Apr 13 20:11:12.096214 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Apr 13 20:11:12.096415 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff]
Apr 13 20:11:12.096582 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff]
Apr 13 20:11:12.096744 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Apr 13 20:11:12.096933 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff]
Apr 13 20:11:12.097105 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff]
Apr 13 20:11:12.097267 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Apr 13 20:11:12.097560 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff]
Apr 13 20:11:12.097724 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff]
Apr 13 20:11:12.097885 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Apr 13 20:11:12.098055 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff]
Apr 13 20:11:12.098215 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff]
Apr 13 20:11:12.098423 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref]
Apr 13 20:11:12.098606 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff]
Apr 13 20:11:12.098768 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff]
Apr 13 20:11:12.098928 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref]
Apr 13 20:11:12.099098 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff]
Apr 13 20:11:12.099259 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff]
Apr 13 20:11:12.099507 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Apr 13 20:11:12.099537 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Apr 13 20:11:12.099552 kernel: PCI: CLS 0 bytes, default 64
Apr 13 20:11:12.099566 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Apr 13 20:11:12.099579 kernel: software IO TLB: mapped [mem
0x0000000079800000-0x000000007d800000] (64MB) Apr 13 20:11:12.099593 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Apr 13 20:11:12.099606 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns Apr 13 20:11:12.099620 kernel: Initialise system trusted keyrings Apr 13 20:11:12.099633 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Apr 13 20:11:12.099647 kernel: Key type asymmetric registered Apr 13 20:11:12.099666 kernel: Asymmetric key parser 'x509' registered Apr 13 20:11:12.099679 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Apr 13 20:11:12.099692 kernel: io scheduler mq-deadline registered Apr 13 20:11:12.099705 kernel: io scheduler kyber registered Apr 13 20:11:12.099719 kernel: io scheduler bfq registered Apr 13 20:11:12.099890 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Apr 13 20:11:12.100064 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Apr 13 20:11:12.100235 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 13 20:11:12.100444 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Apr 13 20:11:12.100613 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Apr 13 20:11:12.100784 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 13 20:11:12.100953 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Apr 13 20:11:12.101123 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Apr 13 20:11:12.101294 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 13 20:11:12.101524 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Apr 13 20:11:12.101696 kernel: pcieport 0000:00:02.3: AER: enabled 
with IRQ 27 Apr 13 20:11:12.101865 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 13 20:11:12.102035 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Apr 13 20:11:12.102204 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Apr 13 20:11:12.102408 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 13 20:11:12.102590 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Apr 13 20:11:12.102761 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Apr 13 20:11:12.102932 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 13 20:11:12.103104 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Apr 13 20:11:12.103274 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Apr 13 20:11:12.103504 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 13 20:11:12.103687 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Apr 13 20:11:12.103857 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Apr 13 20:11:12.104027 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 13 20:11:12.104048 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Apr 13 20:11:12.104063 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Apr 13 20:11:12.104077 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Apr 13 20:11:12.104090 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Apr 13 20:11:12.104114 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Apr 13 20:11:12.104128 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 
0x60,0x64 irq 1,12 Apr 13 20:11:12.104142 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Apr 13 20:11:12.104155 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Apr 13 20:11:12.104168 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Apr 13 20:11:12.104390 kernel: rtc_cmos 00:03: RTC can wake from S4 Apr 13 20:11:12.104559 kernel: rtc_cmos 00:03: registered as rtc0 Apr 13 20:11:12.104719 kernel: rtc_cmos 00:03: setting system clock to 2026-04-13T20:11:11 UTC (1776111071) Apr 13 20:11:12.104887 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Apr 13 20:11:12.104907 kernel: intel_pstate: CPU model not supported Apr 13 20:11:12.104921 kernel: NET: Registered PF_INET6 protocol family Apr 13 20:11:12.104934 kernel: Segment Routing with IPv6 Apr 13 20:11:12.104948 kernel: In-situ OAM (IOAM) with IPv6 Apr 13 20:11:12.104961 kernel: NET: Registered PF_PACKET protocol family Apr 13 20:11:12.104975 kernel: Key type dns_resolver registered Apr 13 20:11:12.104988 kernel: IPI shorthand broadcast: enabled Apr 13 20:11:12.105001 kernel: sched_clock: Marking stable (1249003827, 230431619)->(1618249643, -138814197) Apr 13 20:11:12.105023 kernel: registered taskstats version 1 Apr 13 20:11:12.105036 kernel: Loading compiled-in X.509 certificates Apr 13 20:11:12.105049 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 51221ce98a81ccf90ef3d16403b42695603c5d00' Apr 13 20:11:12.105063 kernel: Key type .fscrypt registered Apr 13 20:11:12.105076 kernel: Key type fscrypt-provisioning registered Apr 13 20:11:12.105089 kernel: ima: No TPM chip found, activating TPM-bypass! 
Apr 13 20:11:12.105102 kernel: ima: Allocated hash algorithm: sha1
Apr 13 20:11:12.105115 kernel: ima: No architecture policies found
Apr 13 20:11:12.105128 kernel: clk: Disabling unused clocks
Apr 13 20:11:12.105147 kernel: Freeing unused kernel image (initmem) memory: 42896K
Apr 13 20:11:12.105161 kernel: Write protecting the kernel read-only data: 36864k
Apr 13 20:11:12.105174 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Apr 13 20:11:12.105187 kernel: Run /init as init process
Apr 13 20:11:12.105200 kernel: with arguments:
Apr 13 20:11:12.105214 kernel: /init
Apr 13 20:11:12.105226 kernel: with environment:
Apr 13 20:11:12.105239 kernel: HOME=/
Apr 13 20:11:12.105252 kernel: TERM=linux
Apr 13 20:11:12.105273 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 13 20:11:12.105290 systemd[1]: Detected virtualization kvm.
Apr 13 20:11:12.105304 systemd[1]: Detected architecture x86-64.
Apr 13 20:11:12.105318 systemd[1]: Running in initrd.
Apr 13 20:11:12.105358 systemd[1]: No hostname configured, using default hostname.
Apr 13 20:11:12.105383 systemd[1]: Hostname set to .
Apr 13 20:11:12.105399 systemd[1]: Initializing machine ID from VM UUID.
Apr 13 20:11:12.105421 systemd[1]: Queued start job for default target initrd.target.
Apr 13 20:11:12.105435 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 13 20:11:12.105449 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 13 20:11:12.105464 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 13 20:11:12.105479 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 13 20:11:12.105493 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 13 20:11:12.105508 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 13 20:11:12.105529 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 13 20:11:12.105544 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 13 20:11:12.105559 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 13 20:11:12.105573 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 13 20:11:12.105587 systemd[1]: Reached target paths.target - Path Units.
Apr 13 20:11:12.105601 systemd[1]: Reached target slices.target - Slice Units.
Apr 13 20:11:12.105615 systemd[1]: Reached target swap.target - Swaps.
Apr 13 20:11:12.105629 systemd[1]: Reached target timers.target - Timer Units.
Apr 13 20:11:12.105649 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 13 20:11:12.105663 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 13 20:11:12.105683 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 13 20:11:12.105698 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 13 20:11:12.105712 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 13 20:11:12.105727 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 13 20:11:12.105741 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 13 20:11:12.105755 systemd[1]: Reached target sockets.target - Socket Units.
Apr 13 20:11:12.105770 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 13 20:11:12.105790 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 13 20:11:12.105804 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 13 20:11:12.105818 systemd[1]: Starting systemd-fsck-usr.service...
Apr 13 20:11:12.105832 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 13 20:11:12.105847 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 13 20:11:12.105861 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 13 20:11:12.105875 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 13 20:11:12.105889 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 13 20:11:12.105953 systemd-journald[202]: Collecting audit messages is disabled.
Apr 13 20:11:12.105987 systemd[1]: Finished systemd-fsck-usr.service.
Apr 13 20:11:12.106009 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 13 20:11:12.106025 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 13 20:11:12.106038 kernel: Bridge firewalling registered
Apr 13 20:11:12.106053 systemd-journald[202]: Journal started
Apr 13 20:11:12.106085 systemd-journald[202]: Runtime Journal (/run/log/journal/686405e340854a08a87a6ca89a1bfb79) is 4.7M, max 38.0M, 33.2M free.
Apr 13 20:11:12.047410 systemd-modules-load[203]: Inserted module 'overlay'
Apr 13 20:11:12.139762 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 13 20:11:12.082567 systemd-modules-load[203]: Inserted module 'br_netfilter'
Apr 13 20:11:12.143385 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 13 20:11:12.144416 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 13 20:11:12.156696 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 13 20:11:12.170592 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 13 20:11:12.176343 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 13 20:11:12.183566 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 13 20:11:12.191316 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 13 20:11:12.193440 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 13 20:11:12.197220 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 13 20:11:12.204606 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 13 20:11:12.207890 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 13 20:11:12.213103 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 13 20:11:12.226792 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 13 20:11:12.230552 dracut-cmdline[232]: dracut-dracut-053
Apr 13 20:11:12.234239 dracut-cmdline[232]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=c1ba97db2f6278922cfc5bd0ca74b4bb573fca2c3aed19c121a34271e693e156
Apr 13 20:11:12.270903 systemd-resolved[239]: Positive Trust Anchors:
Apr 13 20:11:12.270921 systemd-resolved[239]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 13 20:11:12.270972 systemd-resolved[239]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 13 20:11:12.280059 systemd-resolved[239]: Defaulting to hostname 'linux'.
Apr 13 20:11:12.282723 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 13 20:11:12.283907 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 13 20:11:12.334363 kernel: SCSI subsystem initialized
Apr 13 20:11:12.345342 kernel: Loading iSCSI transport class v2.0-870.
Apr 13 20:11:12.359413 kernel: iscsi: registered transport (tcp)
Apr 13 20:11:12.385882 kernel: iscsi: registered transport (qla4xxx)
Apr 13 20:11:12.385974 kernel: QLogic iSCSI HBA Driver
Apr 13 20:11:12.443795 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 13 20:11:12.448569 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 13 20:11:12.488841 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 13 20:11:12.488921 kernel: device-mapper: uevent: version 1.0.3
Apr 13 20:11:12.491191 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Apr 13 20:11:12.540378 kernel: raid6: sse2x4 gen() 13880 MB/s
Apr 13 20:11:12.558372 kernel: raid6: sse2x2 gen() 9678 MB/s
Apr 13 20:11:12.577024 kernel: raid6: sse2x1 gen() 10397 MB/s
Apr 13 20:11:12.577096 kernel: raid6: using algorithm sse2x4 gen() 13880 MB/s
Apr 13 20:11:12.595989 kernel: raid6: .... xor() 7724 MB/s, rmw enabled
Apr 13 20:11:12.596101 kernel: raid6: using ssse3x2 recovery algorithm
Apr 13 20:11:12.622381 kernel: xor: automatically using best checksumming function avx
Apr 13 20:11:12.815415 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 13 20:11:12.829871 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 13 20:11:12.836585 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 13 20:11:12.862625 systemd-udevd[421]: Using default interface naming scheme 'v255'.
Apr 13 20:11:12.869722 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 13 20:11:12.878532 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 13 20:11:12.900451 dracut-pre-trigger[427]: rd.md=0: removing MD RAID activation
Apr 13 20:11:12.942667 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 13 20:11:12.948576 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 13 20:11:13.079242 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 13 20:11:13.087577 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Apr 13 20:11:13.107175 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 13 20:11:13.115816 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 13 20:11:13.117462 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 13 20:11:13.119825 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 13 20:11:13.130635 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 13 20:11:13.159687 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Apr 13 20:11:13.209382 kernel: virtio_blk virtio1: 2/0/0 default/read/poll queues
Apr 13 20:11:13.213578 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Apr 13 20:11:13.233387 kernel: cryptd: max_cpu_qlen set to 1000
Apr 13 20:11:13.244765 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Apr 13 20:11:13.244843 kernel: GPT:17805311 != 125829119
Apr 13 20:11:13.244862 kernel: GPT:Alternate GPT header not at the end of the disk.
Apr 13 20:11:13.244880 kernel: GPT:17805311 != 125829119
Apr 13 20:11:13.244917 kernel: GPT: Use GNU Parted to correct GPT errors.
Apr 13 20:11:13.244935 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 13 20:11:13.254122 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 13 20:11:13.254314 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 13 20:11:13.256610 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 13 20:11:13.260044 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 13 20:11:13.260275 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 13 20:11:13.262986 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 13 20:11:13.272737 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 13 20:11:13.274844 kernel: ACPI: bus type USB registered
Apr 13 20:11:13.287375 kernel: AVX version of gcm_enc/dec engaged.
Apr 13 20:11:13.287436 kernel: usbcore: registered new interface driver usbfs
Apr 13 20:11:13.287457 kernel: libata version 3.00 loaded.
Apr 13 20:11:13.289342 kernel: usbcore: registered new interface driver hub
Apr 13 20:11:13.291368 kernel: AES CTR mode by8 optimization enabled
Apr 13 20:11:13.302376 kernel: usbcore: registered new device driver usb
Apr 13 20:11:13.329674 kernel: ahci 0000:00:1f.2: version 3.0
Apr 13 20:11:13.329980 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Apr 13 20:11:13.333340 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Apr 13 20:11:13.333583 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Apr 13 20:11:13.336357 kernel: scsi host0: ahci
Apr 13 20:11:13.340357 kernel: scsi host1: ahci
Apr 13 20:11:13.340601 kernel: scsi host2: ahci
Apr 13 20:11:13.340812 kernel: scsi host3: ahci
Apr 13 20:11:13.343020 kernel: scsi host4: ahci
Apr 13 20:11:13.343264 kernel: scsi host5: ahci
Apr 13 20:11:13.343526 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 38
Apr 13 20:11:13.343547 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 38
Apr 13 20:11:13.343564 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 38
Apr 13 20:11:13.343590 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 38
Apr 13 20:11:13.343609 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 38
Apr 13 20:11:13.343626 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 38
Apr 13 20:11:13.373344 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (482)
Apr 13 20:11:13.386635 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Apr 13 20:11:13.436446 kernel: BTRFS: device fsid de1edd48-4571-4695-92f0-7af6e33c4e3d devid 1 transid 31 /dev/vda3 scanned by (udev-worker) (467)
Apr 13 20:11:13.442688 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 13 20:11:13.455526 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Apr 13 20:11:13.467664 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Apr 13 20:11:13.473697 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Apr 13 20:11:13.474540 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Apr 13 20:11:13.482557 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Apr 13 20:11:13.487509 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 13 20:11:13.496207 disk-uuid[564]: Primary Header is updated.
Apr 13 20:11:13.496207 disk-uuid[564]: Secondary Entries is updated.
Apr 13 20:11:13.496207 disk-uuid[564]: Secondary Header is updated.
Apr 13 20:11:13.503395 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 13 20:11:13.513367 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 13 20:11:13.545875 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 13 20:11:13.654756 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Apr 13 20:11:13.654858 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Apr 13 20:11:13.654880 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Apr 13 20:11:13.654899 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Apr 13 20:11:13.656816 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Apr 13 20:11:13.661372 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Apr 13 20:11:13.697476 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller
Apr 13 20:11:13.703983 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1
Apr 13 20:11:13.704419 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
Apr 13 20:11:13.709274 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller
Apr 13 20:11:13.709585 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2
Apr 13 20:11:13.712396 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed
Apr 13 20:11:13.712768 kernel: hub 1-0:1.0: USB hub found
Apr 13 20:11:13.717030 kernel: hub 1-0:1.0: 4 ports detected
Apr 13 20:11:13.717365 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Apr 13 20:11:13.720015 kernel: hub 2-0:1.0: USB hub found
Apr 13 20:11:13.720355 kernel: hub 2-0:1.0: 4 ports detected
Apr 13 20:11:13.957408 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd
Apr 13 20:11:14.099350 kernel: hid: raw HID events driver (C) Jiri Kosina
Apr 13 20:11:14.105264 kernel: usbcore: registered new interface driver usbhid
Apr 13 20:11:14.105307 kernel: usbhid: USB HID core driver
Apr 13 20:11:14.113502 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2
Apr 13 20:11:14.113558 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0
Apr 13 20:11:14.512381 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 13 20:11:14.513382 disk-uuid[565]: The operation has completed successfully.
Apr 13 20:11:14.567173 systemd[1]: disk-uuid.service: Deactivated successfully.
Apr 13 20:11:14.567368 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Apr 13 20:11:14.591658 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Apr 13 20:11:14.598242 sh[585]: Success
Apr 13 20:11:14.614629 kernel: device-mapper: verity: sha256 using implementation "sha256-avx"
Apr 13 20:11:14.684430 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Apr 13 20:11:14.687458 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Apr 13 20:11:14.693097 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Apr 13 20:11:14.718638 kernel: BTRFS info (device dm-0): first mount of filesystem de1edd48-4571-4695-92f0-7af6e33c4e3d
Apr 13 20:11:14.718707 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Apr 13 20:11:14.718727 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Apr 13 20:11:14.721494 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Apr 13 20:11:14.724139 kernel: BTRFS info (device dm-0): using free space tree
Apr 13 20:11:14.734362 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Apr 13 20:11:14.735877 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Apr 13 20:11:14.747691 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Apr 13 20:11:14.751512 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Apr 13 20:11:14.768808 kernel: BTRFS info (device vda6): first mount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af
Apr 13 20:11:14.768876 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 13 20:11:14.770943 kernel: BTRFS info (device vda6): using free space tree
Apr 13 20:11:14.779411 kernel: BTRFS info (device vda6): auto enabling async discard
Apr 13 20:11:14.793712 systemd[1]: mnt-oem.mount: Deactivated successfully.
Apr 13 20:11:14.795534 kernel: BTRFS info (device vda6): last unmount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af
Apr 13 20:11:14.805033 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Apr 13 20:11:14.812630 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Apr 13 20:11:14.909668 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 13 20:11:14.922716 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 13 20:11:14.962078 systemd-networkd[769]: lo: Link UP
Apr 13 20:11:14.963175 systemd-networkd[769]: lo: Gained carrier
Apr 13 20:11:14.966808 systemd-networkd[769]: Enumeration completed
Apr 13 20:11:14.966991 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 13 20:11:14.968188 systemd-networkd[769]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 13 20:11:14.968194 systemd-networkd[769]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 13 20:11:14.971343 systemd[1]: Reached target network.target - Network.
Apr 13 20:11:14.973602 systemd-networkd[769]: eth0: Link UP
Apr 13 20:11:14.973608 systemd-networkd[769]: eth0: Gained carrier
Apr 13 20:11:14.973625 systemd-networkd[769]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 13 20:11:14.985287 ignition[678]: Ignition 2.19.0
Apr 13 20:11:14.985307 ignition[678]: Stage: fetch-offline
Apr 13 20:11:14.988891 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 13 20:11:14.985503 ignition[678]: no configs at "/usr/lib/ignition/base.d"
Apr 13 20:11:14.985529 ignition[678]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Apr 13 20:11:14.985724 ignition[678]: parsed url from cmdline: ""
Apr 13 20:11:14.985730 ignition[678]: no config URL provided
Apr 13 20:11:14.985740 ignition[678]: reading system config file "/usr/lib/ignition/user.ign"
Apr 13 20:11:14.985756 ignition[678]: no config at "/usr/lib/ignition/user.ign"
Apr 13 20:11:14.985765 ignition[678]: failed to fetch config: resource requires networking
Apr 13 20:11:14.986257 ignition[678]: Ignition finished successfully
Apr 13 20:11:15.012708 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Apr 13 20:11:15.034213 ignition[777]: Ignition 2.19.0
Apr 13 20:11:15.035452 ignition[777]: Stage: fetch
Apr 13 20:11:15.035778 ignition[777]: no configs at "/usr/lib/ignition/base.d"
Apr 13 20:11:15.035844 ignition[777]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Apr 13 20:11:15.035997 ignition[777]: parsed url from cmdline: ""
Apr 13 20:11:15.036004 ignition[777]: no config URL provided
Apr 13 20:11:15.036014 ignition[777]: reading system config file "/usr/lib/ignition/user.ign"
Apr 13 20:11:15.036032 ignition[777]: no config at "/usr/lib/ignition/user.ign"
Apr 13 20:11:15.036197 ignition[777]: config drive ("/dev/disk/by-label/config-2") not found. Waiting...
Apr 13 20:11:15.036259 ignition[777]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting...
Apr 13 20:11:15.036536 ignition[777]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1
Apr 13 20:11:15.036915 ignition[777]: GET error: Get "http://169.254.169.254/openstack/latest/user_data": dial tcp 169.254.169.254:80: connect: network is unreachable
Apr 13 20:11:15.041456 systemd-networkd[769]: eth0: DHCPv4 address 10.244.14.202/30, gateway 10.244.14.201 acquired from 10.244.14.201
Apr 13 20:11:15.237553 ignition[777]: GET http://169.254.169.254/openstack/latest/user_data: attempt #2
Apr 13 20:11:15.258623 ignition[777]: GET result: OK
Apr 13 20:11:15.259416 ignition[777]: parsing config with SHA512: 384d3483642870683e319d928323222eecbca82953a934b3a60ace6352fdf40fcdcfeeb1f390f23d06e01d053400f8f23026f5d18f5147e829ca0f6f598c3a16
Apr 13 20:11:15.267889 unknown[777]: fetched base config from "system"
Apr 13 20:11:15.267913 unknown[777]: fetched base config from "system"
Apr 13 20:11:15.268550 ignition[777]: fetch: fetch complete
Apr 13 20:11:15.267925 unknown[777]: fetched user config from "openstack"
Apr 13 20:11:15.268559 ignition[777]: fetch: fetch passed
Apr 13 20:11:15.271057 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Apr 13 20:11:15.268632 ignition[777]: Ignition finished successfully
Apr 13 20:11:15.290567 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 13 20:11:15.308950 ignition[784]: Ignition 2.19.0
Apr 13 20:11:15.308969 ignition[784]: Stage: kargs
Apr 13 20:11:15.309215 ignition[784]: no configs at "/usr/lib/ignition/base.d"
Apr 13 20:11:15.309235 ignition[784]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Apr 13 20:11:15.313858 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 13 20:11:15.312173 ignition[784]: kargs: kargs passed
Apr 13 20:11:15.312263 ignition[784]: Ignition finished successfully
Apr 13 20:11:15.326780 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 13 20:11:15.346695 ignition[791]: Ignition 2.19.0
Apr 13 20:11:15.346716 ignition[791]: Stage: disks
Apr 13 20:11:15.346969 ignition[791]: no configs at "/usr/lib/ignition/base.d"
Apr 13 20:11:15.346991 ignition[791]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Apr 13 20:11:15.349811 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 13 20:11:15.348558 ignition[791]: disks: disks passed
Apr 13 20:11:15.351631 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 13 20:11:15.348632 ignition[791]: Ignition finished successfully
Apr 13 20:11:15.352650 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 13 20:11:15.354116 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 13 20:11:15.355411 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 13 20:11:15.356986 systemd[1]: Reached target basic.target - Basic System.
Apr 13 20:11:15.366637 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 13 20:11:15.388308 systemd-fsck[800]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Apr 13 20:11:15.393749 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 13 20:11:15.401451 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 13 20:11:15.532350 kernel: EXT4-fs (vda9): mounted filesystem e02793bf-3e0d-4c7e-b11a-92c664da7ce3 r/w with ordered data mode. Quota mode: none.
Apr 13 20:11:15.533395 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 13 20:11:15.534790 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 13 20:11:15.543506 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 13 20:11:15.547455 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 13 20:11:15.548618 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Apr 13 20:11:15.550582 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent...
Apr 13 20:11:15.552413 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 13 20:11:15.552463 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 13 20:11:15.560479 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 13 20:11:15.567739 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 13 20:11:15.573277 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (808)
Apr 13 20:11:15.574145 kernel: BTRFS info (device vda6): first mount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af
Apr 13 20:11:15.574178 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 13 20:11:15.577358 kernel: BTRFS info (device vda6): using free space tree
Apr 13 20:11:15.595704 kernel: BTRFS info (device vda6): auto enabling async discard
Apr 13 20:11:15.601165 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 13 20:11:15.684165 initrd-setup-root[836]: cut: /sysroot/etc/passwd: No such file or directory
Apr 13 20:11:15.693347 initrd-setup-root[843]: cut: /sysroot/etc/group: No such file or directory
Apr 13 20:11:15.702136 initrd-setup-root[850]: cut: /sysroot/etc/shadow: No such file or directory
Apr 13 20:11:15.718812 initrd-setup-root[857]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 13 20:11:15.831965 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 13 20:11:15.846510 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 13 20:11:15.850509 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 13 20:11:15.863356 kernel: BTRFS info (device vda6): last unmount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af
Apr 13 20:11:15.863603 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 13 20:11:15.894931 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 13 20:11:15.896875 ignition[924]: INFO : Ignition 2.19.0
Apr 13 20:11:15.896875 ignition[924]: INFO : Stage: mount
Apr 13 20:11:15.896875 ignition[924]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 13 20:11:15.896875 ignition[924]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Apr 13 20:11:15.902438 ignition[924]: INFO : mount: mount passed
Apr 13 20:11:15.903183 ignition[924]: INFO : Ignition finished successfully
Apr 13 20:11:15.903752 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 13 20:11:16.705992 systemd-networkd[769]: eth0: Gained IPv6LL
Apr 13 20:11:18.216245 systemd-networkd[769]: eth0: Ignoring DHCPv6 address 2a02:1348:17d:3b2:24:19ff:fef4:eca/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:17d:3b2:24:19ff:fef4:eca/64 assigned by NDisc.
Apr 13 20:11:18.216283 systemd-networkd[769]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no.
Apr 13 20:11:22.745469 coreos-metadata[810]: Apr 13 20:11:22.745 WARN failed to locate config-drive, using the metadata service API instead
Apr 13 20:11:22.770057 coreos-metadata[810]: Apr 13 20:11:22.769 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Apr 13 20:11:22.787139 coreos-metadata[810]: Apr 13 20:11:22.786 INFO Fetch successful
Apr 13 20:11:22.788093 coreos-metadata[810]: Apr 13 20:11:22.787 INFO wrote hostname srv-pcqx3.gb1.brightbox.com to /sysroot/etc/hostname
Apr 13 20:11:22.790402 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully.
Apr 13 20:11:22.790568 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent.
Apr 13 20:11:22.799480 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 13 20:11:22.821629 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 13 20:11:22.833355 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (941)
Apr 13 20:11:22.839027 kernel: BTRFS info (device vda6): first mount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af
Apr 13 20:11:22.839074 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 13 20:11:22.840587 kernel: BTRFS info (device vda6): using free space tree
Apr 13 20:11:22.847363 kernel: BTRFS info (device vda6): auto enabling async discard
Apr 13 20:11:22.849132 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 13 20:11:22.875772 ignition[958]: INFO : Ignition 2.19.0
Apr 13 20:11:22.875772 ignition[958]: INFO : Stage: files
Apr 13 20:11:22.877592 ignition[958]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 13 20:11:22.877592 ignition[958]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Apr 13 20:11:22.880206 ignition[958]: DEBUG : files: compiled without relabeling support, skipping
Apr 13 20:11:22.880206 ignition[958]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 13 20:11:22.880206 ignition[958]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 13 20:11:22.883594 ignition[958]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 13 20:11:22.883594 ignition[958]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 13 20:11:22.885651 ignition[958]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 13 20:11:22.883772 unknown[958]: wrote ssh authorized keys file for user: core
Apr 13 20:11:22.887782 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 13 20:11:22.887782 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Apr 13 20:11:23.057039 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Apr 13 20:11:23.363891 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 13 20:11:23.363891 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Apr 13 20:11:23.366506 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Apr 13 20:11:23.656015 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Apr 13 20:11:23.990955 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Apr 13 20:11:23.990955 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Apr 13 20:11:23.993845 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Apr 13 20:11:23.993845 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 13 20:11:23.993845 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 13 20:11:23.993845 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 13 20:11:23.993845 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 13 20:11:23.993845 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 13 20:11:23.993845 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 13 20:11:23.993845 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 13 20:11:23.993845 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 13 20:11:24.006460 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 13 20:11:24.006460 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 13 20:11:24.006460 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 13 20:11:24.006460 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-x86-64.raw: attempt #1
Apr 13 20:11:24.406054 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Apr 13 20:11:26.868254 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 13 20:11:26.868254 ignition[958]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Apr 13 20:11:26.875298 ignition[958]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 13 20:11:26.875298 ignition[958]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 13 20:11:26.875298 ignition[958]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Apr 13 20:11:26.875298 ignition[958]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Apr 13 20:11:26.875298 ignition[958]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Apr 13 20:11:26.875298 ignition[958]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 13 20:11:26.875298 ignition[958]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 13 20:11:26.875298 ignition[958]: INFO : files: files passed
Apr 13 20:11:26.875298 ignition[958]: INFO : Ignition finished successfully
Apr 13 20:11:26.875887 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 13 20:11:26.888670 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 13 20:11:26.893609 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 13 20:11:26.917884 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 13 20:11:26.919098 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 13 20:11:26.930470 initrd-setup-root-after-ignition[987]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 13 20:11:26.930470 initrd-setup-root-after-ignition[987]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 13 20:11:26.933957 initrd-setup-root-after-ignition[991]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 13 20:11:26.937016 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 13 20:11:26.939095 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 13 20:11:26.954904 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 13 20:11:27.008120 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 13 20:11:27.008332 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 13 20:11:27.012801 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 13 20:11:27.013720 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 13 20:11:27.016367 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 13 20:11:27.023907 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 13 20:11:27.050881 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 13 20:11:27.062737 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 13 20:11:27.085125 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 13 20:11:27.086159 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 13 20:11:27.087766 systemd[1]: Stopped target timers.target - Timer Units.
Apr 13 20:11:27.089312 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 13 20:11:27.089539 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 13 20:11:27.091836 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 13 20:11:27.092778 systemd[1]: Stopped target basic.target - Basic System.
Apr 13 20:11:27.094465 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 13 20:11:27.095283 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 13 20:11:27.097218 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 13 20:11:27.098044 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 13 20:11:27.098853 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 13 20:11:27.100540 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 13 20:11:27.101896 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 13 20:11:27.103573 systemd[1]: Stopped target swap.target - Swaps.
Apr 13 20:11:27.105260 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 13 20:11:27.105519 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 13 20:11:27.107677 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 13 20:11:27.108715 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 13 20:11:27.110649 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 13 20:11:27.110858 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 13 20:11:27.112312 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 13 20:11:27.112663 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 13 20:11:27.114130 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 13 20:11:27.114310 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 13 20:11:27.115222 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 13 20:11:27.115542 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 13 20:11:27.123801 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 13 20:11:27.126674 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 13 20:11:27.128008 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 13 20:11:27.128300 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 13 20:11:27.135425 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 13 20:11:27.135741 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 13 20:11:27.144692 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 13 20:11:27.144853 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 13 20:11:27.170923 ignition[1012]: INFO : Ignition 2.19.0
Apr 13 20:11:27.170923 ignition[1012]: INFO : Stage: umount
Apr 13 20:11:27.172940 ignition[1012]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 13 20:11:27.172940 ignition[1012]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Apr 13 20:11:27.176072 ignition[1012]: INFO : umount: umount passed
Apr 13 20:11:27.177691 ignition[1012]: INFO : Ignition finished successfully
Apr 13 20:11:27.180928 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 13 20:11:27.182212 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 13 20:11:27.185655 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 13 20:11:27.186294 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 13 20:11:27.186430 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 13 20:11:27.187212 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 13 20:11:27.187286 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 13 20:11:27.189547 systemd[1]: ignition-fetch.service: Deactivated successfully.
Apr 13 20:11:27.189628 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Apr 13 20:11:27.190541 systemd[1]: Stopped target network.target - Network.
Apr 13 20:11:27.191883 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 13 20:11:27.191964 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 13 20:11:27.194766 systemd[1]: Stopped target paths.target - Path Units.
Apr 13 20:11:27.196093 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 13 20:11:27.199414 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 13 20:11:27.200353 systemd[1]: Stopped target slices.target - Slice Units.
Apr 13 20:11:27.200987 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 13 20:11:27.201808 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 13 20:11:27.201896 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 13 20:11:27.209578 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 13 20:11:27.209691 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 13 20:11:27.210493 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 13 20:11:27.210583 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 13 20:11:27.211427 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 13 20:11:27.211508 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 13 20:11:27.212587 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 13 20:11:27.213695 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 13 20:11:27.214955 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 13 20:11:27.215150 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 13 20:11:27.216563 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 13 20:11:27.216729 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 13 20:11:27.223748 systemd-networkd[769]: eth0: DHCPv6 lease lost
Apr 13 20:11:27.228936 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 13 20:11:27.229166 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 13 20:11:27.232616 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 13 20:11:27.232796 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 13 20:11:27.241711 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 13 20:11:27.256980 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 13 20:11:27.260402 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 13 20:11:27.268141 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 13 20:11:27.272222 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 13 20:11:27.272497 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 13 20:11:27.289684 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 13 20:11:27.290921 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 13 20:11:27.294816 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 13 20:11:27.295010 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 13 20:11:27.299314 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 13 20:11:27.299446 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 13 20:11:27.300283 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 13 20:11:27.300367 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 13 20:11:27.301089 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 13 20:11:27.301184 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 13 20:11:27.309000 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 13 20:11:27.309175 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 13 20:11:27.310950 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 13 20:11:27.311075 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 13 20:11:27.320731 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 13 20:11:27.321623 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 13 20:11:27.321741 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 13 20:11:27.322904 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 13 20:11:27.322982 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 13 20:11:27.330601 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 13 20:11:27.330708 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 13 20:11:27.333553 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Apr 13 20:11:27.333633 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 13 20:11:27.337515 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 13 20:11:27.337601 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 13 20:11:27.339161 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 13 20:11:27.339236 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 13 20:11:27.341179 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 13 20:11:27.341258 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 13 20:11:27.342690 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 13 20:11:27.342878 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 13 20:11:27.344700 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 13 20:11:27.351655 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 13 20:11:27.370877 systemd[1]: Switching root.
Apr 13 20:11:27.410891 systemd-journald[202]: Journal stopped
Apr 13 20:11:29.075942 systemd-journald[202]: Received SIGTERM from PID 1 (systemd).
Apr 13 20:11:29.076058 kernel: SELinux: policy capability network_peer_controls=1
Apr 13 20:11:29.076097 kernel: SELinux: policy capability open_perms=1
Apr 13 20:11:29.076116 kernel: SELinux: policy capability extended_socket_class=1
Apr 13 20:11:29.076135 kernel: SELinux: policy capability always_check_network=0
Apr 13 20:11:29.076171 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 13 20:11:29.076192 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 13 20:11:29.076217 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 13 20:11:29.076244 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 13 20:11:29.076265 systemd[1]: Successfully loaded SELinux policy in 57.344ms.
Apr 13 20:11:29.076313 kernel: audit: type=1403 audit(1776111087.667:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 13 20:11:29.078370 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 22.897ms.
Apr 13 20:11:29.078397 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 13 20:11:29.078438 systemd[1]: Detected virtualization kvm.
Apr 13 20:11:29.078461 systemd[1]: Detected architecture x86-64.
Apr 13 20:11:29.078488 systemd[1]: Detected first boot.
Apr 13 20:11:29.078510 systemd[1]: Hostname set to .
Apr 13 20:11:29.078537 systemd[1]: Initializing machine ID from VM UUID.
Apr 13 20:11:29.078558 zram_generator::config[1054]: No configuration found.
Apr 13 20:11:29.078631 systemd[1]: Populated /etc with preset unit settings.
Apr 13 20:11:29.078656 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Apr 13 20:11:29.078693 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Apr 13 20:11:29.078722 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Apr 13 20:11:29.078745 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 13 20:11:29.078765 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 13 20:11:29.078793 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 13 20:11:29.078813 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 13 20:11:29.078841 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 13 20:11:29.078863 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 13 20:11:29.078896 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 13 20:11:29.078924 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 13 20:11:29.078946 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 13 20:11:29.078966 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 13 20:11:29.079015 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 13 20:11:29.079047 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 13 20:11:29.079082 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 13 20:11:29.079105 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 13 20:11:29.079125 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Apr 13 20:11:29.079192 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 13 20:11:29.079217 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Apr 13 20:11:29.079237 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Apr 13 20:11:29.079258 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Apr 13 20:11:29.079278 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 13 20:11:29.079298 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 13 20:11:29.079357 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 13 20:11:29.079382 systemd[1]: Reached target slices.target - Slice Units.
Apr 13 20:11:29.079415 systemd[1]: Reached target swap.target - Swaps.
Apr 13 20:11:29.079458 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 13 20:11:29.079480 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 13 20:11:29.079501 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 13 20:11:29.079521 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 13 20:11:29.079554 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 13 20:11:29.079576 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 13 20:11:29.079595 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 13 20:11:29.079616 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 13 20:11:29.079636 systemd[1]: Mounting media.mount - External Media Directory...
Apr 13 20:11:29.079657 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 13 20:11:29.079677 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 13 20:11:29.079697 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 13 20:11:29.079731 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 13 20:11:29.079753 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 13 20:11:29.079774 systemd[1]: Reached target machines.target - Containers.
Apr 13 20:11:29.079802 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 13 20:11:29.079823 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 13 20:11:29.079844 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 13 20:11:29.079864 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 13 20:11:29.079884 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 13 20:11:29.079904 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 13 20:11:29.079938 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 13 20:11:29.079960 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 13 20:11:29.079980 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 13 20:11:29.080001 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 13 20:11:29.080021 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Apr 13 20:11:29.080041 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Apr 13 20:11:29.080061 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Apr 13 20:11:29.080235 systemd[1]: Stopped systemd-fsck-usr.service.
Apr 13 20:11:29.080274 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 13 20:11:29.080297 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 13 20:11:29.080317 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 13 20:11:29.081411 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 13 20:11:29.081433 kernel: loop: module loaded
Apr 13 20:11:29.081454 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 13 20:11:29.081474 systemd[1]: verity-setup.service: Deactivated successfully.
Apr 13 20:11:29.081504 systemd[1]: Stopped verity-setup.service.
Apr 13 20:11:29.081526 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 13 20:11:29.081562 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 13 20:11:29.081585 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 13 20:11:29.081611 kernel: fuse: init (API version 7.39)
Apr 13 20:11:29.081632 systemd[1]: Mounted media.mount - External Media Directory.
Apr 13 20:11:29.081653 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 13 20:11:29.081686 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 13 20:11:29.081707 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 13 20:11:29.081728 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 13 20:11:29.081748 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 13 20:11:29.081776 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 13 20:11:29.081798 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 13 20:11:29.081819 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 13 20:11:29.081852 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 13 20:11:29.081876 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 13 20:11:29.081952 systemd-journald[1140]: Collecting audit messages is disabled.
Apr 13 20:11:29.081993 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 13 20:11:29.082015 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 13 20:11:29.082051 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 13 20:11:29.082084 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 13 20:11:29.082108 systemd-journald[1140]: Journal started
Apr 13 20:11:29.082142 systemd-journald[1140]: Runtime Journal (/run/log/journal/686405e340854a08a87a6ca89a1bfb79) is 4.7M, max 38.0M, 33.2M free.
Apr 13 20:11:28.536960 systemd[1]: Queued start job for default target multi-user.target.
Apr 13 20:11:28.560212 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Apr 13 20:11:28.561090 systemd[1]: systemd-journald.service: Deactivated successfully.
Apr 13 20:11:29.086410 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 13 20:11:29.090837 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 13 20:11:29.092316 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 13 20:11:29.093759 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 13 20:11:29.094366 kernel: ACPI: bus type drm_connector registered
Apr 13 20:11:29.097128 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 13 20:11:29.098464 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 13 20:11:29.104825 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 13 20:11:29.119763 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 13 20:11:29.128463 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 13 20:11:29.142473 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 13 20:11:29.145447 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 13 20:11:29.145517 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 13 20:11:29.149016 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Apr 13 20:11:29.159569 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 13 20:11:29.162612 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 13 20:11:29.163698 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 13 20:11:29.169599 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 13 20:11:29.177649 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 13 20:11:29.178800 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 13 20:11:29.185635 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 13 20:11:29.186682 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 13 20:11:29.198941 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 13 20:11:29.203487 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 13 20:11:29.210800 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 13 20:11:29.218098 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 13 20:11:29.221404 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 13 20:11:29.222727 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 13 20:11:29.246988 systemd-journald[1140]: Time spent on flushing to /var/log/journal/686405e340854a08a87a6ca89a1bfb79 is 141.843ms for 1144 entries.
Apr 13 20:11:29.246988 systemd-journald[1140]: System Journal (/var/log/journal/686405e340854a08a87a6ca89a1bfb79) is 8.0M, max 584.8M, 576.8M free.
Apr 13 20:11:29.438481 systemd-journald[1140]: Received client request to flush runtime journal.
Apr 13 20:11:29.438550 kernel: loop0: detected capacity change from 0 to 140768
Apr 13 20:11:29.438578 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 13 20:11:29.269951 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 13 20:11:29.271784 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 13 20:11:29.281645 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Apr 13 20:11:29.379315 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 13 20:11:29.421163 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 13 20:11:29.437595 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Apr 13 20:11:29.440108 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 13 20:11:29.441762 systemd-tmpfiles[1188]: ACLs are not supported, ignoring.
Apr 13 20:11:29.441782 systemd-tmpfiles[1188]: ACLs are not supported, ignoring.
Apr 13 20:11:29.444396 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 13 20:11:29.454123 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Apr 13 20:11:29.457035 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 13 20:11:29.479674 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 13 20:11:29.485564 udevadm[1201]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Apr 13 20:11:29.500590 kernel: loop1: detected capacity change from 0 to 142488
Apr 13 20:11:29.559192 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 13 20:11:29.566352 kernel: loop2: detected capacity change from 0 to 8
Apr 13 20:11:29.573549 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 13 20:11:29.593371 kernel: loop3: detected capacity change from 0 to 228704
Apr 13 20:11:29.611578 systemd-tmpfiles[1211]: ACLs are not supported, ignoring.
Apr 13 20:11:29.612147 systemd-tmpfiles[1211]: ACLs are not supported, ignoring.
Apr 13 20:11:29.620169 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 13 20:11:29.669392 kernel: loop4: detected capacity change from 0 to 140768
Apr 13 20:11:29.728373 kernel: loop5: detected capacity change from 0 to 142488
Apr 13 20:11:29.785362 kernel: loop6: detected capacity change from 0 to 8
Apr 13 20:11:29.795360 kernel: loop7: detected capacity change from 0 to 228704
Apr 13 20:11:29.829221 (sd-merge)[1216]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'.
Apr 13 20:11:29.830623 (sd-merge)[1216]: Merged extensions into '/usr'.
Apr 13 20:11:29.841355 systemd[1]: Reloading requested from client PID 1187 ('systemd-sysext') (unit systemd-sysext.service)...
Apr 13 20:11:29.841384 systemd[1]: Reloading...
Apr 13 20:11:29.962399 zram_generator::config[1239]: No configuration found.
Apr 13 20:11:30.260420 ldconfig[1182]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Apr 13 20:11:30.304927 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 13 20:11:30.374485 systemd[1]: Reloading finished in 532 ms.
Apr 13 20:11:30.420533 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Apr 13 20:11:30.421956 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Apr 13 20:11:30.423203 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Apr 13 20:11:30.438630 systemd[1]: Starting ensure-sysext.service...
Apr 13 20:11:30.444633 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 13 20:11:30.453963 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 13 20:11:30.459509 systemd[1]: Reloading requested from client PID 1299 ('systemctl') (unit ensure-sysext.service)...
Apr 13 20:11:30.459536 systemd[1]: Reloading...
Apr 13 20:11:30.500410 systemd-udevd[1301]: Using default interface naming scheme 'v255'.
Apr 13 20:11:30.510944 systemd-tmpfiles[1300]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 13 20:11:30.514238 systemd-tmpfiles[1300]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Apr 13 20:11:30.519471 systemd-tmpfiles[1300]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 13 20:11:30.520503 systemd-tmpfiles[1300]: ACLs are not supported, ignoring.
Apr 13 20:11:30.520636 systemd-tmpfiles[1300]: ACLs are not supported, ignoring.
Apr 13 20:11:30.529881 systemd-tmpfiles[1300]: Detected autofs mount point /boot during canonicalization of boot.
Apr 13 20:11:30.529900 systemd-tmpfiles[1300]: Skipping /boot
Apr 13 20:11:30.550217 systemd-tmpfiles[1300]: Detected autofs mount point /boot during canonicalization of boot.
Apr 13 20:11:30.550238 systemd-tmpfiles[1300]: Skipping /boot
Apr 13 20:11:30.615395 zram_generator::config[1326]: No configuration found.
Apr 13 20:11:30.783661 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (1333)
Apr 13 20:11:30.895066 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 13 20:11:30.940758 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Apr 13 20:11:30.965380 kernel: ACPI: button: Power Button [PWRF]
Apr 13 20:11:31.002474 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Apr 13 20:11:31.004274 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Apr 13 20:11:31.008621 kernel: mousedev: PS/2 mouse device common for all mice
Apr 13 20:11:31.008199 systemd[1]: Reloading finished in 548 ms.
Apr 13 20:11:31.024351 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Apr 13 20:11:31.032167 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Apr 13 20:11:31.052552 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Apr 13 20:11:31.033207 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 13 20:11:31.036901 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 13 20:11:31.103554 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 13 20:11:31.114444 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Apr 13 20:11:31.119281 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Apr 13 20:11:31.126810 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Apr 13 20:11:31.130737 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Apr 13 20:11:31.144439 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 13 20:11:31.155886 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 13 20:11:31.166444 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Apr 13 20:11:31.174061 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 13 20:11:31.174387 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 13 20:11:31.184788 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 13 20:11:31.192606 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 13 20:11:31.203748 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 13 20:11:31.204760 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 13 20:11:31.204932 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 13 20:11:31.213879 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 13 20:11:31.214190 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 13 20:11:31.214495 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 13 20:11:31.214646 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 13 20:11:31.243745 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Apr 13 20:11:31.247163 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Apr 13 20:11:31.281720 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 13 20:11:31.282154 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 13 20:11:31.288798 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 13 20:11:31.291840 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 13 20:11:31.292088 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 13 20:11:31.292248 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 13 20:11:31.296536 systemd[1]: Finished ensure-sysext.service.
Apr 13 20:11:31.299305 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 13 20:11:31.300581 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 13 20:11:31.327566 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Apr 13 20:11:31.338588 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 13 20:11:31.339931 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 13 20:11:31.340226 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 13 20:11:31.344795 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Apr 13 20:11:31.350102 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 13 20:11:31.350438 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 13 20:11:31.355736 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 13 20:11:31.361006 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Apr 13 20:11:31.375913 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 13 20:11:31.376266 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 13 20:11:31.377607 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 13 20:11:31.420281 augenrules[1448]: No rules
Apr 13 20:11:31.422050 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 13 20:11:31.424422 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Apr 13 20:11:31.444245 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Apr 13 20:11:31.458035 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Apr 13 20:11:31.489593 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Apr 13 20:11:31.646976 systemd-networkd[1414]: lo: Link UP
Apr 13 20:11:31.646990 systemd-networkd[1414]: lo: Gained carrier
Apr 13 20:11:31.655703 systemd-networkd[1414]: Enumeration completed
Apr 13 20:11:31.655947 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 13 20:11:31.661102 systemd-networkd[1414]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 13 20:11:31.661116 systemd-networkd[1414]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 13 20:11:31.672537 systemd-networkd[1414]: eth0: Link UP
Apr 13 20:11:31.672554 systemd-networkd[1414]: eth0: Gained carrier
Apr 13 20:11:31.672589 systemd-networkd[1414]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 13 20:11:31.718064 systemd-resolved[1420]: Positive Trust Anchors:
Apr 13 20:11:31.718085 systemd-resolved[1420]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 13 20:11:31.718131 systemd-resolved[1420]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 13 20:11:31.727165 systemd-resolved[1420]: Using system hostname 'srv-pcqx3.gb1.brightbox.com'.
Apr 13 20:11:31.746513 systemd-networkd[1414]: eth0: DHCPv4 address 10.244.14.202/30, gateway 10.244.14.201 acquired from 10.244.14.201
Apr 13 20:11:31.749596 systemd-timesyncd[1433]: Network configuration changed, trying to establish connection.
Apr 13 20:11:31.761849 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Apr 13 20:11:31.765068 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 13 20:11:31.768362 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Apr 13 20:11:31.770175 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 13 20:11:31.774194 systemd[1]: Reached target network.target - Network.
Apr 13 20:11:31.774987 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 13 20:11:31.776404 systemd[1]: Reached target time-set.target - System Time Set.
Apr 13 20:11:31.787781 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Apr 13 20:11:31.791762 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Apr 13 20:11:31.810416 lvm[1470]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 13 20:11:31.848113 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Apr 13 20:11:31.849476 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 13 20:11:31.850299 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 13 20:11:31.851304 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Apr 13 20:11:31.852507 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Apr 13 20:11:31.853786 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Apr 13 20:11:31.854804 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Apr 13 20:11:31.855727 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Apr 13 20:11:31.856641 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Apr 13 20:11:31.856706 systemd[1]: Reached target paths.target - Path Units.
Apr 13 20:11:31.857510 systemd[1]: Reached target timers.target - Timer Units.
Apr 13 20:11:31.859277 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Apr 13 20:11:31.863068 systemd[1]: Starting docker.socket - Docker Socket for the API...
Apr 13 20:11:31.874395 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Apr 13 20:11:31.877653 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Apr 13 20:11:31.879405 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Apr 13 20:11:31.880288 systemd[1]: Reached target sockets.target - Socket Units.
Apr 13 20:11:31.886944 systemd[1]: Reached target basic.target - Basic System.
Apr 13 20:11:31.887858 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Apr 13 20:11:31.887913 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Apr 13 20:11:31.900870 systemd[1]: Starting containerd.service - containerd container runtime...
Apr 13 20:11:31.906653 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Apr 13 20:11:31.913600 lvm[1475]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 13 20:11:31.921662 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Apr 13 20:11:31.927056 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Apr 13 20:11:31.936751 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Apr 13 20:11:31.942983 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Apr 13 20:11:31.948563 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Apr 13 20:11:31.958954 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Apr 13 20:11:31.973866 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Apr 13 20:11:31.979656 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Apr 13 20:11:31.998711 systemd[1]: Starting systemd-logind.service - User Login Management...
Apr 13 20:11:32.000556 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Apr 13 20:11:32.003542 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Apr 13 20:11:32.006227 systemd[1]: Starting update-engine.service - Update Engine...
Apr 13 20:11:32.016480 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Apr 13 20:11:32.042315 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Apr 13 20:11:32.046948 dbus-daemon[1478]: [system] SELinux support is enabled
Apr 13 20:11:32.047203 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Apr 13 20:11:33.266779 systemd-timesyncd[1433]: Contacted time server 185.57.191.230:123 (0.flatcar.pool.ntp.org).
Apr 13 20:11:33.266856 systemd-timesyncd[1433]: Initial clock synchronization to Mon 2026-04-13 20:11:33.266576 UTC.
Apr 13 20:11:33.267694 systemd-resolved[1420]: Clock change detected. Flushing caches.
Apr 13 20:11:33.273144 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Apr 13 20:11:33.273544 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Apr 13 20:11:33.282627 dbus-daemon[1478]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.3' (uid=244 pid=1414 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Apr 13 20:11:33.275657 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Apr 13 20:11:33.275693 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Apr 13 20:11:33.298805 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Apr 13 20:11:33.330803 jq[1479]: false
Apr 13 20:11:33.321558 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Apr 13 20:11:33.331126 jq[1490]: true
Apr 13 20:11:33.321915 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Apr 13 20:11:33.362377 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Apr 13 20:11:33.362755 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Apr 13 20:11:33.370548 update_engine[1488]: I20260413 20:11:33.370343 1488 main.cc:92] Flatcar Update Engine starting
Apr 13 20:11:33.380061 update_engine[1488]: I20260413 20:11:33.379814 1488 update_check_scheduler.cc:74] Next update check in 5m33s
Apr 13 20:11:33.380632 systemd[1]: Started update-engine.service - Update Engine.
Apr 13 20:11:33.385180 (ntainerd)[1505]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Apr 13 20:11:33.391804 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Apr 13 20:11:33.412369 tar[1500]: linux-amd64/LICENSE
Apr 13 20:11:33.413106 tar[1500]: linux-amd64/helm
Apr 13 20:11:33.417682 jq[1501]: true
Apr 13 20:11:33.425333 systemd[1]: motdgen.service: Deactivated successfully.
Apr 13 20:11:33.425698 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Apr 13 20:11:33.452458 extend-filesystems[1480]: Found loop4
Apr 13 20:11:33.452458 extend-filesystems[1480]: Found loop5
Apr 13 20:11:33.452458 extend-filesystems[1480]: Found loop6
Apr 13 20:11:33.452458 extend-filesystems[1480]: Found loop7
Apr 13 20:11:33.452458 extend-filesystems[1480]: Found vda
Apr 13 20:11:33.452458 extend-filesystems[1480]: Found vda1
Apr 13 20:11:33.452458 extend-filesystems[1480]: Found vda2
Apr 13 20:11:33.452458 extend-filesystems[1480]: Found vda3
Apr 13 20:11:33.474617 extend-filesystems[1480]: Found usr
Apr 13 20:11:33.474617 extend-filesystems[1480]: Found vda4
Apr 13 20:11:33.474617 extend-filesystems[1480]: Found vda6
Apr 13 20:11:33.474617 extend-filesystems[1480]: Found vda7
Apr 13 20:11:33.474617 extend-filesystems[1480]: Found vda9
Apr 13 20:11:33.474617 extend-filesystems[1480]: Checking size of /dev/vda9
Apr 13 20:11:33.544408 extend-filesystems[1480]: Resized partition /dev/vda9
Apr 13 20:11:33.547711 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks
Apr 13 20:11:33.547802 extend-filesystems[1526]: resize2fs 1.47.1 (20-May-2024)
Apr 13 20:11:33.589826 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (1333)
Apr 13 20:11:33.620087 locksmithd[1512]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Apr 13 20:11:33.706871 systemd-logind[1487]: Watching system buttons on /dev/input/event2 (Power Button)
Apr 13 20:11:33.712292 dbus-daemon[1478]: [system] Successfully activated service 'org.freedesktop.hostname1'
Apr 13 20:11:33.706928 systemd-logind[1487]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Apr 13 20:11:33.713148 dbus-daemon[1478]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=1496 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Apr 13 20:11:33.707557 systemd-logind[1487]: New seat seat0.
Apr 13 20:11:33.712578 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Apr 13 20:11:33.715207 systemd[1]: Started systemd-logind.service - User Login Management.
Apr 13 20:11:33.726014 systemd[1]: Starting polkit.service - Authorization Manager...
Apr 13 20:11:33.758411 polkitd[1542]: Started polkitd version 121
Apr 13 20:11:33.776652 polkitd[1542]: Loading rules from directory /etc/polkit-1/rules.d
Apr 13 20:11:33.777856 polkitd[1542]: Loading rules from directory /usr/share/polkit-1/rules.d
Apr 13 20:11:33.781212 polkitd[1542]: Finished loading, compiling and executing 2 rules
Apr 13 20:11:33.783194 dbus-daemon[1478]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Apr 13 20:11:33.783441 systemd[1]: Started polkit.service - Authorization Manager.
Apr 13 20:11:33.787849 polkitd[1542]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Apr 13 20:11:33.794884 bash[1541]: Updated "/home/core/.ssh/authorized_keys"
Apr 13 20:11:33.802602 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Apr 13 20:11:33.818000 systemd[1]: Starting sshkeys.service...
Apr 13 20:11:33.844222 systemd-hostnamed[1496]: Hostname set to (static)
Apr 13 20:11:33.868258 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Apr 13 20:11:33.880102 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Apr 13 20:11:33.949799 sshd_keygen[1494]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Apr 13 20:11:33.999825 kernel: EXT4-fs (vda9): resized filesystem to 15121403
Apr 13 20:11:34.017742 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Apr 13 20:11:34.031166 systemd[1]: Starting issuegen.service - Generate /run/issue...
Apr 13 20:11:34.034629 containerd[1505]: time="2026-04-13T20:11:34.034428603Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Apr 13 20:11:34.041935 extend-filesystems[1526]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Apr 13 20:11:34.041935 extend-filesystems[1526]: old_desc_blocks = 1, new_desc_blocks = 8
Apr 13 20:11:34.041935 extend-filesystems[1526]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long.
Apr 13 20:11:34.053124 extend-filesystems[1480]: Resized filesystem in /dev/vda9
Apr 13 20:11:34.043599 systemd[1]: extend-filesystems.service: Deactivated successfully.
Apr 13 20:11:34.043901 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Apr 13 20:11:34.064165 systemd[1]: issuegen.service: Deactivated successfully.
Apr 13 20:11:34.064514 systemd[1]: Finished issuegen.service - Generate /run/issue.
Apr 13 20:11:34.081164 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Apr 13 20:11:34.109944 containerd[1505]: time="2026-04-13T20:11:34.109865407Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Apr 13 20:11:34.112327 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Apr 13 20:11:34.116648 containerd[1505]: time="2026-04-13T20:11:34.115676603Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Apr 13 20:11:34.116648 containerd[1505]: time="2026-04-13T20:11:34.115721828Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Apr 13 20:11:34.116648 containerd[1505]: time="2026-04-13T20:11:34.115748189Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Apr 13 20:11:34.116648 containerd[1505]: time="2026-04-13T20:11:34.116230487Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Apr 13 20:11:34.116648 containerd[1505]: time="2026-04-13T20:11:34.116269160Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Apr 13 20:11:34.116648 containerd[1505]: time="2026-04-13T20:11:34.116381726Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Apr 13 20:11:34.116648 containerd[1505]: time="2026-04-13T20:11:34.116407728Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Apr 13 20:11:34.118418 containerd[1505]: time="2026-04-13T20:11:34.118252027Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 13 20:11:34.118963 containerd[1505]: time="2026-04-13T20:11:34.118934091Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Apr 13 20:11:34.119408 containerd[1505]: time="2026-04-13T20:11:34.119161338Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Apr 13 20:11:34.119408 containerd[1505]: time="2026-04-13T20:11:34.119191487Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Apr 13 20:11:34.121886 containerd[1505]: time="2026-04-13T20:11:34.120981308Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Apr 13 20:11:34.121886 containerd[1505]: time="2026-04-13T20:11:34.121723350Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Apr 13 20:11:34.122164 containerd[1505]: time="2026-04-13T20:11:34.122094758Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 13 20:11:34.122321 containerd[1505]: time="2026-04-13T20:11:34.122294045Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Apr 13 20:11:34.123159 systemd[1]: Started getty@tty1.service - Getty on tty1.
Apr 13 20:11:34.124244 containerd[1505]: time="2026-04-13T20:11:34.123571981Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Apr 13 20:11:34.124244 containerd[1505]: time="2026-04-13T20:11:34.124077111Z" level=info msg="metadata content store policy set" policy=shared
Apr 13 20:11:34.127443 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Apr 13 20:11:34.129429 systemd[1]: Reached target getty.target - Login Prompts.
Apr 13 20:11:34.137376 containerd[1505]: time="2026-04-13T20:11:34.137315016Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Apr 13 20:11:34.138525 containerd[1505]: time="2026-04-13T20:11:34.137660280Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Apr 13 20:11:34.138525 containerd[1505]: time="2026-04-13T20:11:34.137710578Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Apr 13 20:11:34.138525 containerd[1505]: time="2026-04-13T20:11:34.137742139Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Apr 13 20:11:34.138525 containerd[1505]: time="2026-04-13T20:11:34.137777114Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Apr 13 20:11:34.138525 containerd[1505]: time="2026-04-13T20:11:34.138069502Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Apr 13 20:11:34.138525 containerd[1505]: time="2026-04-13T20:11:34.138399068Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Apr 13 20:11:34.139315 containerd[1505]: time="2026-04-13T20:11:34.139283389Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Apr 13 20:11:34.139421 containerd[1505]: time="2026-04-13T20:11:34.139397165Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Apr 13 20:11:34.139621 containerd[1505]: time="2026-04-13T20:11:34.139524797Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Apr 13 20:11:34.139722 containerd[1505]: time="2026-04-13T20:11:34.139698426Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Apr 13 20:11:34.139932 containerd[1505]: time="2026-04-13T20:11:34.139904168Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Apr 13 20:11:34.140107 containerd[1505]: time="2026-04-13T20:11:34.140080210Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Apr 13 20:11:34.140340 containerd[1505]: time="2026-04-13T20:11:34.140314728Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Apr 13 20:11:34.140441 containerd[1505]: time="2026-04-13T20:11:34.140417389Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Apr 13 20:11:34.141294 containerd[1505]: time="2026-04-13T20:11:34.140601223Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Apr 13 20:11:34.141294 containerd[1505]: time="2026-04-13T20:11:34.140637166Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Apr 13 20:11:34.141294 containerd[1505]: time="2026-04-13T20:11:34.140658724Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Apr 13 20:11:34.141294 containerd[1505]: time="2026-04-13T20:11:34.140699864Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Apr 13 20:11:34.141294 containerd[1505]: time="2026-04-13T20:11:34.140724864Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Apr 13 20:11:34.141294 containerd[1505]: time="2026-04-13T20:11:34.140744976Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Apr 13 20:11:34.141294 containerd[1505]: time="2026-04-13T20:11:34.140765678Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Apr 13 20:11:34.141294 containerd[1505]: time="2026-04-13T20:11:34.140785978Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Apr 13 20:11:34.141294 containerd[1505]: time="2026-04-13T20:11:34.140828685Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Apr 13 20:11:34.141294 containerd[1505]: time="2026-04-13T20:11:34.140952800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Apr 13 20:11:34.141294 containerd[1505]: time="2026-04-13T20:11:34.140982618Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Apr 13 20:11:34.141294 containerd[1505]: time="2026-04-13T20:11:34.141005923Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Apr 13 20:11:34.141294 containerd[1505]: time="2026-04-13T20:11:34.141029078Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Apr 13 20:11:34.141294 containerd[1505]: time="2026-04-13T20:11:34.141050358Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Apr 13 20:11:34.141901 containerd[1505]: time="2026-04-13T20:11:34.141072396Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Apr 13 20:11:34.141901 containerd[1505]: time="2026-04-13T20:11:34.141094838Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Apr 13 20:11:34.141901 containerd[1505]: time="2026-04-13T20:11:34.141119608Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Apr 13 20:11:34.141901 containerd[1505]: time="2026-04-13T20:11:34.141165865Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Apr 13 20:11:34.141901 containerd[1505]: time="2026-04-13T20:11:34.141189796Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Apr 13 20:11:34.141901 containerd[1505]: time="2026-04-13T20:11:34.141207335Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Apr 13 20:11:34.142497 containerd[1505]: time="2026-04-13T20:11:34.142193423Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Apr 13 20:11:34.142497 containerd[1505]: time="2026-04-13T20:11:34.142356253Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Apr 13 20:11:34.142497 containerd[1505]: time="2026-04-13T20:11:34.142381743Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Apr 13 20:11:34.142497 containerd[1505]: time="2026-04-13T20:11:34.142401341Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Apr 13 20:11:34.142497 containerd[1505]: time="2026-04-13T20:11:34.142420698Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Apr 13 20:11:34.142497 containerd[1505]: time="2026-04-13T20:11:34.142441039Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Apr 13 20:11:34.143492 containerd[1505]: time="2026-04-13T20:11:34.142464359Z" level=info msg="NRI interface is disabled by configuration."
Apr 13 20:11:34.143492 containerd[1505]: time="2026-04-13T20:11:34.142818700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Apr 13 20:11:34.143580 containerd[1505]: time="2026-04-13T20:11:34.143281339Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Apr 13 20:11:34.143580 containerd[1505]: time="2026-04-13T20:11:34.143367606Z" level=info msg="Connect containerd service"
Apr 13 20:11:34.143580 containerd[1505]: time="2026-04-13T20:11:34.143421941Z" level=info msg="using legacy CRI server"
Apr 13 20:11:34.143580 containerd[1505]: time="2026-04-13T20:11:34.143438861Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Apr 13 20:11:34.144209 containerd[1505]: time="2026-04-13T20:11:34.144178480Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Apr 13 20:11:34.145799 containerd[1505]: time="2026-04-13T20:11:34.145498078Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 13 20:11:34.146510 containerd[1505]: time="2026-04-13T20:11:34.145945572Z" level=info msg="Start subscribing containerd event"
Apr 13 20:11:34.146510 containerd[1505]: time="2026-04-13T20:11:34.146054810Z" level=info msg="Start recovering state"
Apr 13 20:11:34.146510 containerd[1505]: time="2026-04-13T20:11:34.146212867Z" level=info msg="Start event monitor"
Apr 13 20:11:34.146510 containerd[1505]: time="2026-04-13T20:11:34.146250612Z" level=info msg="Start snapshots syncer"
Apr 13 20:11:34.146510 containerd[1505]: time="2026-04-13T20:11:34.146274931Z" level=info msg="Start cni network conf syncer for default"
Apr 13 20:11:34.146510 containerd[1505]: time="2026-04-13T20:11:34.146290240Z" level=info msg="Start streaming server"
Apr 13 20:11:34.147699 containerd[1505]: time="2026-04-13T20:11:34.147670503Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Apr 13 20:11:34.148017 containerd[1505]: time="2026-04-13T20:11:34.147992310Z" level=info msg=serving... address=/run/containerd/containerd.sock
Apr 13 20:11:34.149094 containerd[1505]: time="2026-04-13T20:11:34.149063946Z" level=info msg="containerd successfully booted in 0.133129s"
Apr 13 20:11:34.149207 systemd[1]: Started containerd.service - containerd container runtime.
Apr 13 20:11:34.247173 systemd-networkd[1414]: eth0: Gained IPv6LL
Apr 13 20:11:34.252433 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Apr 13 20:11:34.254553 systemd[1]: Reached target network-online.target - Network is Online.
Apr 13 20:11:34.267613 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 13 20:11:34.294121 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Apr 13 20:11:34.360486 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Apr 13 20:11:34.625431 tar[1500]: linux-amd64/README.md
Apr 13 20:11:34.641984 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Apr 13 20:11:35.339824 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 13 20:11:35.352414 (kubelet)[1602]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 13 20:11:35.759304 systemd-networkd[1414]: eth0: Ignoring DHCPv6 address 2a02:1348:17d:3b2:24:19ff:fef4:eca/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:17d:3b2:24:19ff:fef4:eca/64 assigned by NDisc.
Apr 13 20:11:35.759319 systemd-networkd[1414]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no.
Apr 13 20:11:36.021564 kubelet[1602]: E0413 20:11:36.021159 1602 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 13 20:11:36.024239 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 13 20:11:36.024564 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 13 20:11:36.025324 systemd[1]: kubelet.service: Consumed 1.123s CPU time.
Apr 13 20:11:36.781879 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Apr 13 20:11:36.788144 systemd[1]: Started sshd@0-10.244.14.202:22-4.175.71.9:46874.service - OpenSSH per-connection server daemon (4.175.71.9:46874).
Apr 13 20:11:37.008712 sshd[1612]: Accepted publickey for core from 4.175.71.9 port 46874 ssh2: RSA SHA256:gQAv84QRWsNzQzQGG1TKteeG+h41qyFSg3i58ChsR9o
Apr 13 20:11:37.012176 sshd[1612]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:11:37.032638 systemd-logind[1487]: New session 1 of user core.
Apr 13 20:11:37.035716 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Apr 13 20:11:37.049032 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Apr 13 20:11:37.071727 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Apr 13 20:11:37.082205 systemd[1]: Starting user@500.service - User Manager for UID 500...
Apr 13 20:11:37.098650 (systemd)[1617]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Apr 13 20:11:37.257641 systemd[1617]: Queued start job for default target default.target.
Apr 13 20:11:37.269576 systemd[1617]: Created slice app.slice - User Application Slice.
Apr 13 20:11:37.269628 systemd[1617]: Reached target paths.target - Paths.
Apr 13 20:11:37.269653 systemd[1617]: Reached target timers.target - Timers.
Apr 13 20:11:37.272023 systemd[1617]: Starting dbus.socket - D-Bus User Message Bus Socket...
Apr 13 20:11:37.290558 systemd[1617]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Apr 13 20:11:37.291815 systemd[1617]: Reached target sockets.target - Sockets.
Apr 13 20:11:37.291966 systemd[1617]: Reached target basic.target - Basic System.
Apr 13 20:11:37.292172 systemd[1617]: Reached target default.target - Main User Target.
Apr 13 20:11:37.292291 systemd[1]: Started user@500.service - User Manager for UID 500.
Apr 13 20:11:37.292507 systemd[1617]: Startup finished in 183ms.
Apr 13 20:11:37.304931 systemd[1]: Started session-1.scope - Session 1 of User core.
Apr 13 20:11:37.468687 systemd[1]: Started sshd@1-10.244.14.202:22-4.175.71.9:46884.service - OpenSSH per-connection server daemon (4.175.71.9:46884).
Apr 13 20:11:37.611183 sshd[1628]: Accepted publickey for core from 4.175.71.9 port 46884 ssh2: RSA SHA256:gQAv84QRWsNzQzQGG1TKteeG+h41qyFSg3i58ChsR9o
Apr 13 20:11:37.613235 sshd[1628]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:11:37.624656 systemd-logind[1487]: New session 2 of user core.
Apr 13 20:11:37.637943 systemd[1]: Started session-2.scope - Session 2 of User core.
Apr 13 20:11:37.745051 sshd[1628]: pam_unix(sshd:session): session closed for user core
Apr 13 20:11:37.750689 systemd[1]: sshd@1-10.244.14.202:22-4.175.71.9:46884.service: Deactivated successfully.
Apr 13 20:11:37.753189 systemd[1]: session-2.scope: Deactivated successfully.
Apr 13 20:11:37.755344 systemd-logind[1487]: Session 2 logged out. Waiting for processes to exit.
Apr 13 20:11:37.757316 systemd-logind[1487]: Removed session 2.
Apr 13 20:11:37.778089 systemd[1]: Started sshd@2-10.244.14.202:22-4.175.71.9:46900.service - OpenSSH per-connection server daemon (4.175.71.9:46900).
Apr 13 20:11:37.900186 sshd[1635]: Accepted publickey for core from 4.175.71.9 port 46900 ssh2: RSA SHA256:gQAv84QRWsNzQzQGG1TKteeG+h41qyFSg3i58ChsR9o
Apr 13 20:11:37.904543 sshd[1635]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:11:37.912451 systemd-logind[1487]: New session 3 of user core.
Apr 13 20:11:37.918659 systemd[1]: Started session-3.scope - Session 3 of User core.
Apr 13 20:11:38.024792 sshd[1635]: pam_unix(sshd:session): session closed for user core
Apr 13 20:11:38.030655 systemd[1]: sshd@2-10.244.14.202:22-4.175.71.9:46900.service: Deactivated successfully.
Apr 13 20:11:38.034061 systemd[1]: session-3.scope: Deactivated successfully.
Apr 13 20:11:38.035975 systemd-logind[1487]: Session 3 logged out. Waiting for processes to exit.
Apr 13 20:11:38.037616 systemd-logind[1487]: Removed session 3.
Apr 13 20:11:39.206128 login[1578]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Apr 13 20:11:39.209370 login[1579]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Apr 13 20:11:39.216639 systemd-logind[1487]: New session 4 of user core.
Apr 13 20:11:39.227459 systemd[1]: Started session-4.scope - Session 4 of User core.
Apr 13 20:11:39.234562 systemd-logind[1487]: New session 5 of user core.
Apr 13 20:11:39.240974 systemd[1]: Started session-5.scope - Session 5 of User core.
Apr 13 20:11:40.312167 coreos-metadata[1477]: Apr 13 20:11:40.312 WARN failed to locate config-drive, using the metadata service API instead
Apr 13 20:11:40.334261 coreos-metadata[1477]: Apr 13 20:11:40.334 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1
Apr 13 20:11:40.340307 coreos-metadata[1477]: Apr 13 20:11:40.340 INFO Fetch failed with 404: resource not found
Apr 13 20:11:40.340307 coreos-metadata[1477]: Apr 13 20:11:40.340 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Apr 13 20:11:40.341334 coreos-metadata[1477]: Apr 13 20:11:40.341 INFO Fetch successful
Apr 13 20:11:40.341536 coreos-metadata[1477]: Apr 13 20:11:40.341 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1
Apr 13 20:11:40.355214 coreos-metadata[1477]: Apr 13 20:11:40.355 INFO Fetch successful
Apr 13 20:11:40.355214 coreos-metadata[1477]: Apr 13 20:11:40.355 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1
Apr 13 20:11:40.368257 coreos-metadata[1477]: Apr 13 20:11:40.368 INFO Fetch successful
Apr 13 20:11:40.368257 coreos-metadata[1477]: Apr 13 20:11:40.368 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1
Apr 13 20:11:40.383082 coreos-metadata[1477]: Apr 13 20:11:40.382 INFO Fetch successful
Apr 13 20:11:40.383082 coreos-metadata[1477]: Apr 13 20:11:40.383 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1
Apr 13 20:11:40.399344 coreos-metadata[1477]: Apr 13 20:11:40.399 INFO Fetch successful
Apr 13 20:11:40.447955 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Apr 13 20:11:40.449304 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Apr 13 20:11:41.035970 coreos-metadata[1557]: Apr 13 20:11:41.035 WARN failed to locate config-drive, using the metadata service API instead
Apr 13 20:11:41.062347 coreos-metadata[1557]: Apr 13 20:11:41.062 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1
Apr 13 20:11:41.089941 coreos-metadata[1557]: Apr 13 20:11:41.089 INFO Fetch successful
Apr 13 20:11:41.090577 coreos-metadata[1557]: Apr 13 20:11:41.090 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1
Apr 13 20:11:41.123935 coreos-metadata[1557]: Apr 13 20:11:41.123 INFO Fetch successful
Apr 13 20:11:41.126284 unknown[1557]: wrote ssh authorized keys file for user: core
Apr 13 20:11:41.153162 update-ssh-keys[1676]: Updated "/home/core/.ssh/authorized_keys"
Apr 13 20:11:41.154368 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Apr 13 20:11:41.157570 systemd[1]: Finished sshkeys.service.
Apr 13 20:11:41.161446 systemd[1]: Reached target multi-user.target - Multi-User System.
Apr 13 20:11:41.161862 systemd[1]: Startup finished in 1.424s (kernel) + 15.909s (initrd) + 12.332s (userspace) = 29.667s.
Apr 13 20:11:46.275058 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Apr 13 20:11:46.283867 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 13 20:11:46.465601 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 13 20:11:46.479060 (kubelet)[1688]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 13 20:11:46.560189 kubelet[1688]: E0413 20:11:46.559946 1688 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 13 20:11:46.567102 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 13 20:11:46.567684 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 13 20:11:48.066282 systemd[1]: Started sshd@3-10.244.14.202:22-4.175.71.9:56204.service - OpenSSH per-connection server daemon (4.175.71.9:56204).
Apr 13 20:11:48.189519 sshd[1696]: Accepted publickey for core from 4.175.71.9 port 56204 ssh2: RSA SHA256:gQAv84QRWsNzQzQGG1TKteeG+h41qyFSg3i58ChsR9o
Apr 13 20:11:48.191333 sshd[1696]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:11:48.200148 systemd-logind[1487]: New session 6 of user core.
Apr 13 20:11:48.211727 systemd[1]: Started session-6.scope - Session 6 of User core.
Apr 13 20:11:48.316875 sshd[1696]: pam_unix(sshd:session): session closed for user core
Apr 13 20:11:48.322271 systemd[1]: sshd@3-10.244.14.202:22-4.175.71.9:56204.service: Deactivated successfully.
Apr 13 20:11:48.325584 systemd[1]: session-6.scope: Deactivated successfully.
Apr 13 20:11:48.326720 systemd-logind[1487]: Session 6 logged out. Waiting for processes to exit.
Apr 13 20:11:48.328594 systemd-logind[1487]: Removed session 6.
Apr 13 20:11:48.348857 systemd[1]: Started sshd@4-10.244.14.202:22-4.175.71.9:56208.service - OpenSSH per-connection server daemon (4.175.71.9:56208).
Apr 13 20:11:48.475587 sshd[1703]: Accepted publickey for core from 4.175.71.9 port 56208 ssh2: RSA SHA256:gQAv84QRWsNzQzQGG1TKteeG+h41qyFSg3i58ChsR9o
Apr 13 20:11:48.477937 sshd[1703]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:11:48.484845 systemd-logind[1487]: New session 7 of user core.
Apr 13 20:11:48.496708 systemd[1]: Started session-7.scope - Session 7 of User core.
Apr 13 20:11:48.596609 sshd[1703]: pam_unix(sshd:session): session closed for user core
Apr 13 20:11:48.600966 systemd[1]: sshd@4-10.244.14.202:22-4.175.71.9:56208.service: Deactivated successfully.
Apr 13 20:11:48.603217 systemd[1]: session-7.scope: Deactivated successfully.
Apr 13 20:11:48.605111 systemd-logind[1487]: Session 7 logged out. Waiting for processes to exit.
Apr 13 20:11:48.606829 systemd-logind[1487]: Removed session 7.
Apr 13 20:11:48.624600 systemd[1]: Started sshd@5-10.244.14.202:22-4.175.71.9:56224.service - OpenSSH per-connection server daemon (4.175.71.9:56224).
Apr 13 20:11:48.903736 sshd[1710]: Accepted publickey for core from 4.175.71.9 port 56224 ssh2: RSA SHA256:gQAv84QRWsNzQzQGG1TKteeG+h41qyFSg3i58ChsR9o
Apr 13 20:11:48.904713 sshd[1710]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:11:48.916430 systemd-logind[1487]: New session 8 of user core.
Apr 13 20:11:48.928866 systemd[1]: Started session-8.scope - Session 8 of User core.
Apr 13 20:11:49.042816 sshd[1710]: pam_unix(sshd:session): session closed for user core
Apr 13 20:11:49.048048 systemd[1]: sshd@5-10.244.14.202:22-4.175.71.9:56224.service: Deactivated successfully.
Apr 13 20:11:49.050523 systemd[1]: session-8.scope: Deactivated successfully.
Apr 13 20:11:49.051443 systemd-logind[1487]: Session 8 logged out. Waiting for processes to exit.
Apr 13 20:11:49.053095 systemd-logind[1487]: Removed session 8.
Apr 13 20:11:49.073985 systemd[1]: Started sshd@6-10.244.14.202:22-4.175.71.9:56236.service - OpenSSH per-connection server daemon (4.175.71.9:56236).
Apr 13 20:11:49.208629 sshd[1717]: Accepted publickey for core from 4.175.71.9 port 56236 ssh2: RSA SHA256:gQAv84QRWsNzQzQGG1TKteeG+h41qyFSg3i58ChsR9o
Apr 13 20:11:49.210060 sshd[1717]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:11:49.217240 systemd-logind[1487]: New session 9 of user core.
Apr 13 20:11:49.222758 systemd[1]: Started session-9.scope - Session 9 of User core.
Apr 13 20:11:49.326957 sudo[1720]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Apr 13 20:11:49.327507 sudo[1720]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 13 20:11:49.345639 sudo[1720]: pam_unix(sudo:session): session closed for user root
Apr 13 20:11:49.362516 sshd[1717]: pam_unix(sshd:session): session closed for user core
Apr 13 20:11:49.366991 systemd-logind[1487]: Session 9 logged out. Waiting for processes to exit.
Apr 13 20:11:49.367899 systemd[1]: sshd@6-10.244.14.202:22-4.175.71.9:56236.service: Deactivated successfully.
Apr 13 20:11:49.370386 systemd[1]: session-9.scope: Deactivated successfully.
Apr 13 20:11:49.372844 systemd-logind[1487]: Removed session 9.
Apr 13 20:11:49.391814 systemd[1]: Started sshd@7-10.244.14.202:22-4.175.71.9:56242.service - OpenSSH per-connection server daemon (4.175.71.9:56242).
Apr 13 20:11:49.559656 sshd[1725]: Accepted publickey for core from 4.175.71.9 port 56242 ssh2: RSA SHA256:gQAv84QRWsNzQzQGG1TKteeG+h41qyFSg3i58ChsR9o
Apr 13 20:11:49.561299 sshd[1725]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:11:49.569773 systemd-logind[1487]: New session 10 of user core.
Apr 13 20:11:49.580925 systemd[1]: Started session-10.scope - Session 10 of User core.
Apr 13 20:11:49.676334 sudo[1729]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Apr 13 20:11:49.676839 sudo[1729]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 13 20:11:49.683014 sudo[1729]: pam_unix(sudo:session): session closed for user root
Apr 13 20:11:49.691303 sudo[1728]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Apr 13 20:11:49.691809 sudo[1728]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 13 20:11:49.718929 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Apr 13 20:11:49.721906 auditctl[1732]: No rules
Apr 13 20:11:49.722692 systemd[1]: audit-rules.service: Deactivated successfully.
Apr 13 20:11:49.723029 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Apr 13 20:11:49.726850 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 13 20:11:49.780220 augenrules[1750]: No rules
Apr 13 20:11:49.781271 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 13 20:11:49.783390 sudo[1728]: pam_unix(sudo:session): session closed for user root
Apr 13 20:11:49.801754 sshd[1725]: pam_unix(sshd:session): session closed for user core
Apr 13 20:11:49.806082 systemd-logind[1487]: Session 10 logged out. Waiting for processes to exit.
Apr 13 20:11:49.806722 systemd[1]: sshd@7-10.244.14.202:22-4.175.71.9:56242.service: Deactivated successfully.
Apr 13 20:11:49.809168 systemd[1]: session-10.scope: Deactivated successfully.
Apr 13 20:11:49.812544 systemd-logind[1487]: Removed session 10.
Apr 13 20:11:49.841323 systemd[1]: Started sshd@8-10.244.14.202:22-4.175.71.9:56248.service - OpenSSH per-connection server daemon (4.175.71.9:56248).
Apr 13 20:11:49.962584 sshd[1758]: Accepted publickey for core from 4.175.71.9 port 56248 ssh2: RSA SHA256:gQAv84QRWsNzQzQGG1TKteeG+h41qyFSg3i58ChsR9o
Apr 13 20:11:49.964792 sshd[1758]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:11:49.972315 systemd-logind[1487]: New session 11 of user core.
Apr 13 20:11:49.982447 systemd[1]: Started session-11.scope - Session 11 of User core.
Apr 13 20:11:50.073909 sudo[1761]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Apr 13 20:11:50.074371 sudo[1761]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 13 20:11:50.572050 systemd[1]: Starting docker.service - Docker Application Container Engine...
Apr 13 20:11:50.572289 (dockerd)[1777]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Apr 13 20:11:51.048349 dockerd[1777]: time="2026-04-13T20:11:51.047953498Z" level=info msg="Starting up"
Apr 13 20:11:51.194737 systemd[1]: var-lib-docker-metacopy\x2dcheck198274995-merged.mount: Deactivated successfully.
Apr 13 20:11:51.217609 dockerd[1777]: time="2026-04-13T20:11:51.217531976Z" level=info msg="Loading containers: start."
Apr 13 20:11:51.361201 kernel: Initializing XFRM netlink socket
Apr 13 20:11:51.476939 systemd-networkd[1414]: docker0: Link UP
Apr 13 20:11:51.499568 dockerd[1777]: time="2026-04-13T20:11:51.498877386Z" level=info msg="Loading containers: done."
Apr 13 20:11:51.528795 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck290506726-merged.mount: Deactivated successfully.
Apr 13 20:11:51.543287 dockerd[1777]: time="2026-04-13T20:11:51.543204671Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Apr 13 20:11:51.543522 dockerd[1777]: time="2026-04-13T20:11:51.543393105Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Apr 13 20:11:51.543639 dockerd[1777]: time="2026-04-13T20:11:51.543611329Z" level=info msg="Daemon has completed initialization"
Apr 13 20:11:51.583414 dockerd[1777]: time="2026-04-13T20:11:51.582797032Z" level=info msg="API listen on /run/docker.sock"
Apr 13 20:11:51.583111 systemd[1]: Started docker.service - Docker Application Container Engine.
Apr 13 20:11:52.548301 containerd[1505]: time="2026-04-13T20:11:52.548205248Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.10\""
Apr 13 20:11:53.530666 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount695408422.mount: Deactivated successfully.
Apr 13 20:11:55.846397 containerd[1505]: time="2026-04-13T20:11:55.846147651Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:11:55.849787 containerd[1505]: time="2026-04-13T20:11:55.849713780Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.10: active requests=0, bytes read=29989427"
Apr 13 20:11:55.855544 containerd[1505]: time="2026-04-13T20:11:55.854873869Z" level=info msg="ImageCreate event name:\"sha256:e1586f2f8635ddb8eb665e8155e4aadb66d9ca499906c11db63a79ae66456b74\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:11:55.858699 containerd[1505]: time="2026-04-13T20:11:55.858636709Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:bbff81e41af4bfca88a1d05a066a48e12e2689c534d073a8c688e3ad6c8701e3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:11:55.861430 containerd[1505]: time="2026-04-13T20:11:55.860409357Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.10\" with image id \"sha256:e1586f2f8635ddb8eb665e8155e4aadb66d9ca499906c11db63a79ae66456b74\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:bbff81e41af4bfca88a1d05a066a48e12e2689c534d073a8c688e3ad6c8701e3\", size \"29986018\" in 3.312095582s"
Apr 13 20:11:55.861430 containerd[1505]: time="2026-04-13T20:11:55.860534559Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.10\" returns image reference \"sha256:e1586f2f8635ddb8eb665e8155e4aadb66d9ca499906c11db63a79ae66456b74\""
Apr 13 20:11:55.863966 containerd[1505]: time="2026-04-13T20:11:55.863922676Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.10\""
Apr 13 20:11:56.819841 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Apr 13 20:11:56.830835 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 13 20:11:57.039705 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 13 20:11:57.048012 (kubelet)[1984]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 13 20:11:57.131209 kubelet[1984]: E0413 20:11:57.130743 1984 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 13 20:11:57.134640 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 13 20:11:57.134958 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 13 20:11:58.518528 containerd[1505]: time="2026-04-13T20:11:58.517108845Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:11:58.518528 containerd[1505]: time="2026-04-13T20:11:58.518508125Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.10: active requests=0, bytes read=26021917"
Apr 13 20:11:58.520080 containerd[1505]: time="2026-04-13T20:11:58.520034598Z" level=info msg="ImageCreate event name:\"sha256:26db35ccbf4330e5ada4a2786276aac158e92aced08cecce6cb614146e224230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:11:58.525909 containerd[1505]: time="2026-04-13T20:11:58.525846889Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:b0880d6ee19f2b9148d3d37008c5ee9fc73976e8edad4d0709f11d32ab3ee709\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:11:58.527809 containerd[1505]: time="2026-04-13T20:11:58.527386209Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.10\" with image id \"sha256:26db35ccbf4330e5ada4a2786276aac158e92aced08cecce6cb614146e224230\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:b0880d6ee19f2b9148d3d37008c5ee9fc73976e8edad4d0709f11d32ab3ee709\", size \"27552094\" in 2.663410825s"
Apr 13 20:11:58.527809 containerd[1505]: time="2026-04-13T20:11:58.527486005Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.10\" returns image reference \"sha256:26db35ccbf4330e5ada4a2786276aac158e92aced08cecce6cb614146e224230\""
Apr 13 20:11:58.530063 containerd[1505]: time="2026-04-13T20:11:58.529975959Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.10\""
Apr 13 20:12:00.272972 containerd[1505]: time="2026-04-13T20:12:00.271449667Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:12:00.272972 containerd[1505]: time="2026-04-13T20:12:00.272916027Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.10: active requests=0, bytes read=20162761"
Apr 13 20:12:00.274214 containerd[1505]: time="2026-04-13T20:12:00.274152215Z" level=info msg="ImageCreate event name:\"sha256:7f5d3f3b598c23877c138d7739627d8f0160b0a91d321108e9b5affad54f85f7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:12:00.279095 containerd[1505]: time="2026-04-13T20:12:00.278198843Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:dc1a1aec3bb0ed126b1adff795935124f719969356b24a159fc1a2a0883b89bc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:12:00.283499 containerd[1505]: time="2026-04-13T20:12:00.282347572Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.10\" with image id \"sha256:7f5d3f3b598c23877c138d7739627d8f0160b0a91d321108e9b5affad54f85f7\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:dc1a1aec3bb0ed126b1adff795935124f719969356b24a159fc1a2a0883b89bc\", size \"21692956\" in 1.752320628s"
Apr 13 20:12:00.283499 containerd[1505]: time="2026-04-13T20:12:00.282403753Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.10\" returns image reference \"sha256:7f5d3f3b598c23877c138d7739627d8f0160b0a91d321108e9b5affad54f85f7\""
Apr 13 20:12:00.285411 containerd[1505]: time="2026-04-13T20:12:00.285336986Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.10\""
Apr 13 20:12:01.808604 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount829137488.mount: Deactivated successfully.
Apr 13 20:12:02.582197 containerd[1505]: time="2026-04-13T20:12:02.582006233Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:12:02.584715 containerd[1505]: time="2026-04-13T20:12:02.584358921Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.10: active requests=0, bytes read=31828771"
Apr 13 20:12:02.585717 containerd[1505]: time="2026-04-13T20:12:02.585613266Z" level=info msg="ImageCreate event name:\"sha256:bed75257625288e2a7e106a7fe6bf8373eaa2bc2b14805d32033c7655e882f76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:12:02.589134 containerd[1505]: time="2026-04-13T20:12:02.588776572Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e8151e38ef22f032dba686cc1bba5a3e525dedbe2d549fa44e653fe79426e261\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:12:02.589896 containerd[1505]: time="2026-04-13T20:12:02.589851323Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.10\" with image id \"sha256:bed75257625288e2a7e106a7fe6bf8373eaa2bc2b14805d32033c7655e882f76\", repo tag \"registry.k8s.io/kube-proxy:v1.33.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:e8151e38ef22f032dba686cc1bba5a3e525dedbe2d549fa44e653fe79426e261\", size \"31827782\" in 2.304301325s"
Apr 13 20:12:02.589980 containerd[1505]: time="2026-04-13T20:12:02.589933570Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.10\" returns image reference \"sha256:bed75257625288e2a7e106a7fe6bf8373eaa2bc2b14805d32033c7655e882f76\""
Apr 13 20:12:02.592249 containerd[1505]: time="2026-04-13T20:12:02.592193098Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Apr 13 20:12:03.192270 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3707487467.mount: Deactivated successfully.
Apr 13 20:12:04.660697 containerd[1505]: time="2026-04-13T20:12:04.660251996Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:12:04.661909 containerd[1505]: time="2026-04-13T20:12:04.661853437Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942246"
Apr 13 20:12:04.663936 containerd[1505]: time="2026-04-13T20:12:04.662566241Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:12:04.667198 containerd[1505]: time="2026-04-13T20:12:04.666634982Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:12:04.668766 containerd[1505]: time="2026-04-13T20:12:04.668434220Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 2.076074573s"
Apr 13 20:12:04.668766 containerd[1505]: time="2026-04-13T20:12:04.668506347Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\""
Apr 13 20:12:04.669939 containerd[1505]: time="2026-04-13T20:12:04.669669058Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Apr 13 20:12:05.672389 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3036089664.mount: Deactivated successfully.
Apr 13 20:12:05.679000 containerd[1505]: time="2026-04-13T20:12:05.678917704Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:12:05.680496 containerd[1505]: time="2026-04-13T20:12:05.680274858Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146"
Apr 13 20:12:05.681626 containerd[1505]: time="2026-04-13T20:12:05.681551163Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:12:05.684646 containerd[1505]: time="2026-04-13T20:12:05.684565659Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:12:05.686727 containerd[1505]: time="2026-04-13T20:12:05.685865181Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.016156039s"
Apr 13 20:12:05.686727 containerd[1505]: time="2026-04-13T20:12:05.685910594Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Apr 13 20:12:05.686727 containerd[1505]: time="2026-04-13T20:12:05.686682744Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\""
Apr 13 20:12:05.789904 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Apr 13 20:12:06.304432 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount962493554.mount: Deactivated successfully.
Apr 13 20:12:07.268591 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Apr 13 20:12:07.281856 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 13 20:12:07.517954 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 13 20:12:07.560143 (kubelet)[2119]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 13 20:12:07.654836 kubelet[2119]: E0413 20:12:07.654719 2119 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 13 20:12:07.657691 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 13 20:12:07.658153 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 13 20:12:10.027277 containerd[1505]: time="2026-04-13T20:12:10.027112405Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:12:10.034069 containerd[1505]: time="2026-04-13T20:12:10.033776800Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=23718848"
Apr 13 20:12:10.036135 containerd[1505]: time="2026-04-13T20:12:10.035521098Z" level=info msg="ImageCreate event name:\"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:12:10.040336 containerd[1505]: time="2026-04-13T20:12:10.040290912Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:12:10.042144 containerd[1505]: time="2026-04-13T20:12:10.042086981Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"23716032\" in 4.355281593s"
Apr 13 20:12:10.042285 containerd[1505]: time="2026-04-13T20:12:10.042149272Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\""
Apr 13 20:12:14.879300 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 13 20:12:14.892747 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 13 20:12:14.928575 systemd[1]: Reloading requested from client PID 2173 ('systemctl') (unit session-11.scope)...
Apr 13 20:12:14.928620 systemd[1]: Reloading...
Apr 13 20:12:15.144992 zram_generator::config[2213]: No configuration found.
Apr 13 20:12:15.289577 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 13 20:12:15.400840 systemd[1]: Reloading finished in 471 ms.
Apr 13 20:12:15.478976 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 13 20:12:15.484657 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 13 20:12:15.488461 systemd[1]: kubelet.service: Deactivated successfully.
Apr 13 20:12:15.488957 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 13 20:12:15.495923 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 13 20:12:15.672612 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 13 20:12:15.679532 (kubelet)[2281]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 13 20:12:15.748797 kubelet[2281]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 13 20:12:15.750698 kubelet[2281]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Apr 13 20:12:15.750698 kubelet[2281]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 13 20:12:15.750698 kubelet[2281]: I0413 20:12:15.750587 2281 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Apr 13 20:12:16.735005 kubelet[2281]: I0413 20:12:16.734328 2281 server.go:530] "Kubelet version" kubeletVersion="v1.33.8"
Apr 13 20:12:16.735005 kubelet[2281]: I0413 20:12:16.734566 2281 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 13 20:12:16.735005 kubelet[2281]: I0413 20:12:16.735006 2281 server.go:956] "Client rotation is on, will bootstrap in background"
Apr 13 20:12:16.775278 kubelet[2281]: I0413 20:12:16.774123 2281 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 13 20:12:16.775278 kubelet[2281]: E0413 20:12:16.774968 2281 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.244.14.202:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.244.14.202:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 13 20:12:16.787015 kubelet[2281]: E0413 20:12:16.786950 2281 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Apr 13 20:12:16.787015 kubelet[2281]: I0413 20:12:16.787000 2281 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Apr 13 20:12:16.798098 kubelet[2281]: I0413 20:12:16.798035 2281 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Apr 13 20:12:16.801229 kubelet[2281]: I0413 20:12:16.801162 2281 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 13 20:12:16.805221 kubelet[2281]: I0413 20:12:16.801206 2281 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-pcqx3.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Apr 13 20:12:16.805546 kubelet[2281]: I0413 20:12:16.805228 2281 topology_manager.go:138] "Creating topology manager with none policy"
Apr 13 20:12:16.805546 kubelet[2281]: I0413 20:12:16.805249 2281 container_manager_linux.go:303] "Creating device plugin manager"
Apr 13 20:12:16.805546 kubelet[2281]: I0413 20:12:16.805501 2281 state_mem.go:36] "Initialized new in-memory state store"
Apr 13 20:12:16.810967 kubelet[2281]: I0413 20:12:16.810918 2281 kubelet.go:480] "Attempting to sync node with API server"
Apr 13 20:12:16.811064 kubelet[2281]: I0413 20:12:16.810974 2281 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 13 20:12:16.811064 kubelet[2281]: I0413 20:12:16.811039 2281 kubelet.go:386] "Adding apiserver pod source"
Apr 13 20:12:16.813258 kubelet[2281]: I0413 20:12:16.813211 2281 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 13 20:12:16.823231 kubelet[2281]: E0413 20:12:16.822762 2281 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.244.14.202:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-pcqx3.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.244.14.202:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 13 20:12:16.823525 kubelet[2281]: I0413 20:12:16.823498 2281 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Apr 13 20:12:16.824483 kubelet[2281]: I0413 20:12:16.824337 2281 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Apr 13 20:12:16.825505 kubelet[2281]: W0413 20:12:16.825214 2281 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Apr 13 20:12:16.830361 kubelet[2281]: E0413 20:12:16.830314 2281 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.244.14.202:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.244.14.202:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 13 20:12:16.838681 kubelet[2281]: I0413 20:12:16.838655 2281 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 13 20:12:16.838866 kubelet[2281]: I0413 20:12:16.838835 2281 server.go:1289] "Started kubelet" Apr 13 20:12:16.842461 kubelet[2281]: I0413 20:12:16.842246 2281 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 13 20:12:16.851536 kubelet[2281]: E0413 20:12:16.845619 2281 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.244.14.202:6443/api/v1/namespaces/default/events\": dial tcp 10.244.14.202:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-pcqx3.gb1.brightbox.com.18a603ab30e40a1c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-pcqx3.gb1.brightbox.com,UID:srv-pcqx3.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-pcqx3.gb1.brightbox.com,},FirstTimestamp:2026-04-13 20:12:16.838781468 +0000 UTC m=+1.151345857,LastTimestamp:2026-04-13 20:12:16.838781468 +0000 UTC m=+1.151345857,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-pcqx3.gb1.brightbox.com,}" Apr 13 20:12:16.851536 kubelet[2281]: I0413 20:12:16.849872 2281 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 13 20:12:16.851536 kubelet[2281]: I0413 20:12:16.851109 2281 server.go:317] "Adding debug handlers to kubelet server" Apr 13 
20:12:16.856304 kubelet[2281]: I0413 20:12:16.856216 2281 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 13 20:12:16.858507 kubelet[2281]: E0413 20:12:16.857911 2281 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-pcqx3.gb1.brightbox.com\" not found" Apr 13 20:12:16.858507 kubelet[2281]: I0413 20:12:16.857280 2281 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 13 20:12:16.858507 kubelet[2281]: I0413 20:12:16.858417 2281 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Apr 13 20:12:16.858507 kubelet[2281]: I0413 20:12:16.857006 2281 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 13 20:12:16.861122 kubelet[2281]: E0413 20:12:16.860622 2281 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.14.202:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-pcqx3.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.14.202:6443: connect: connection refused" interval="200ms" Apr 13 20:12:16.861122 kubelet[2281]: I0413 20:12:16.860735 2281 reconciler.go:26] "Reconciler: start to sync state" Apr 13 20:12:16.861663 kubelet[2281]: E0413 20:12:16.861560 2281 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.244.14.202:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.244.14.202:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 13 20:12:16.863591 kubelet[2281]: I0413 20:12:16.863565 2281 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 13 20:12:16.868589 kubelet[2281]: I0413 20:12:16.868545 2281 factory.go:223] Registration of the systemd container factory 
successfully Apr 13 20:12:16.869027 kubelet[2281]: I0413 20:12:16.868975 2281 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 13 20:12:16.874490 kubelet[2281]: I0413 20:12:16.872811 2281 factory.go:223] Registration of the containerd container factory successfully Apr 13 20:12:16.878699 kubelet[2281]: E0413 20:12:16.878665 2281 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 13 20:12:16.914935 kubelet[2281]: I0413 20:12:16.914902 2281 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 13 20:12:16.916135 kubelet[2281]: I0413 20:12:16.916105 2281 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 13 20:12:16.916322 kubelet[2281]: I0413 20:12:16.916301 2281 state_mem.go:36] "Initialized new in-memory state store" Apr 13 20:12:16.917024 kubelet[2281]: I0413 20:12:16.916030 2281 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Apr 13 20:12:16.918767 kubelet[2281]: I0413 20:12:16.918742 2281 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Apr 13 20:12:16.918924 kubelet[2281]: I0413 20:12:16.918904 2281 status_manager.go:230] "Starting to sync pod status with apiserver" Apr 13 20:12:16.919047 kubelet[2281]: I0413 20:12:16.919027 2281 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Apr 13 20:12:16.919164 kubelet[2281]: I0413 20:12:16.919146 2281 kubelet.go:2436] "Starting kubelet main sync loop" Apr 13 20:12:16.919331 kubelet[2281]: E0413 20:12:16.919303 2281 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 13 20:12:16.925803 kubelet[2281]: E0413 20:12:16.925770 2281 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.244.14.202:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.244.14.202:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 13 20:12:16.926222 kubelet[2281]: I0413 20:12:16.926199 2281 policy_none.go:49] "None policy: Start" Apr 13 20:12:16.926357 kubelet[2281]: I0413 20:12:16.926336 2281 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 13 20:12:16.926506 kubelet[2281]: I0413 20:12:16.926466 2281 state_mem.go:35] "Initializing new in-memory state store" Apr 13 20:12:16.935868 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Apr 13 20:12:16.951422 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Apr 13 20:12:16.956451 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Apr 13 20:12:16.958864 kubelet[2281]: E0413 20:12:16.958779 2281 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-pcqx3.gb1.brightbox.com\" not found" Apr 13 20:12:16.964285 kubelet[2281]: E0413 20:12:16.964253 2281 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 13 20:12:16.965155 kubelet[2281]: I0413 20:12:16.964599 2281 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 13 20:12:16.965155 kubelet[2281]: I0413 20:12:16.964631 2281 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 13 20:12:16.965155 kubelet[2281]: I0413 20:12:16.965058 2281 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 13 20:12:16.967309 kubelet[2281]: E0413 20:12:16.967284 2281 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 13 20:12:16.967597 kubelet[2281]: E0413 20:12:16.967559 2281 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"srv-pcqx3.gb1.brightbox.com\" not found" Apr 13 20:12:17.039955 systemd[1]: Created slice kubepods-burstable-pod65e23e433bf3ab6bbc3c3327e093e0a7.slice - libcontainer container kubepods-burstable-pod65e23e433bf3ab6bbc3c3327e093e0a7.slice. Apr 13 20:12:17.053602 kubelet[2281]: E0413 20:12:17.052642 2281 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-pcqx3.gb1.brightbox.com\" not found" node="srv-pcqx3.gb1.brightbox.com" Apr 13 20:12:17.062083 systemd[1]: Created slice kubepods-burstable-pod4a7e5fe162a9fe0e620b53877ef0203a.slice - libcontainer container kubepods-burstable-pod4a7e5fe162a9fe0e620b53877ef0203a.slice. 
Apr 13 20:12:17.065103 kubelet[2281]: I0413 20:12:17.063573 2281 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4a7e5fe162a9fe0e620b53877ef0203a-flexvolume-dir\") pod \"kube-controller-manager-srv-pcqx3.gb1.brightbox.com\" (UID: \"4a7e5fe162a9fe0e620b53877ef0203a\") " pod="kube-system/kube-controller-manager-srv-pcqx3.gb1.brightbox.com" Apr 13 20:12:17.065103 kubelet[2281]: I0413 20:12:17.063624 2281 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4a7e5fe162a9fe0e620b53877ef0203a-k8s-certs\") pod \"kube-controller-manager-srv-pcqx3.gb1.brightbox.com\" (UID: \"4a7e5fe162a9fe0e620b53877ef0203a\") " pod="kube-system/kube-controller-manager-srv-pcqx3.gb1.brightbox.com" Apr 13 20:12:17.065103 kubelet[2281]: I0413 20:12:17.063668 2281 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4a7e5fe162a9fe0e620b53877ef0203a-kubeconfig\") pod \"kube-controller-manager-srv-pcqx3.gb1.brightbox.com\" (UID: \"4a7e5fe162a9fe0e620b53877ef0203a\") " pod="kube-system/kube-controller-manager-srv-pcqx3.gb1.brightbox.com" Apr 13 20:12:17.065103 kubelet[2281]: I0413 20:12:17.063696 2281 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3636124157495b5512126e5837236483-kubeconfig\") pod \"kube-scheduler-srv-pcqx3.gb1.brightbox.com\" (UID: \"3636124157495b5512126e5837236483\") " pod="kube-system/kube-scheduler-srv-pcqx3.gb1.brightbox.com" Apr 13 20:12:17.065103 kubelet[2281]: I0413 20:12:17.063727 2281 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/65e23e433bf3ab6bbc3c3327e093e0a7-k8s-certs\") pod 
\"kube-apiserver-srv-pcqx3.gb1.brightbox.com\" (UID: \"65e23e433bf3ab6bbc3c3327e093e0a7\") " pod="kube-system/kube-apiserver-srv-pcqx3.gb1.brightbox.com" Apr 13 20:12:17.066106 kubelet[2281]: I0413 20:12:17.063755 2281 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4a7e5fe162a9fe0e620b53877ef0203a-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-pcqx3.gb1.brightbox.com\" (UID: \"4a7e5fe162a9fe0e620b53877ef0203a\") " pod="kube-system/kube-controller-manager-srv-pcqx3.gb1.brightbox.com" Apr 13 20:12:17.066106 kubelet[2281]: I0413 20:12:17.063782 2281 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/65e23e433bf3ab6bbc3c3327e093e0a7-ca-certs\") pod \"kube-apiserver-srv-pcqx3.gb1.brightbox.com\" (UID: \"65e23e433bf3ab6bbc3c3327e093e0a7\") " pod="kube-system/kube-apiserver-srv-pcqx3.gb1.brightbox.com" Apr 13 20:12:17.066106 kubelet[2281]: I0413 20:12:17.063809 2281 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/65e23e433bf3ab6bbc3c3327e093e0a7-usr-share-ca-certificates\") pod \"kube-apiserver-srv-pcqx3.gb1.brightbox.com\" (UID: \"65e23e433bf3ab6bbc3c3327e093e0a7\") " pod="kube-system/kube-apiserver-srv-pcqx3.gb1.brightbox.com" Apr 13 20:12:17.066106 kubelet[2281]: I0413 20:12:17.063835 2281 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4a7e5fe162a9fe0e620b53877ef0203a-ca-certs\") pod \"kube-controller-manager-srv-pcqx3.gb1.brightbox.com\" (UID: \"4a7e5fe162a9fe0e620b53877ef0203a\") " pod="kube-system/kube-controller-manager-srv-pcqx3.gb1.brightbox.com" Apr 13 20:12:17.067679 kubelet[2281]: E0413 20:12:17.066799 2281 
controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.14.202:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-pcqx3.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.14.202:6443: connect: connection refused" interval="400ms" Apr 13 20:12:17.067679 kubelet[2281]: E0413 20:12:17.067494 2281 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-pcqx3.gb1.brightbox.com\" not found" node="srv-pcqx3.gb1.brightbox.com" Apr 13 20:12:17.069376 systemd[1]: Created slice kubepods-burstable-pod3636124157495b5512126e5837236483.slice - libcontainer container kubepods-burstable-pod3636124157495b5512126e5837236483.slice. Apr 13 20:12:17.071529 kubelet[2281]: I0413 20:12:17.071233 2281 kubelet_node_status.go:75] "Attempting to register node" node="srv-pcqx3.gb1.brightbox.com" Apr 13 20:12:17.073139 kubelet[2281]: E0413 20:12:17.073104 2281 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.244.14.202:6443/api/v1/nodes\": dial tcp 10.244.14.202:6443: connect: connection refused" node="srv-pcqx3.gb1.brightbox.com" Apr 13 20:12:17.073847 kubelet[2281]: E0413 20:12:17.073563 2281 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-pcqx3.gb1.brightbox.com\" not found" node="srv-pcqx3.gb1.brightbox.com" Apr 13 20:12:17.276512 kubelet[2281]: I0413 20:12:17.276275 2281 kubelet_node_status.go:75] "Attempting to register node" node="srv-pcqx3.gb1.brightbox.com" Apr 13 20:12:17.276985 kubelet[2281]: E0413 20:12:17.276936 2281 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.244.14.202:6443/api/v1/nodes\": dial tcp 10.244.14.202:6443: connect: connection refused" node="srv-pcqx3.gb1.brightbox.com" Apr 13 20:12:17.354982 containerd[1505]: time="2026-04-13T20:12:17.354780747Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-srv-pcqx3.gb1.brightbox.com,Uid:65e23e433bf3ab6bbc3c3327e093e0a7,Namespace:kube-system,Attempt:0,}" Apr 13 20:12:17.369644 containerd[1505]: time="2026-04-13T20:12:17.369593283Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-pcqx3.gb1.brightbox.com,Uid:4a7e5fe162a9fe0e620b53877ef0203a,Namespace:kube-system,Attempt:0,}" Apr 13 20:12:17.375351 containerd[1505]: time="2026-04-13T20:12:17.374894294Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-pcqx3.gb1.brightbox.com,Uid:3636124157495b5512126e5837236483,Namespace:kube-system,Attempt:0,}" Apr 13 20:12:17.467498 kubelet[2281]: E0413 20:12:17.467421 2281 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.14.202:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-pcqx3.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.14.202:6443: connect: connection refused" interval="800ms" Apr 13 20:12:17.680842 kubelet[2281]: I0413 20:12:17.680606 2281 kubelet_node_status.go:75] "Attempting to register node" node="srv-pcqx3.gb1.brightbox.com" Apr 13 20:12:17.681924 kubelet[2281]: E0413 20:12:17.681890 2281 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.244.14.202:6443/api/v1/nodes\": dial tcp 10.244.14.202:6443: connect: connection refused" node="srv-pcqx3.gb1.brightbox.com" Apr 13 20:12:17.849546 kubelet[2281]: E0413 20:12:17.849442 2281 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.244.14.202:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.244.14.202:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 13 20:12:17.908969 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3952514966.mount: Deactivated successfully. 
Apr 13 20:12:17.916591 containerd[1505]: time="2026-04-13T20:12:17.916539681Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 13 20:12:17.920640 containerd[1505]: time="2026-04-13T20:12:17.920568304Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 13 20:12:17.921361 containerd[1505]: time="2026-04-13T20:12:17.921314107Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 13 20:12:17.922524 containerd[1505]: time="2026-04-13T20:12:17.922430134Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 13 20:12:17.924275 containerd[1505]: time="2026-04-13T20:12:17.924140883Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Apr 13 20:12:17.925370 containerd[1505]: time="2026-04-13T20:12:17.925195041Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 13 20:12:17.925370 containerd[1505]: time="2026-04-13T20:12:17.925284553Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 13 20:12:17.929854 containerd[1505]: time="2026-04-13T20:12:17.929809235Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 13 20:12:17.932572 
containerd[1505]: time="2026-04-13T20:12:17.932210043Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 577.186249ms" Apr 13 20:12:17.935310 containerd[1505]: time="2026-04-13T20:12:17.935272881Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 560.297626ms" Apr 13 20:12:17.937070 containerd[1505]: time="2026-04-13T20:12:17.937029327Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 567.329646ms" Apr 13 20:12:18.030457 kubelet[2281]: E0413 20:12:18.030369 2281 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.244.14.202:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.244.14.202:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 13 20:12:18.166215 containerd[1505]: time="2026-04-13T20:12:18.165773518Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:12:18.167352 containerd[1505]: time="2026-04-13T20:12:18.166390937Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:12:18.167352 containerd[1505]: time="2026-04-13T20:12:18.166438607Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:12:18.167352 containerd[1505]: time="2026-04-13T20:12:18.166590870Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:12:18.167991 kubelet[2281]: E0413 20:12:18.167938 2281 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.244.14.202:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.244.14.202:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 13 20:12:18.170247 containerd[1505]: time="2026-04-13T20:12:18.168637416Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:12:18.170247 containerd[1505]: time="2026-04-13T20:12:18.168720869Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:12:18.170247 containerd[1505]: time="2026-04-13T20:12:18.168741838Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:12:18.170247 containerd[1505]: time="2026-04-13T20:12:18.168842113Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:12:18.171243 containerd[1505]: time="2026-04-13T20:12:18.169894660Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:12:18.171243 containerd[1505]: time="2026-04-13T20:12:18.169960404Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:12:18.171243 containerd[1505]: time="2026-04-13T20:12:18.169985016Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:12:18.171243 containerd[1505]: time="2026-04-13T20:12:18.170092111Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:12:18.250850 systemd[1]: Started cri-containerd-b48405a58beff353614381236993bd29c7a514ddf64ac5a17495c16699fab0ae.scope - libcontainer container b48405a58beff353614381236993bd29c7a514ddf64ac5a17495c16699fab0ae. Apr 13 20:12:18.264863 systemd[1]: Started cri-containerd-485071edfcfa50b064211a4520fee202ea12f43423ebc83d25cd0f7f94b6f929.scope - libcontainer container 485071edfcfa50b064211a4520fee202ea12f43423ebc83d25cd0f7f94b6f929. Apr 13 20:12:18.270835 kubelet[2281]: E0413 20:12:18.269066 2281 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.14.202:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-pcqx3.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.14.202:6443: connect: connection refused" interval="1.6s" Apr 13 20:12:18.269703 systemd[1]: Started cri-containerd-e95df150e5a021ef71439c38173f51052af9b60dc2187beb9b0a36a3372be51f.scope - libcontainer container e95df150e5a021ef71439c38173f51052af9b60dc2187beb9b0a36a3372be51f. 
Apr 13 20:12:18.302993 kubelet[2281]: E0413 20:12:18.302327 2281 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.244.14.202:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-pcqx3.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.244.14.202:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 13 20:12:18.320236 kubelet[2281]: E0413 20:12:18.320057 2281 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.244.14.202:6443/api/v1/namespaces/default/events\": dial tcp 10.244.14.202:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-pcqx3.gb1.brightbox.com.18a603ab30e40a1c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-pcqx3.gb1.brightbox.com,UID:srv-pcqx3.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-pcqx3.gb1.brightbox.com,},FirstTimestamp:2026-04-13 20:12:16.838781468 +0000 UTC m=+1.151345857,LastTimestamp:2026-04-13 20:12:16.838781468 +0000 UTC m=+1.151345857,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-pcqx3.gb1.brightbox.com,}" Apr 13 20:12:18.367998 containerd[1505]: time="2026-04-13T20:12:18.367946135Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-pcqx3.gb1.brightbox.com,Uid:4a7e5fe162a9fe0e620b53877ef0203a,Namespace:kube-system,Attempt:0,} returns sandbox id \"b48405a58beff353614381236993bd29c7a514ddf64ac5a17495c16699fab0ae\"" Apr 13 20:12:18.387902 containerd[1505]: time="2026-04-13T20:12:18.386881607Z" level=info msg="CreateContainer within sandbox \"b48405a58beff353614381236993bd29c7a514ddf64ac5a17495c16699fab0ae\" for container 
&ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 13 20:12:18.391822 containerd[1505]: time="2026-04-13T20:12:18.391785088Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-pcqx3.gb1.brightbox.com,Uid:65e23e433bf3ab6bbc3c3327e093e0a7,Namespace:kube-system,Attempt:0,} returns sandbox id \"e95df150e5a021ef71439c38173f51052af9b60dc2187beb9b0a36a3372be51f\"" Apr 13 20:12:18.399779 containerd[1505]: time="2026-04-13T20:12:18.399733716Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-pcqx3.gb1.brightbox.com,Uid:3636124157495b5512126e5837236483,Namespace:kube-system,Attempt:0,} returns sandbox id \"485071edfcfa50b064211a4520fee202ea12f43423ebc83d25cd0f7f94b6f929\"" Apr 13 20:12:18.401303 containerd[1505]: time="2026-04-13T20:12:18.401177792Z" level=info msg="CreateContainer within sandbox \"e95df150e5a021ef71439c38173f51052af9b60dc2187beb9b0a36a3372be51f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 13 20:12:18.404727 containerd[1505]: time="2026-04-13T20:12:18.404679354Z" level=info msg="CreateContainer within sandbox \"485071edfcfa50b064211a4520fee202ea12f43423ebc83d25cd0f7f94b6f929\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 13 20:12:18.423243 containerd[1505]: time="2026-04-13T20:12:18.423187713Z" level=info msg="CreateContainer within sandbox \"b48405a58beff353614381236993bd29c7a514ddf64ac5a17495c16699fab0ae\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"81c9badc9f70f891b41f01037f3f367e0b09b5314398acc346ba0ea5ff1ab39d\"" Apr 13 20:12:18.425060 containerd[1505]: time="2026-04-13T20:12:18.425019896Z" level=info msg="StartContainer for \"81c9badc9f70f891b41f01037f3f367e0b09b5314398acc346ba0ea5ff1ab39d\"" Apr 13 20:12:18.426915 containerd[1505]: time="2026-04-13T20:12:18.426610759Z" level=info msg="CreateContainer within sandbox \"e95df150e5a021ef71439c38173f51052af9b60dc2187beb9b0a36a3372be51f\" for 
&ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f617184e410055db200f21ae0eb7ffe43d75a51754967609bf8ea5cedf63f446\"" Apr 13 20:12:18.428500 containerd[1505]: time="2026-04-13T20:12:18.427511717Z" level=info msg="StartContainer for \"f617184e410055db200f21ae0eb7ffe43d75a51754967609bf8ea5cedf63f446\"" Apr 13 20:12:18.431141 containerd[1505]: time="2026-04-13T20:12:18.431106042Z" level=info msg="CreateContainer within sandbox \"485071edfcfa50b064211a4520fee202ea12f43423ebc83d25cd0f7f94b6f929\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"1bfbff75d3b8605fdf29529860ee969763b9d04ab1b4eb429de7b5a3619ff1e1\"" Apr 13 20:12:18.431711 containerd[1505]: time="2026-04-13T20:12:18.431682519Z" level=info msg="StartContainer for \"1bfbff75d3b8605fdf29529860ee969763b9d04ab1b4eb429de7b5a3619ff1e1\"" Apr 13 20:12:18.474772 systemd[1]: Started cri-containerd-81c9badc9f70f891b41f01037f3f367e0b09b5314398acc346ba0ea5ff1ab39d.scope - libcontainer container 81c9badc9f70f891b41f01037f3f367e0b09b5314398acc346ba0ea5ff1ab39d. Apr 13 20:12:18.489445 kubelet[2281]: I0413 20:12:18.489407 2281 kubelet_node_status.go:75] "Attempting to register node" node="srv-pcqx3.gb1.brightbox.com" Apr 13 20:12:18.491039 kubelet[2281]: E0413 20:12:18.490999 2281 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.244.14.202:6443/api/v1/nodes\": dial tcp 10.244.14.202:6443: connect: connection refused" node="srv-pcqx3.gb1.brightbox.com" Apr 13 20:12:18.491756 systemd[1]: Started cri-containerd-f617184e410055db200f21ae0eb7ffe43d75a51754967609bf8ea5cedf63f446.scope - libcontainer container f617184e410055db200f21ae0eb7ffe43d75a51754967609bf8ea5cedf63f446. Apr 13 20:12:18.495701 update_engine[1488]: I20260413 20:12:18.495582 1488 update_attempter.cc:509] Updating boot flags... 
Apr 13 20:12:18.507670 systemd[1]: Started cri-containerd-1bfbff75d3b8605fdf29529860ee969763b9d04ab1b4eb429de7b5a3619ff1e1.scope - libcontainer container 1bfbff75d3b8605fdf29529860ee969763b9d04ab1b4eb429de7b5a3619ff1e1. Apr 13 20:12:18.587772 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (2528) Apr 13 20:12:18.702575 containerd[1505]: time="2026-04-13T20:12:18.701956155Z" level=info msg="StartContainer for \"81c9badc9f70f891b41f01037f3f367e0b09b5314398acc346ba0ea5ff1ab39d\" returns successfully" Apr 13 20:12:18.724270 containerd[1505]: time="2026-04-13T20:12:18.724130171Z" level=info msg="StartContainer for \"1bfbff75d3b8605fdf29529860ee969763b9d04ab1b4eb429de7b5a3619ff1e1\" returns successfully" Apr 13 20:12:18.761420 containerd[1505]: time="2026-04-13T20:12:18.758793693Z" level=info msg="StartContainer for \"f617184e410055db200f21ae0eb7ffe43d75a51754967609bf8ea5cedf63f446\" returns successfully" Apr 13 20:12:18.815516 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (2533) Apr 13 20:12:18.921717 kubelet[2281]: E0413 20:12:18.921663 2281 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.244.14.202:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.244.14.202:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 13 20:12:18.948515 kubelet[2281]: E0413 20:12:18.947918 2281 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-pcqx3.gb1.brightbox.com\" not found" node="srv-pcqx3.gb1.brightbox.com" Apr 13 20:12:18.955681 kubelet[2281]: E0413 20:12:18.954382 2281 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-pcqx3.gb1.brightbox.com\" not found" 
node="srv-pcqx3.gb1.brightbox.com" Apr 13 20:12:18.956997 kubelet[2281]: E0413 20:12:18.956903 2281 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-pcqx3.gb1.brightbox.com\" not found" node="srv-pcqx3.gb1.brightbox.com" Apr 13 20:12:18.973568 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (2533) Apr 13 20:12:19.959563 kubelet[2281]: E0413 20:12:19.959176 2281 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-pcqx3.gb1.brightbox.com\" not found" node="srv-pcqx3.gb1.brightbox.com" Apr 13 20:12:19.961767 kubelet[2281]: E0413 20:12:19.961580 2281 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-pcqx3.gb1.brightbox.com\" not found" node="srv-pcqx3.gb1.brightbox.com" Apr 13 20:12:20.094510 kubelet[2281]: I0413 20:12:20.094275 2281 kubelet_node_status.go:75] "Attempting to register node" node="srv-pcqx3.gb1.brightbox.com" Apr 13 20:12:20.970466 kubelet[2281]: E0413 20:12:20.970422 2281 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-pcqx3.gb1.brightbox.com\" not found" node="srv-pcqx3.gb1.brightbox.com" Apr 13 20:12:21.827703 kubelet[2281]: E0413 20:12:21.826812 2281 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"srv-pcqx3.gb1.brightbox.com\" not found" node="srv-pcqx3.gb1.brightbox.com" Apr 13 20:12:21.832458 kubelet[2281]: I0413 20:12:21.831511 2281 apiserver.go:52] "Watching apiserver" Apr 13 20:12:21.859522 kubelet[2281]: I0413 20:12:21.859026 2281 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 13 20:12:21.922423 kubelet[2281]: I0413 20:12:21.920906 2281 kubelet_node_status.go:78] "Successfully registered node" node="srv-pcqx3.gb1.brightbox.com" Apr 13 
20:12:21.922423 kubelet[2281]: E0413 20:12:21.921000 2281 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"srv-pcqx3.gb1.brightbox.com\": node \"srv-pcqx3.gb1.brightbox.com\" not found" Apr 13 20:12:21.958888 kubelet[2281]: I0413 20:12:21.958638 2281 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-pcqx3.gb1.brightbox.com" Apr 13 20:12:21.975936 kubelet[2281]: E0413 20:12:21.975872 2281 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-pcqx3.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-srv-pcqx3.gb1.brightbox.com" Apr 13 20:12:21.975936 kubelet[2281]: I0413 20:12:21.975931 2281 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-pcqx3.gb1.brightbox.com" Apr 13 20:12:21.980603 kubelet[2281]: E0413 20:12:21.980558 2281 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-pcqx3.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-srv-pcqx3.gb1.brightbox.com" Apr 13 20:12:21.980603 kubelet[2281]: I0413 20:12:21.980592 2281 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-pcqx3.gb1.brightbox.com" Apr 13 20:12:21.987709 kubelet[2281]: E0413 20:12:21.987670 2281 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-srv-pcqx3.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-srv-pcqx3.gb1.brightbox.com" Apr 13 20:12:23.072738 kubelet[2281]: I0413 20:12:23.072686 2281 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-pcqx3.gb1.brightbox.com" Apr 13 20:12:23.080885 kubelet[2281]: I0413 20:12:23.080852 2281 warnings.go:110] "Warning: 
metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Apr 13 20:12:24.224545 systemd[1]: Reloading requested from client PID 2581 ('systemctl') (unit session-11.scope)... Apr 13 20:12:24.224588 systemd[1]: Reloading... Apr 13 20:12:24.346921 zram_generator::config[2620]: No configuration found. Apr 13 20:12:24.542810 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 13 20:12:24.676339 systemd[1]: Reloading finished in 451 ms. Apr 13 20:12:24.737846 kubelet[2281]: I0413 20:12:24.737401 2281 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 13 20:12:24.737782 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 20:12:24.752090 systemd[1]: kubelet.service: Deactivated successfully. Apr 13 20:12:24.752463 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 20:12:24.752605 systemd[1]: kubelet.service: Consumed 1.724s CPU time, 128.9M memory peak, 0B memory swap peak. Apr 13 20:12:24.764983 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 20:12:25.033922 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 20:12:25.046071 (kubelet)[2684]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 13 20:12:25.144240 kubelet[2684]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 13 20:12:25.146276 kubelet[2684]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. 
Image garbage collector will get sandbox image information from CRI. Apr 13 20:12:25.146276 kubelet[2684]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 13 20:12:25.146276 kubelet[2684]: I0413 20:12:25.144686 2684 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 13 20:12:25.157999 kubelet[2684]: I0413 20:12:25.157925 2684 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Apr 13 20:12:25.157999 kubelet[2684]: I0413 20:12:25.157973 2684 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 13 20:12:25.158363 kubelet[2684]: I0413 20:12:25.158329 2684 server.go:956] "Client rotation is on, will bootstrap in background" Apr 13 20:12:25.160598 kubelet[2684]: I0413 20:12:25.160556 2684 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 13 20:12:25.165135 kubelet[2684]: I0413 20:12:25.164549 2684 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 13 20:12:25.173095 kubelet[2684]: E0413 20:12:25.170663 2684 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 13 20:12:25.173095 kubelet[2684]: I0413 20:12:25.170706 2684 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Apr 13 20:12:25.180884 kubelet[2684]: I0413 20:12:25.180449 2684 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Apr 13 20:12:25.182641 kubelet[2684]: I0413 20:12:25.182400 2684 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 13 20:12:25.183797 kubelet[2684]: I0413 20:12:25.183069 2684 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-pcqx3.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 13 20:12:25.184248 kubelet[2684]: I0413 20:12:25.183828 2684 topology_manager.go:138] "Creating topology manager with none policy" Apr 13 
20:12:25.184248 kubelet[2684]: I0413 20:12:25.183849 2684 container_manager_linux.go:303] "Creating device plugin manager" Apr 13 20:12:25.184248 kubelet[2684]: I0413 20:12:25.183977 2684 state_mem.go:36] "Initialized new in-memory state store" Apr 13 20:12:25.184690 kubelet[2684]: I0413 20:12:25.184328 2684 kubelet.go:480] "Attempting to sync node with API server" Apr 13 20:12:25.184690 kubelet[2684]: I0413 20:12:25.184358 2684 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 13 20:12:25.184690 kubelet[2684]: I0413 20:12:25.184406 2684 kubelet.go:386] "Adding apiserver pod source" Apr 13 20:12:25.184690 kubelet[2684]: I0413 20:12:25.184437 2684 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 13 20:12:25.189914 kubelet[2684]: I0413 20:12:25.189864 2684 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 13 20:12:25.191636 kubelet[2684]: I0413 20:12:25.190582 2684 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 13 20:12:25.205169 kubelet[2684]: I0413 20:12:25.203460 2684 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 13 20:12:25.205169 kubelet[2684]: I0413 20:12:25.203553 2684 server.go:1289] "Started kubelet" Apr 13 20:12:25.208642 kubelet[2684]: I0413 20:12:25.208615 2684 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 13 20:12:25.218566 kubelet[2684]: I0413 20:12:25.218373 2684 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 13 20:12:25.227301 kubelet[2684]: I0413 20:12:25.219301 2684 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 13 20:12:25.229592 kubelet[2684]: I0413 20:12:25.219321 2684 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Apr 13 20:12:25.229689 kubelet[2684]: I0413 20:12:25.220110 2684 ratelimit.go:55] "Setting 
rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 13 20:12:25.234262 kubelet[2684]: I0413 20:12:25.234232 2684 factory.go:223] Registration of the systemd container factory successfully Apr 13 20:12:25.234559 kubelet[2684]: I0413 20:12:25.234528 2684 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 13 20:12:25.236011 kubelet[2684]: I0413 20:12:25.224687 2684 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 13 20:12:25.241879 kubelet[2684]: I0413 20:12:25.241852 2684 server.go:317] "Adding debug handlers to kubelet server" Apr 13 20:12:25.245891 kubelet[2684]: E0413 20:12:25.222405 2684 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-pcqx3.gb1.brightbox.com\" not found" Apr 13 20:12:25.253942 kubelet[2684]: I0413 20:12:25.237984 2684 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 13 20:12:25.253942 kubelet[2684]: I0413 20:12:25.237191 2684 reconciler.go:26] "Reconciler: start to sync state" Apr 13 20:12:25.256990 kubelet[2684]: I0413 20:12:25.256665 2684 factory.go:223] Registration of the containerd container factory successfully Apr 13 20:12:25.270542 kubelet[2684]: I0413 20:12:25.269618 2684 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Apr 13 20:12:25.280230 kubelet[2684]: E0413 20:12:25.278552 2684 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 13 20:12:25.286500 sudo[2705]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Apr 13 20:12:25.287133 sudo[2705]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Apr 13 20:12:25.328075 kubelet[2684]: I0413 20:12:25.328035 2684 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Apr 13 20:12:25.328322 kubelet[2684]: I0413 20:12:25.328088 2684 status_manager.go:230] "Starting to sync pod status with apiserver" Apr 13 20:12:25.328322 kubelet[2684]: I0413 20:12:25.328125 2684 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 13 20:12:25.328322 kubelet[2684]: I0413 20:12:25.328138 2684 kubelet.go:2436] "Starting kubelet main sync loop" Apr 13 20:12:25.328322 kubelet[2684]: E0413 20:12:25.328199 2684 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 13 20:12:25.380170 kubelet[2684]: I0413 20:12:25.377782 2684 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 13 20:12:25.380170 kubelet[2684]: I0413 20:12:25.377811 2684 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 13 20:12:25.380170 kubelet[2684]: I0413 20:12:25.377846 2684 state_mem.go:36] "Initialized new in-memory state store" Apr 13 20:12:25.380170 kubelet[2684]: I0413 20:12:25.378726 2684 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 13 20:12:25.380170 kubelet[2684]: I0413 20:12:25.378747 2684 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 13 20:12:25.380170 kubelet[2684]: I0413 20:12:25.378782 2684 policy_none.go:49] "None policy: Start" Apr 13 20:12:25.380170 kubelet[2684]: I0413 20:12:25.378815 2684 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 13 
20:12:25.380170 kubelet[2684]: I0413 20:12:25.378849 2684 state_mem.go:35] "Initializing new in-memory state store" Apr 13 20:12:25.380170 kubelet[2684]: I0413 20:12:25.379063 2684 state_mem.go:75] "Updated machine memory state" Apr 13 20:12:25.386526 kubelet[2684]: E0413 20:12:25.386288 2684 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 13 20:12:25.389290 kubelet[2684]: I0413 20:12:25.388590 2684 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 13 20:12:25.389594 kubelet[2684]: I0413 20:12:25.389542 2684 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 13 20:12:25.392288 kubelet[2684]: I0413 20:12:25.390092 2684 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 13 20:12:25.395689 kubelet[2684]: E0413 20:12:25.394317 2684 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Apr 13 20:12:25.430093 kubelet[2684]: I0413 20:12:25.430006 2684 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-pcqx3.gb1.brightbox.com" Apr 13 20:12:25.431966 kubelet[2684]: I0413 20:12:25.431945 2684 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-pcqx3.gb1.brightbox.com" Apr 13 20:12:25.432208 kubelet[2684]: I0413 20:12:25.432187 2684 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-pcqx3.gb1.brightbox.com" Apr 13 20:12:25.446762 kubelet[2684]: I0413 20:12:25.446720 2684 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Apr 13 20:12:25.447357 kubelet[2684]: I0413 20:12:25.446997 2684 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Apr 13 20:12:25.447357 kubelet[2684]: E0413 20:12:25.447064 2684 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-pcqx3.gb1.brightbox.com\" already exists" pod="kube-system/kube-apiserver-srv-pcqx3.gb1.brightbox.com" Apr 13 20:12:25.451816 kubelet[2684]: I0413 20:12:25.451771 2684 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Apr 13 20:12:25.455344 kubelet[2684]: I0413 20:12:25.455223 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4a7e5fe162a9fe0e620b53877ef0203a-ca-certs\") pod \"kube-controller-manager-srv-pcqx3.gb1.brightbox.com\" (UID: \"4a7e5fe162a9fe0e620b53877ef0203a\") " pod="kube-system/kube-controller-manager-srv-pcqx3.gb1.brightbox.com" Apr 13 
20:12:25.455344 kubelet[2684]: I0413 20:12:25.455294 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4a7e5fe162a9fe0e620b53877ef0203a-flexvolume-dir\") pod \"kube-controller-manager-srv-pcqx3.gb1.brightbox.com\" (UID: \"4a7e5fe162a9fe0e620b53877ef0203a\") " pod="kube-system/kube-controller-manager-srv-pcqx3.gb1.brightbox.com" Apr 13 20:12:25.455699 kubelet[2684]: I0413 20:12:25.455537 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4a7e5fe162a9fe0e620b53877ef0203a-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-pcqx3.gb1.brightbox.com\" (UID: \"4a7e5fe162a9fe0e620b53877ef0203a\") " pod="kube-system/kube-controller-manager-srv-pcqx3.gb1.brightbox.com" Apr 13 20:12:25.455988 kubelet[2684]: I0413 20:12:25.455643 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/65e23e433bf3ab6bbc3c3327e093e0a7-k8s-certs\") pod \"kube-apiserver-srv-pcqx3.gb1.brightbox.com\" (UID: \"65e23e433bf3ab6bbc3c3327e093e0a7\") " pod="kube-system/kube-apiserver-srv-pcqx3.gb1.brightbox.com" Apr 13 20:12:25.456230 kubelet[2684]: I0413 20:12:25.456097 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/65e23e433bf3ab6bbc3c3327e093e0a7-usr-share-ca-certificates\") pod \"kube-apiserver-srv-pcqx3.gb1.brightbox.com\" (UID: \"65e23e433bf3ab6bbc3c3327e093e0a7\") " pod="kube-system/kube-apiserver-srv-pcqx3.gb1.brightbox.com" Apr 13 20:12:25.456230 kubelet[2684]: I0413 20:12:25.456186 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/4a7e5fe162a9fe0e620b53877ef0203a-k8s-certs\") pod \"kube-controller-manager-srv-pcqx3.gb1.brightbox.com\" (UID: \"4a7e5fe162a9fe0e620b53877ef0203a\") " pod="kube-system/kube-controller-manager-srv-pcqx3.gb1.brightbox.com" Apr 13 20:12:25.456623 kubelet[2684]: I0413 20:12:25.456409 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4a7e5fe162a9fe0e620b53877ef0203a-kubeconfig\") pod \"kube-controller-manager-srv-pcqx3.gb1.brightbox.com\" (UID: \"4a7e5fe162a9fe0e620b53877ef0203a\") " pod="kube-system/kube-controller-manager-srv-pcqx3.gb1.brightbox.com" Apr 13 20:12:25.456623 kubelet[2684]: I0413 20:12:25.456505 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3636124157495b5512126e5837236483-kubeconfig\") pod \"kube-scheduler-srv-pcqx3.gb1.brightbox.com\" (UID: \"3636124157495b5512126e5837236483\") " pod="kube-system/kube-scheduler-srv-pcqx3.gb1.brightbox.com" Apr 13 20:12:25.456623 kubelet[2684]: I0413 20:12:25.456582 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/65e23e433bf3ab6bbc3c3327e093e0a7-ca-certs\") pod \"kube-apiserver-srv-pcqx3.gb1.brightbox.com\" (UID: \"65e23e433bf3ab6bbc3c3327e093e0a7\") " pod="kube-system/kube-apiserver-srv-pcqx3.gb1.brightbox.com" Apr 13 20:12:25.522570 kubelet[2684]: I0413 20:12:25.520558 2684 kubelet_node_status.go:75] "Attempting to register node" node="srv-pcqx3.gb1.brightbox.com" Apr 13 20:12:25.537622 kubelet[2684]: I0413 20:12:25.537483 2684 kubelet_node_status.go:124] "Node was previously registered" node="srv-pcqx3.gb1.brightbox.com" Apr 13 20:12:25.537622 kubelet[2684]: I0413 20:12:25.537598 2684 kubelet_node_status.go:78] "Successfully registered node" node="srv-pcqx3.gb1.brightbox.com" Apr 
13 20:12:26.126746 sudo[2705]: pam_unix(sudo:session): session closed for user root Apr 13 20:12:26.189466 kubelet[2684]: I0413 20:12:26.189404 2684 apiserver.go:52] "Watching apiserver" Apr 13 20:12:26.229985 kubelet[2684]: I0413 20:12:26.229905 2684 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 13 20:12:26.354499 kubelet[2684]: I0413 20:12:26.352730 2684 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-pcqx3.gb1.brightbox.com" Apr 13 20:12:26.362501 kubelet[2684]: I0413 20:12:26.362199 2684 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Apr 13 20:12:26.362501 kubelet[2684]: E0413 20:12:26.362278 2684 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-pcqx3.gb1.brightbox.com\" already exists" pod="kube-system/kube-apiserver-srv-pcqx3.gb1.brightbox.com" Apr 13 20:12:26.412026 kubelet[2684]: I0413 20:12:26.410570 2684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-srv-pcqx3.gb1.brightbox.com" podStartSLOduration=1.410541398 podStartE2EDuration="1.410541398s" podCreationTimestamp="2026-04-13 20:12:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 20:12:26.396038147 +0000 UTC m=+1.332653479" watchObservedRunningTime="2026-04-13 20:12:26.410541398 +0000 UTC m=+1.347156722" Apr 13 20:12:26.425840 kubelet[2684]: I0413 20:12:26.425269 2684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-srv-pcqx3.gb1.brightbox.com" podStartSLOduration=3.425256068 podStartE2EDuration="3.425256068s" podCreationTimestamp="2026-04-13 20:12:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 
+0000 UTC" observedRunningTime="2026-04-13 20:12:26.411098511 +0000 UTC m=+1.347713823" watchObservedRunningTime="2026-04-13 20:12:26.425256068 +0000 UTC m=+1.361871382" Apr 13 20:12:26.439615 kubelet[2684]: I0413 20:12:26.439434 2684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-srv-pcqx3.gb1.brightbox.com" podStartSLOduration=1.439416325 podStartE2EDuration="1.439416325s" podCreationTimestamp="2026-04-13 20:12:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 20:12:26.426255066 +0000 UTC m=+1.362870394" watchObservedRunningTime="2026-04-13 20:12:26.439416325 +0000 UTC m=+1.376031650" Apr 13 20:12:28.296307 sudo[1761]: pam_unix(sudo:session): session closed for user root Apr 13 20:12:28.315828 sshd[1758]: pam_unix(sshd:session): session closed for user core Apr 13 20:12:28.323918 systemd-logind[1487]: Session 11 logged out. Waiting for processes to exit. Apr 13 20:12:28.325246 systemd[1]: sshd@8-10.244.14.202:22-4.175.71.9:56248.service: Deactivated successfully. Apr 13 20:12:28.329437 systemd[1]: session-11.scope: Deactivated successfully. Apr 13 20:12:28.330176 systemd[1]: session-11.scope: Consumed 7.971s CPU time, 144.9M memory peak, 0B memory swap peak. Apr 13 20:12:28.334016 systemd-logind[1487]: Removed session 11. Apr 13 20:12:30.615750 kubelet[2684]: I0413 20:12:30.615291 2684 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 13 20:12:30.617099 kubelet[2684]: I0413 20:12:30.616853 2684 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 13 20:12:30.617197 containerd[1505]: time="2026-04-13T20:12:30.616323749Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
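The kubepods slice names created next (e.g. `kubepods-besteffort-pod2ab63d07_609b_4811_9b79_ff4cde72c9eb.slice`) carry the pod UID with dashes replaced by underscores, since systemd reserves `-` as the slice hierarchy separator. A hedged sketch recovering the UID from such a name; the naming convention is inferred from the slice and pod UID pairs visible in this log:

```python
def slice_to_pod_uid(slice_name: str) -> str:
    """Recover a pod UID from a kubepods systemd slice name.

    systemd uses '-' as its slice hierarchy separator, so the kubelet's
    systemd cgroup driver writes pod UIDs with '_' in place of '-'
    (assumed from the slice names and matching pod UIDs logged here).
    """
    stem = slice_name.removesuffix(".slice")
    # The UID is everything after the last 'pod' marker in the name.
    _, _, uid_part = stem.rpartition("pod")
    return uid_part.replace("_", "-")

print(slice_to_pod_uid(
    "kubepods-besteffort-pod2ab63d07_609b_4811_9b79_ff4cde72c9eb.slice"
))  # 2ab63d07-609b-4811-9b79-ff4cde72c9eb
```

The recovered UID matches the `kube-proxy-lptj5` pod UID that appears in the volume reconciler entries below.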
Apr 13 20:12:31.161105 systemd[1]: Created slice kubepods-besteffort-pod2ab63d07_609b_4811_9b79_ff4cde72c9eb.slice - libcontainer container kubepods-besteffort-pod2ab63d07_609b_4811_9b79_ff4cde72c9eb.slice. Apr 13 20:12:31.185942 systemd[1]: Created slice kubepods-burstable-podd7658e91_f05e_4ffb_b887_48a8f6089db3.slice - libcontainer container kubepods-burstable-podd7658e91_f05e_4ffb_b887_48a8f6089db3.slice. Apr 13 20:12:31.196066 kubelet[2684]: I0413 20:12:31.195990 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d7658e91-f05e-4ffb-b887-48a8f6089db3-cilium-run\") pod \"cilium-6nc56\" (UID: \"d7658e91-f05e-4ffb-b887-48a8f6089db3\") " pod="kube-system/cilium-6nc56" Apr 13 20:12:31.196066 kubelet[2684]: I0413 20:12:31.196061 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d7658e91-f05e-4ffb-b887-48a8f6089db3-bpf-maps\") pod \"cilium-6nc56\" (UID: \"d7658e91-f05e-4ffb-b887-48a8f6089db3\") " pod="kube-system/cilium-6nc56" Apr 13 20:12:31.196962 kubelet[2684]: I0413 20:12:31.196094 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d7658e91-f05e-4ffb-b887-48a8f6089db3-lib-modules\") pod \"cilium-6nc56\" (UID: \"d7658e91-f05e-4ffb-b887-48a8f6089db3\") " pod="kube-system/cilium-6nc56" Apr 13 20:12:31.196962 kubelet[2684]: I0413 20:12:31.196121 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d7658e91-f05e-4ffb-b887-48a8f6089db3-clustermesh-secrets\") pod \"cilium-6nc56\" (UID: \"d7658e91-f05e-4ffb-b887-48a8f6089db3\") " pod="kube-system/cilium-6nc56" Apr 13 20:12:31.196962 kubelet[2684]: I0413 20:12:31.196149 2684 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rt5mb\" (UniqueName: \"kubernetes.io/projected/d7658e91-f05e-4ffb-b887-48a8f6089db3-kube-api-access-rt5mb\") pod \"cilium-6nc56\" (UID: \"d7658e91-f05e-4ffb-b887-48a8f6089db3\") " pod="kube-system/cilium-6nc56" Apr 13 20:12:31.196962 kubelet[2684]: I0413 20:12:31.196187 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d7658e91-f05e-4ffb-b887-48a8f6089db3-xtables-lock\") pod \"cilium-6nc56\" (UID: \"d7658e91-f05e-4ffb-b887-48a8f6089db3\") " pod="kube-system/cilium-6nc56" Apr 13 20:12:31.196962 kubelet[2684]: I0413 20:12:31.196213 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d7658e91-f05e-4ffb-b887-48a8f6089db3-host-proc-sys-kernel\") pod \"cilium-6nc56\" (UID: \"d7658e91-f05e-4ffb-b887-48a8f6089db3\") " pod="kube-system/cilium-6nc56" Apr 13 20:12:31.196962 kubelet[2684]: I0413 20:12:31.196251 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d7658e91-f05e-4ffb-b887-48a8f6089db3-hubble-tls\") pod \"cilium-6nc56\" (UID: \"d7658e91-f05e-4ffb-b887-48a8f6089db3\") " pod="kube-system/cilium-6nc56" Apr 13 20:12:31.197824 kubelet[2684]: I0413 20:12:31.196306 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d7658e91-f05e-4ffb-b887-48a8f6089db3-hostproc\") pod \"cilium-6nc56\" (UID: \"d7658e91-f05e-4ffb-b887-48a8f6089db3\") " pod="kube-system/cilium-6nc56" Apr 13 20:12:31.197824 kubelet[2684]: I0413 20:12:31.196340 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/d7658e91-f05e-4ffb-b887-48a8f6089db3-cilium-cgroup\") pod \"cilium-6nc56\" (UID: \"d7658e91-f05e-4ffb-b887-48a8f6089db3\") " pod="kube-system/cilium-6nc56" Apr 13 20:12:31.197824 kubelet[2684]: I0413 20:12:31.196372 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d7658e91-f05e-4ffb-b887-48a8f6089db3-cni-path\") pod \"cilium-6nc56\" (UID: \"d7658e91-f05e-4ffb-b887-48a8f6089db3\") " pod="kube-system/cilium-6nc56" Apr 13 20:12:31.197824 kubelet[2684]: I0413 20:12:31.196401 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d7658e91-f05e-4ffb-b887-48a8f6089db3-cilium-config-path\") pod \"cilium-6nc56\" (UID: \"d7658e91-f05e-4ffb-b887-48a8f6089db3\") " pod="kube-system/cilium-6nc56" Apr 13 20:12:31.197824 kubelet[2684]: I0413 20:12:31.196429 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d7658e91-f05e-4ffb-b887-48a8f6089db3-host-proc-sys-net\") pod \"cilium-6nc56\" (UID: \"d7658e91-f05e-4ffb-b887-48a8f6089db3\") " pod="kube-system/cilium-6nc56" Apr 13 20:12:31.197824 kubelet[2684]: I0413 20:12:31.196455 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5vcl\" (UniqueName: \"kubernetes.io/projected/2ab63d07-609b-4811-9b79-ff4cde72c9eb-kube-api-access-f5vcl\") pod \"kube-proxy-lptj5\" (UID: \"2ab63d07-609b-4811-9b79-ff4cde72c9eb\") " pod="kube-system/kube-proxy-lptj5" Apr 13 20:12:31.198129 kubelet[2684]: I0413 20:12:31.196508 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2ab63d07-609b-4811-9b79-ff4cde72c9eb-kube-proxy\") pod \"kube-proxy-lptj5\" 
(UID: \"2ab63d07-609b-4811-9b79-ff4cde72c9eb\") " pod="kube-system/kube-proxy-lptj5" Apr 13 20:12:31.198129 kubelet[2684]: I0413 20:12:31.196561 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2ab63d07-609b-4811-9b79-ff4cde72c9eb-xtables-lock\") pod \"kube-proxy-lptj5\" (UID: \"2ab63d07-609b-4811-9b79-ff4cde72c9eb\") " pod="kube-system/kube-proxy-lptj5" Apr 13 20:12:31.198129 kubelet[2684]: I0413 20:12:31.196599 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2ab63d07-609b-4811-9b79-ff4cde72c9eb-lib-modules\") pod \"kube-proxy-lptj5\" (UID: \"2ab63d07-609b-4811-9b79-ff4cde72c9eb\") " pod="kube-system/kube-proxy-lptj5" Apr 13 20:12:31.198129 kubelet[2684]: I0413 20:12:31.196638 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d7658e91-f05e-4ffb-b887-48a8f6089db3-etc-cni-netd\") pod \"cilium-6nc56\" (UID: \"d7658e91-f05e-4ffb-b887-48a8f6089db3\") " pod="kube-system/cilium-6nc56" Apr 13 20:12:31.343314 kubelet[2684]: E0413 20:12:31.343261 2684 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Apr 13 20:12:31.343508 kubelet[2684]: E0413 20:12:31.343346 2684 projected.go:194] Error preparing data for projected volume kube-api-access-f5vcl for pod kube-system/kube-proxy-lptj5: configmap "kube-root-ca.crt" not found Apr 13 20:12:31.344816 kubelet[2684]: E0413 20:12:31.344783 2684 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Apr 13 20:12:31.344938 kubelet[2684]: E0413 20:12:31.344812 2684 projected.go:194] Error preparing data for projected volume kube-api-access-rt5mb for pod kube-system/cilium-6nc56: configmap "kube-root-ca.crt" not 
found Apr 13 20:12:31.345016 kubelet[2684]: E0413 20:12:31.344849 2684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2ab63d07-609b-4811-9b79-ff4cde72c9eb-kube-api-access-f5vcl podName:2ab63d07-609b-4811-9b79-ff4cde72c9eb nodeName:}" failed. No retries permitted until 2026-04-13 20:12:31.844602948 +0000 UTC m=+6.781218267 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-f5vcl" (UniqueName: "kubernetes.io/projected/2ab63d07-609b-4811-9b79-ff4cde72c9eb-kube-api-access-f5vcl") pod "kube-proxy-lptj5" (UID: "2ab63d07-609b-4811-9b79-ff4cde72c9eb") : configmap "kube-root-ca.crt" not found Apr 13 20:12:31.345211 kubelet[2684]: E0413 20:12:31.345162 2684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d7658e91-f05e-4ffb-b887-48a8f6089db3-kube-api-access-rt5mb podName:d7658e91-f05e-4ffb-b887-48a8f6089db3 nodeName:}" failed. No retries permitted until 2026-04-13 20:12:31.844957796 +0000 UTC m=+6.781573108 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-rt5mb" (UniqueName: "kubernetes.io/projected/d7658e91-f05e-4ffb-b887-48a8f6089db3-kube-api-access-rt5mb") pod "cilium-6nc56" (UID: "d7658e91-f05e-4ffb-b887-48a8f6089db3") : configmap "kube-root-ca.crt" not found Apr 13 20:12:31.821344 systemd[1]: Created slice kubepods-besteffort-pod04216b05_b39a_4b02_82dd_60f52e548622.slice - libcontainer container kubepods-besteffort-pod04216b05_b39a_4b02_82dd_60f52e548622.slice. 
Apr 13 20:12:31.907500 kubelet[2684]: I0413 20:12:31.905381 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/04216b05-b39a-4b02-82dd-60f52e548622-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-zkj5q\" (UID: \"04216b05-b39a-4b02-82dd-60f52e548622\") " pod="kube-system/cilium-operator-6c4d7847fc-zkj5q" Apr 13 20:12:31.907500 kubelet[2684]: I0413 20:12:31.905450 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hv7q6\" (UniqueName: \"kubernetes.io/projected/04216b05-b39a-4b02-82dd-60f52e548622-kube-api-access-hv7q6\") pod \"cilium-operator-6c4d7847fc-zkj5q\" (UID: \"04216b05-b39a-4b02-82dd-60f52e548622\") " pod="kube-system/cilium-operator-6c4d7847fc-zkj5q" Apr 13 20:12:32.082929 containerd[1505]: time="2026-04-13T20:12:32.082740263Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lptj5,Uid:2ab63d07-609b-4811-9b79-ff4cde72c9eb,Namespace:kube-system,Attempt:0,}" Apr 13 20:12:32.092034 containerd[1505]: time="2026-04-13T20:12:32.091667900Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6nc56,Uid:d7658e91-f05e-4ffb-b887-48a8f6089db3,Namespace:kube-system,Attempt:0,}" Apr 13 20:12:32.127804 containerd[1505]: time="2026-04-13T20:12:32.127327514Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-zkj5q,Uid:04216b05-b39a-4b02-82dd-60f52e548622,Namespace:kube-system,Attempt:0,}" Apr 13 20:12:32.136264 containerd[1505]: time="2026-04-13T20:12:32.136130990Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:12:32.138841 containerd[1505]: time="2026-04-13T20:12:32.138555595Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:12:32.138841 containerd[1505]: time="2026-04-13T20:12:32.138589824Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:12:32.138841 containerd[1505]: time="2026-04-13T20:12:32.138756485Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:12:32.144281 containerd[1505]: time="2026-04-13T20:12:32.144176808Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:12:32.144433 containerd[1505]: time="2026-04-13T20:12:32.144242506Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:12:32.144433 containerd[1505]: time="2026-04-13T20:12:32.144286812Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:12:32.144789 containerd[1505]: time="2026-04-13T20:12:32.144420620Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:12:32.173735 systemd[1]: Started cri-containerd-5d52afba6a2ed50a161a074ee2f4d8263d7126395e2fd5188e10f46838112f45.scope - libcontainer container 5d52afba6a2ed50a161a074ee2f4d8263d7126395e2fd5188e10f46838112f45. Apr 13 20:12:32.189622 systemd[1]: Started cri-containerd-7f7a069b2342f5e12a97a1c77f1f49bbd142b41b51373ab2883f4e132492d942.scope - libcontainer container 7f7a069b2342f5e12a97a1c77f1f49bbd142b41b51373ab2883f4e132492d942. Apr 13 20:12:32.223537 containerd[1505]: time="2026-04-13T20:12:32.222240208Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:12:32.223537 containerd[1505]: time="2026-04-13T20:12:32.222316121Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:12:32.223537 containerd[1505]: time="2026-04-13T20:12:32.222346011Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:12:32.223537 containerd[1505]: time="2026-04-13T20:12:32.222534508Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:12:32.258163 containerd[1505]: time="2026-04-13T20:12:32.258105907Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6nc56,Uid:d7658e91-f05e-4ffb-b887-48a8f6089db3,Namespace:kube-system,Attempt:0,} returns sandbox id \"5d52afba6a2ed50a161a074ee2f4d8263d7126395e2fd5188e10f46838112f45\"" Apr 13 20:12:32.264765 containerd[1505]: time="2026-04-13T20:12:32.264716197Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Apr 13 20:12:32.275792 containerd[1505]: time="2026-04-13T20:12:32.275630779Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lptj5,Uid:2ab63d07-609b-4811-9b79-ff4cde72c9eb,Namespace:kube-system,Attempt:0,} returns sandbox id \"7f7a069b2342f5e12a97a1c77f1f49bbd142b41b51373ab2883f4e132492d942\"" Apr 13 20:12:32.277713 systemd[1]: Started cri-containerd-8abf037fe15edb0adc10cf852339384ee54f46553ff91364734008c6cd2456f7.scope - libcontainer container 8abf037fe15edb0adc10cf852339384ee54f46553ff91364734008c6cd2456f7. 
Apr 13 20:12:32.288043 containerd[1505]: time="2026-04-13T20:12:32.287755886Z" level=info msg="CreateContainer within sandbox \"7f7a069b2342f5e12a97a1c77f1f49bbd142b41b51373ab2883f4e132492d942\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 13 20:12:32.325498 containerd[1505]: time="2026-04-13T20:12:32.323661495Z" level=info msg="CreateContainer within sandbox \"7f7a069b2342f5e12a97a1c77f1f49bbd142b41b51373ab2883f4e132492d942\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"8902c7be3af524a7b0a4afb1aed7003b0c4217b73a93f80f354c494b1bf9892f\"" Apr 13 20:12:32.327436 containerd[1505]: time="2026-04-13T20:12:32.327247287Z" level=info msg="StartContainer for \"8902c7be3af524a7b0a4afb1aed7003b0c4217b73a93f80f354c494b1bf9892f\"" Apr 13 20:12:32.387987 containerd[1505]: time="2026-04-13T20:12:32.387818128Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-zkj5q,Uid:04216b05-b39a-4b02-82dd-60f52e548622,Namespace:kube-system,Attempt:0,} returns sandbox id \"8abf037fe15edb0adc10cf852339384ee54f46553ff91364734008c6cd2456f7\"" Apr 13 20:12:32.392365 systemd[1]: run-containerd-runc-k8s.io-8902c7be3af524a7b0a4afb1aed7003b0c4217b73a93f80f354c494b1bf9892f-runc.8t2OmD.mount: Deactivated successfully. Apr 13 20:12:32.403901 systemd[1]: Started cri-containerd-8902c7be3af524a7b0a4afb1aed7003b0c4217b73a93f80f354c494b1bf9892f.scope - libcontainer container 8902c7be3af524a7b0a4afb1aed7003b0c4217b73a93f80f354c494b1bf9892f. 
Apr 13 20:12:32.450797 containerd[1505]: time="2026-04-13T20:12:32.450728956Z" level=info msg="StartContainer for \"8902c7be3af524a7b0a4afb1aed7003b0c4217b73a93f80f354c494b1bf9892f\" returns successfully" Apr 13 20:12:33.399160 kubelet[2684]: I0413 20:12:33.398707 2684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-lptj5" podStartSLOduration=2.398624785 podStartE2EDuration="2.398624785s" podCreationTimestamp="2026-04-13 20:12:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 20:12:33.396909439 +0000 UTC m=+8.333524759" watchObservedRunningTime="2026-04-13 20:12:33.398624785 +0000 UTC m=+8.335240106" Apr 13 20:12:41.569999 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2502094725.mount: Deactivated successfully. Apr 13 20:12:44.900858 containerd[1505]: time="2026-04-13T20:12:44.900725607Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:12:44.903500 containerd[1505]: time="2026-04-13T20:12:44.903401140Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Apr 13 20:12:44.904649 containerd[1505]: time="2026-04-13T20:12:44.904584910Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:12:44.907312 containerd[1505]: time="2026-04-13T20:12:44.907098966Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest 
\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 12.642315122s" Apr 13 20:12:44.907312 containerd[1505]: time="2026-04-13T20:12:44.907161752Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Apr 13 20:12:44.920097 containerd[1505]: time="2026-04-13T20:12:44.919523976Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Apr 13 20:12:44.928042 containerd[1505]: time="2026-04-13T20:12:44.927399814Z" level=info msg="CreateContainer within sandbox \"5d52afba6a2ed50a161a074ee2f4d8263d7126395e2fd5188e10f46838112f45\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Apr 13 20:12:45.075837 containerd[1505]: time="2026-04-13T20:12:45.075765754Z" level=info msg="CreateContainer within sandbox \"5d52afba6a2ed50a161a074ee2f4d8263d7126395e2fd5188e10f46838112f45\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"a4162b0349c514c1251aff98b48b0adcf3990a7a93574590340fb2f356540a99\"" Apr 13 20:12:45.076415 containerd[1505]: time="2026-04-13T20:12:45.076373369Z" level=info msg="StartContainer for \"a4162b0349c514c1251aff98b48b0adcf3990a7a93574590340fb2f356540a99\"" Apr 13 20:12:45.318704 systemd[1]: Started cri-containerd-a4162b0349c514c1251aff98b48b0adcf3990a7a93574590340fb2f356540a99.scope - libcontainer container a4162b0349c514c1251aff98b48b0adcf3990a7a93574590340fb2f356540a99. 
Apr 13 20:12:45.363689 containerd[1505]: time="2026-04-13T20:12:45.363540109Z" level=info msg="StartContainer for \"a4162b0349c514c1251aff98b48b0adcf3990a7a93574590340fb2f356540a99\" returns successfully" Apr 13 20:12:45.381949 systemd[1]: cri-containerd-a4162b0349c514c1251aff98b48b0adcf3990a7a93574590340fb2f356540a99.scope: Deactivated successfully. Apr 13 20:12:45.473338 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a4162b0349c514c1251aff98b48b0adcf3990a7a93574590340fb2f356540a99-rootfs.mount: Deactivated successfully. Apr 13 20:12:45.510445 containerd[1505]: time="2026-04-13T20:12:45.492526794Z" level=info msg="shim disconnected" id=a4162b0349c514c1251aff98b48b0adcf3990a7a93574590340fb2f356540a99 namespace=k8s.io Apr 13 20:12:45.510855 containerd[1505]: time="2026-04-13T20:12:45.510811984Z" level=warning msg="cleaning up after shim disconnected" id=a4162b0349c514c1251aff98b48b0adcf3990a7a93574590340fb2f356540a99 namespace=k8s.io Apr 13 20:12:45.510986 containerd[1505]: time="2026-04-13T20:12:45.510960154Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 13 20:12:46.444689 containerd[1505]: time="2026-04-13T20:12:46.444296279Z" level=info msg="CreateContainer within sandbox \"5d52afba6a2ed50a161a074ee2f4d8263d7126395e2fd5188e10f46838112f45\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Apr 13 20:12:46.479244 containerd[1505]: time="2026-04-13T20:12:46.477561538Z" level=info msg="CreateContainer within sandbox \"5d52afba6a2ed50a161a074ee2f4d8263d7126395e2fd5188e10f46838112f45\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"4dd764fcda1928dac2cd0ec5caca6497df3d4ede1c476f9ecea9d46d7cc41eb4\"" Apr 13 20:12:46.479244 containerd[1505]: time="2026-04-13T20:12:46.478426170Z" level=info msg="StartContainer for \"4dd764fcda1928dac2cd0ec5caca6497df3d4ede1c476f9ecea9d46d7cc41eb4\"" Apr 13 20:12:46.571801 systemd[1]: Started 
cri-containerd-4dd764fcda1928dac2cd0ec5caca6497df3d4ede1c476f9ecea9d46d7cc41eb4.scope - libcontainer container 4dd764fcda1928dac2cd0ec5caca6497df3d4ede1c476f9ecea9d46d7cc41eb4. Apr 13 20:12:46.666219 containerd[1505]: time="2026-04-13T20:12:46.666061333Z" level=info msg="StartContainer for \"4dd764fcda1928dac2cd0ec5caca6497df3d4ede1c476f9ecea9d46d7cc41eb4\" returns successfully" Apr 13 20:12:46.688881 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 13 20:12:46.689323 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 13 20:12:46.689656 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Apr 13 20:12:46.699272 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 13 20:12:46.701529 systemd[1]: cri-containerd-4dd764fcda1928dac2cd0ec5caca6497df3d4ede1c476f9ecea9d46d7cc41eb4.scope: Deactivated successfully. Apr 13 20:12:46.807219 containerd[1505]: time="2026-04-13T20:12:46.806399557Z" level=info msg="shim disconnected" id=4dd764fcda1928dac2cd0ec5caca6497df3d4ede1c476f9ecea9d46d7cc41eb4 namespace=k8s.io Apr 13 20:12:46.807219 containerd[1505]: time="2026-04-13T20:12:46.806657508Z" level=warning msg="cleaning up after shim disconnected" id=4dd764fcda1928dac2cd0ec5caca6497df3d4ede1c476f9ecea9d46d7cc41eb4 namespace=k8s.io Apr 13 20:12:46.807219 containerd[1505]: time="2026-04-13T20:12:46.806676407Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 13 20:12:46.810936 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Apr 13 20:12:46.845363 containerd[1505]: time="2026-04-13T20:12:46.845291248Z" level=warning msg="cleanup warnings time=\"2026-04-13T20:12:46Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Apr 13 20:12:47.454789 containerd[1505]: time="2026-04-13T20:12:47.454687164Z" level=info msg="CreateContainer within sandbox \"5d52afba6a2ed50a161a074ee2f4d8263d7126395e2fd5188e10f46838112f45\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Apr 13 20:12:47.467801 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1884193094.mount: Deactivated successfully. Apr 13 20:12:47.468024 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4dd764fcda1928dac2cd0ec5caca6497df3d4ede1c476f9ecea9d46d7cc41eb4-rootfs.mount: Deactivated successfully. Apr 13 20:12:47.487817 containerd[1505]: time="2026-04-13T20:12:47.487750042Z" level=info msg="CreateContainer within sandbox \"5d52afba6a2ed50a161a074ee2f4d8263d7126395e2fd5188e10f46838112f45\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"fc260800cbac89b3b02e723f5a6c6a30828e6b229eab3c627d0be4237c28ffdd\"" Apr 13 20:12:47.491718 containerd[1505]: time="2026-04-13T20:12:47.491663885Z" level=info msg="StartContainer for \"fc260800cbac89b3b02e723f5a6c6a30828e6b229eab3c627d0be4237c28ffdd\"" Apr 13 20:12:47.566762 systemd[1]: Started cri-containerd-fc260800cbac89b3b02e723f5a6c6a30828e6b229eab3c627d0be4237c28ffdd.scope - libcontainer container fc260800cbac89b3b02e723f5a6c6a30828e6b229eab3c627d0be4237c28ffdd. Apr 13 20:12:47.640720 containerd[1505]: time="2026-04-13T20:12:47.640577713Z" level=info msg="StartContainer for \"fc260800cbac89b3b02e723f5a6c6a30828e6b229eab3c627d0be4237c28ffdd\" returns successfully" Apr 13 20:12:47.654370 systemd[1]: cri-containerd-fc260800cbac89b3b02e723f5a6c6a30828e6b229eab3c627d0be4237c28ffdd.scope: Deactivated successfully. 
Apr 13 20:12:47.683447 containerd[1505]: time="2026-04-13T20:12:47.683394384Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:12:47.684666 containerd[1505]: time="2026-04-13T20:12:47.684616132Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Apr 13 20:12:47.685844 containerd[1505]: time="2026-04-13T20:12:47.685443138Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:12:47.689980 containerd[1505]: time="2026-04-13T20:12:47.689425211Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.769844991s" Apr 13 20:12:47.689980 containerd[1505]: time="2026-04-13T20:12:47.689499113Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Apr 13 20:12:47.698944 containerd[1505]: time="2026-04-13T20:12:47.698341539Z" level=info msg="CreateContainer within sandbox \"8abf037fe15edb0adc10cf852339384ee54f46553ff91364734008c6cd2456f7\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Apr 13 20:12:47.700454 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-fc260800cbac89b3b02e723f5a6c6a30828e6b229eab3c627d0be4237c28ffdd-rootfs.mount: Deactivated successfully. Apr 13 20:12:47.803289 containerd[1505]: time="2026-04-13T20:12:47.803190400Z" level=info msg="shim disconnected" id=fc260800cbac89b3b02e723f5a6c6a30828e6b229eab3c627d0be4237c28ffdd namespace=k8s.io Apr 13 20:12:47.803289 containerd[1505]: time="2026-04-13T20:12:47.803252712Z" level=warning msg="cleaning up after shim disconnected" id=fc260800cbac89b3b02e723f5a6c6a30828e6b229eab3c627d0be4237c28ffdd namespace=k8s.io Apr 13 20:12:47.803289 containerd[1505]: time="2026-04-13T20:12:47.803267917Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 13 20:12:47.814451 containerd[1505]: time="2026-04-13T20:12:47.814310241Z" level=info msg="CreateContainer within sandbox \"8abf037fe15edb0adc10cf852339384ee54f46553ff91364734008c6cd2456f7\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"f13851e76c0188bcf471f5de70254c4db69bbb90943a1bac222d4b1f693f5559\"" Apr 13 20:12:47.816566 containerd[1505]: time="2026-04-13T20:12:47.815731008Z" level=info msg="StartContainer for \"f13851e76c0188bcf471f5de70254c4db69bbb90943a1bac222d4b1f693f5559\"" Apr 13 20:12:47.864708 systemd[1]: Started cri-containerd-f13851e76c0188bcf471f5de70254c4db69bbb90943a1bac222d4b1f693f5559.scope - libcontainer container f13851e76c0188bcf471f5de70254c4db69bbb90943a1bac222d4b1f693f5559. 
Apr 13 20:12:47.904778 containerd[1505]: time="2026-04-13T20:12:47.904687141Z" level=info msg="StartContainer for \"f13851e76c0188bcf471f5de70254c4db69bbb90943a1bac222d4b1f693f5559\" returns successfully" Apr 13 20:12:48.470276 containerd[1505]: time="2026-04-13T20:12:48.470198967Z" level=info msg="CreateContainer within sandbox \"5d52afba6a2ed50a161a074ee2f4d8263d7126395e2fd5188e10f46838112f45\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Apr 13 20:12:48.505402 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount692871361.mount: Deactivated successfully. Apr 13 20:12:48.511907 containerd[1505]: time="2026-04-13T20:12:48.511830108Z" level=info msg="CreateContainer within sandbox \"5d52afba6a2ed50a161a074ee2f4d8263d7126395e2fd5188e10f46838112f45\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"99681b0bc23f723a935b73c31be2027348880798345510ddb70c1b131cf262f4\"" Apr 13 20:12:48.515955 containerd[1505]: time="2026-04-13T20:12:48.515233128Z" level=info msg="StartContainer for \"99681b0bc23f723a935b73c31be2027348880798345510ddb70c1b131cf262f4\"" Apr 13 20:12:48.596762 systemd[1]: Started cri-containerd-99681b0bc23f723a935b73c31be2027348880798345510ddb70c1b131cf262f4.scope - libcontainer container 99681b0bc23f723a935b73c31be2027348880798345510ddb70c1b131cf262f4. Apr 13 20:12:48.663324 containerd[1505]: time="2026-04-13T20:12:48.663273830Z" level=info msg="StartContainer for \"99681b0bc23f723a935b73c31be2027348880798345510ddb70c1b131cf262f4\" returns successfully" Apr 13 20:12:48.669046 systemd[1]: cri-containerd-99681b0bc23f723a935b73c31be2027348880798345510ddb70c1b131cf262f4.scope: Deactivated successfully. 
Apr 13 20:12:48.726584 containerd[1505]: time="2026-04-13T20:12:48.724318548Z" level=info msg="shim disconnected" id=99681b0bc23f723a935b73c31be2027348880798345510ddb70c1b131cf262f4 namespace=k8s.io Apr 13 20:12:48.726584 containerd[1505]: time="2026-04-13T20:12:48.724447666Z" level=warning msg="cleaning up after shim disconnected" id=99681b0bc23f723a935b73c31be2027348880798345510ddb70c1b131cf262f4 namespace=k8s.io Apr 13 20:12:48.726584 containerd[1505]: time="2026-04-13T20:12:48.724490177Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 13 20:12:49.467838 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-99681b0bc23f723a935b73c31be2027348880798345510ddb70c1b131cf262f4-rootfs.mount: Deactivated successfully. Apr 13 20:12:49.475540 containerd[1505]: time="2026-04-13T20:12:49.475488040Z" level=info msg="CreateContainer within sandbox \"5d52afba6a2ed50a161a074ee2f4d8263d7126395e2fd5188e10f46838112f45\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Apr 13 20:12:49.496991 kubelet[2684]: I0413 20:12:49.494956 2684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-zkj5q" podStartSLOduration=3.198137647 podStartE2EDuration="18.494444194s" podCreationTimestamp="2026-04-13 20:12:31 +0000 UTC" firstStartedPulling="2026-04-13 20:12:32.395716953 +0000 UTC m=+7.332332260" lastFinishedPulling="2026-04-13 20:12:47.69202348 +0000 UTC m=+22.628638807" observedRunningTime="2026-04-13 20:12:48.835730933 +0000 UTC m=+23.772346267" watchObservedRunningTime="2026-04-13 20:12:49.494444194 +0000 UTC m=+24.431059654" Apr 13 20:12:49.512505 containerd[1505]: time="2026-04-13T20:12:49.512437176Z" level=info msg="CreateContainer within sandbox \"5d52afba6a2ed50a161a074ee2f4d8263d7126395e2fd5188e10f46838112f45\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"7dc35dc20a20619cbc118cdfa31bf42332aa40cc45afb92d5971d0991e9e0edd\"" Apr 13 20:12:49.513656 
containerd[1505]: time="2026-04-13T20:12:49.513619710Z" level=info msg="StartContainer for \"7dc35dc20a20619cbc118cdfa31bf42332aa40cc45afb92d5971d0991e9e0edd\"" Apr 13 20:12:49.574722 systemd[1]: Started cri-containerd-7dc35dc20a20619cbc118cdfa31bf42332aa40cc45afb92d5971d0991e9e0edd.scope - libcontainer container 7dc35dc20a20619cbc118cdfa31bf42332aa40cc45afb92d5971d0991e9e0edd. Apr 13 20:12:49.626746 containerd[1505]: time="2026-04-13T20:12:49.626445232Z" level=info msg="StartContainer for \"7dc35dc20a20619cbc118cdfa31bf42332aa40cc45afb92d5971d0991e9e0edd\" returns successfully" Apr 13 20:12:49.875180 kubelet[2684]: I0413 20:12:49.875137 2684 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Apr 13 20:12:49.947647 systemd[1]: Created slice kubepods-burstable-pod3407fb89_864a_4bc3_b222_526630d6ed82.slice - libcontainer container kubepods-burstable-pod3407fb89_864a_4bc3_b222_526630d6ed82.slice. Apr 13 20:12:49.961621 systemd[1]: Created slice kubepods-burstable-pod35dc1916_7bd9_4084_a41c_5170cde58e77.slice - libcontainer container kubepods-burstable-pod35dc1916_7bd9_4084_a41c_5170cde58e77.slice. 
Apr 13 20:12:50.055058 kubelet[2684]: I0413 20:12:50.054924 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3407fb89-864a-4bc3-b222-526630d6ed82-config-volume\") pod \"coredns-674b8bbfcf-hfqhq\" (UID: \"3407fb89-864a-4bc3-b222-526630d6ed82\") " pod="kube-system/coredns-674b8bbfcf-hfqhq" Apr 13 20:12:50.055262 kubelet[2684]: I0413 20:12:50.055091 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chbbc\" (UniqueName: \"kubernetes.io/projected/35dc1916-7bd9-4084-a41c-5170cde58e77-kube-api-access-chbbc\") pod \"coredns-674b8bbfcf-g82l4\" (UID: \"35dc1916-7bd9-4084-a41c-5170cde58e77\") " pod="kube-system/coredns-674b8bbfcf-g82l4" Apr 13 20:12:50.055262 kubelet[2684]: I0413 20:12:50.055197 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/35dc1916-7bd9-4084-a41c-5170cde58e77-config-volume\") pod \"coredns-674b8bbfcf-g82l4\" (UID: \"35dc1916-7bd9-4084-a41c-5170cde58e77\") " pod="kube-system/coredns-674b8bbfcf-g82l4" Apr 13 20:12:50.055398 kubelet[2684]: I0413 20:12:50.055274 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79d5f\" (UniqueName: \"kubernetes.io/projected/3407fb89-864a-4bc3-b222-526630d6ed82-kube-api-access-79d5f\") pod \"coredns-674b8bbfcf-hfqhq\" (UID: \"3407fb89-864a-4bc3-b222-526630d6ed82\") " pod="kube-system/coredns-674b8bbfcf-hfqhq" Apr 13 20:12:50.260037 containerd[1505]: time="2026-04-13T20:12:50.259830018Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-hfqhq,Uid:3407fb89-864a-4bc3-b222-526630d6ed82,Namespace:kube-system,Attempt:0,}" Apr 13 20:12:50.276100 containerd[1505]: time="2026-04-13T20:12:50.275605619Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-g82l4,Uid:35dc1916-7bd9-4084-a41c-5170cde58e77,Namespace:kube-system,Attempt:0,}"
Apr 13 20:12:50.514006 kubelet[2684]: I0413 20:12:50.512933 2684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-6nc56" podStartSLOduration=6.864470213 podStartE2EDuration="19.512910517s" podCreationTimestamp="2026-04-13 20:12:31 +0000 UTC" firstStartedPulling="2026-04-13 20:12:32.260357598 +0000 UTC m=+7.196972911" lastFinishedPulling="2026-04-13 20:12:44.908797893 +0000 UTC m=+19.845413215" observedRunningTime="2026-04-13 20:12:50.509257315 +0000 UTC m=+25.445872661" watchObservedRunningTime="2026-04-13 20:12:50.512910517 +0000 UTC m=+25.449525835"
Apr 13 20:12:52.600814 systemd-networkd[1414]: cilium_host: Link UP
Apr 13 20:12:52.601402 systemd-networkd[1414]: cilium_net: Link UP
Apr 13 20:12:52.601409 systemd-networkd[1414]: cilium_net: Gained carrier
Apr 13 20:12:52.602692 systemd-networkd[1414]: cilium_host: Gained carrier
Apr 13 20:12:52.603045 systemd-networkd[1414]: cilium_host: Gained IPv6LL
Apr 13 20:12:52.656227 systemd-networkd[1414]: cilium_net: Gained IPv6LL
Apr 13 20:12:52.779541 systemd-networkd[1414]: cilium_vxlan: Link UP
Apr 13 20:12:52.779556 systemd-networkd[1414]: cilium_vxlan: Gained carrier
Apr 13 20:12:53.362769 kernel: NET: Registered PF_ALG protocol family
Apr 13 20:12:53.923858 systemd-networkd[1414]: cilium_vxlan: Gained IPv6LL
Apr 13 20:12:54.435361 systemd-networkd[1414]: lxc_health: Link UP
Apr 13 20:12:54.446642 systemd-networkd[1414]: lxc_health: Gained carrier
Apr 13 20:12:54.895200 systemd-networkd[1414]: lxc662bf34418e1: Link UP
Apr 13 20:12:54.902551 kernel: eth0: renamed from tmp16b54
Apr 13 20:12:54.907806 systemd-networkd[1414]: lxc662bf34418e1: Gained carrier
Apr 13 20:12:54.918149 systemd-networkd[1414]: lxc18dc3f1c44b9: Link UP
Apr 13 20:12:54.925533 kernel: eth0: renamed from tmp0583b
Apr 13 20:12:54.943013 systemd-networkd[1414]: lxc18dc3f1c44b9: Gained carrier
Apr 13 20:12:55.972705 systemd-networkd[1414]: lxc_health: Gained IPv6LL
Apr 13 20:12:56.547949 systemd-networkd[1414]: lxc662bf34418e1: Gained IPv6LL
Apr 13 20:12:56.803736 systemd-networkd[1414]: lxc18dc3f1c44b9: Gained IPv6LL
Apr 13 20:13:00.809072 containerd[1505]: time="2026-04-13T20:13:00.808218150Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 13 20:13:00.809072 containerd[1505]: time="2026-04-13T20:13:00.808368019Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 13 20:13:00.809072 containerd[1505]: time="2026-04-13T20:13:00.808392540Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 13 20:13:00.810402 containerd[1505]: time="2026-04-13T20:13:00.809287647Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 13 20:13:00.833525 containerd[1505]: time="2026-04-13T20:13:00.833008626Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 13 20:13:00.834665 containerd[1505]: time="2026-04-13T20:13:00.833118914Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 13 20:13:00.835015 containerd[1505]: time="2026-04-13T20:13:00.834874787Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 13 20:13:00.836511 containerd[1505]: time="2026-04-13T20:13:00.835935673Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 13 20:13:00.902808 systemd[1]: Started cri-containerd-0583bcb63b00b5771424ba8f76b05fc45f7d31fb82c15a877cd900922f0dc56a.scope - libcontainer container 0583bcb63b00b5771424ba8f76b05fc45f7d31fb82c15a877cd900922f0dc56a.
Apr 13 20:13:00.911808 systemd[1]: Started cri-containerd-16b545de7223fb9817c4c7b70ff53c6f55e9761ddb316492f992c2a5ba23f558.scope - libcontainer container 16b545de7223fb9817c4c7b70ff53c6f55e9761ddb316492f992c2a5ba23f558.
Apr 13 20:13:01.025099 containerd[1505]: time="2026-04-13T20:13:01.024106989Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-g82l4,Uid:35dc1916-7bd9-4084-a41c-5170cde58e77,Namespace:kube-system,Attempt:0,} returns sandbox id \"0583bcb63b00b5771424ba8f76b05fc45f7d31fb82c15a877cd900922f0dc56a\""
Apr 13 20:13:01.039877 containerd[1505]: time="2026-04-13T20:13:01.039024426Z" level=info msg="CreateContainer within sandbox \"0583bcb63b00b5771424ba8f76b05fc45f7d31fb82c15a877cd900922f0dc56a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Apr 13 20:13:01.061765 containerd[1505]: time="2026-04-13T20:13:01.061150955Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-hfqhq,Uid:3407fb89-864a-4bc3-b222-526630d6ed82,Namespace:kube-system,Attempt:0,} returns sandbox id \"16b545de7223fb9817c4c7b70ff53c6f55e9761ddb316492f992c2a5ba23f558\""
Apr 13 20:13:01.077960 containerd[1505]: time="2026-04-13T20:13:01.077222165Z" level=info msg="CreateContainer within sandbox \"16b545de7223fb9817c4c7b70ff53c6f55e9761ddb316492f992c2a5ba23f558\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Apr 13 20:13:01.090058 containerd[1505]: time="2026-04-13T20:13:01.089915012Z" level=info msg="CreateContainer within sandbox \"0583bcb63b00b5771424ba8f76b05fc45f7d31fb82c15a877cd900922f0dc56a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"117198024d3d76c20635cf9be429d7826011c0f5f2f5d058fe043955065ecaaa\""
Apr 13 20:13:01.091309 containerd[1505]: time="2026-04-13T20:13:01.090905998Z" level=info msg="StartContainer for \"117198024d3d76c20635cf9be429d7826011c0f5f2f5d058fe043955065ecaaa\""
Apr 13 20:13:01.104136 containerd[1505]: time="2026-04-13T20:13:01.104082904Z" level=info msg="CreateContainer within sandbox \"16b545de7223fb9817c4c7b70ff53c6f55e9761ddb316492f992c2a5ba23f558\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4b7f03eceab6fd46874937b64977ad54f89bdc80c83846cfd1324c26f837614c\""
Apr 13 20:13:01.106062 containerd[1505]: time="2026-04-13T20:13:01.105603933Z" level=info msg="StartContainer for \"4b7f03eceab6fd46874937b64977ad54f89bdc80c83846cfd1324c26f837614c\""
Apr 13 20:13:01.142711 systemd[1]: Started cri-containerd-117198024d3d76c20635cf9be429d7826011c0f5f2f5d058fe043955065ecaaa.scope - libcontainer container 117198024d3d76c20635cf9be429d7826011c0f5f2f5d058fe043955065ecaaa.
Apr 13 20:13:01.159761 systemd[1]: Started cri-containerd-4b7f03eceab6fd46874937b64977ad54f89bdc80c83846cfd1324c26f837614c.scope - libcontainer container 4b7f03eceab6fd46874937b64977ad54f89bdc80c83846cfd1324c26f837614c.
Apr 13 20:13:01.221106 containerd[1505]: time="2026-04-13T20:13:01.220707019Z" level=info msg="StartContainer for \"117198024d3d76c20635cf9be429d7826011c0f5f2f5d058fe043955065ecaaa\" returns successfully"
Apr 13 20:13:01.224529 containerd[1505]: time="2026-04-13T20:13:01.224163389Z" level=info msg="StartContainer for \"4b7f03eceab6fd46874937b64977ad54f89bdc80c83846cfd1324c26f837614c\" returns successfully"
Apr 13 20:13:01.567084 kubelet[2684]: I0413 20:13:01.566310 2684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-g82l4" podStartSLOduration=30.56624458 podStartE2EDuration="30.56624458s" podCreationTimestamp="2026-04-13 20:12:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 20:13:01.549812439 +0000 UTC m=+36.486427779" watchObservedRunningTime="2026-04-13 20:13:01.56624458 +0000 UTC m=+36.502859910"
Apr 13 20:13:01.825191 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1994622990.mount: Deactivated successfully.
Apr 13 20:13:02.552173 kubelet[2684]: I0413 20:13:02.552094 2684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-hfqhq" podStartSLOduration=31.552068879 podStartE2EDuration="31.552068879s" podCreationTimestamp="2026-04-13 20:12:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 20:13:01.567794701 +0000 UTC m=+36.504410014" watchObservedRunningTime="2026-04-13 20:13:02.552068879 +0000 UTC m=+37.488684196"
Apr 13 20:13:23.615874 systemd[1]: Started sshd@9-10.244.14.202:22-4.175.71.9:33490.service - OpenSSH per-connection server daemon (4.175.71.9:33490).
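The kubelet `pod_startup_latency_tracker` records above carry their own arithmetic: `podStartE2EDuration` is roughly `observedRunningTime - podCreationTimestamp` (the reported value comes from a monotonic clock, so it can differ from the wall-clock difference by a few milliseconds). A minimal sketch of recomputing it from one of the log lines above; the `field`/`parse_ts` helpers are illustrative, not part of kubelet:

```python
import re
from datetime import datetime

# Fields abridged from the cilium-6nc56 record in the log above.
record = (
    'podStartE2EDuration="19.512910517s" '
    'podCreationTimestamp="2026-04-13 20:12:31 +0000 UTC" '
    'observedRunningTime="2026-04-13 20:12:50.509257315 +0000 UTC m=+25.445872661"'
)

def field(name: str, rec: str) -> str:
    """Pull a quoted key="value" field out of a klog-style record."""
    return re.search(rf'{name}="([^"]+)"', rec).group(1)

def parse_ts(value: str) -> datetime:
    """Parse kubelet's 'YYYY-MM-DD HH:MM:SS[.fff...] +0000 UTC' timestamps:
    drop the monotonic 'm=+...' suffix and the trailing 'UTC' word, and trim
    nanoseconds down to the microseconds strptime's %f accepts."""
    value = value.split(" m=+")[0].removesuffix(" UTC")
    value = re.sub(r"(\.\d{6})\d+", r"\1", value)
    fmt = "%Y-%m-%d %H:%M:%S.%f %z" if "." in value else "%Y-%m-%d %H:%M:%S %z"
    return datetime.strptime(value, fmt)

created = parse_ts(field("podCreationTimestamp", record))
running = parse_ts(field("observedRunningTime", record))
e2e = (running - created).total_seconds()
print(f"recomputed E2E duration: {e2e:.3f}s")  # close to the reported 19.512910517s
```

The coredns records that follow show the degenerate case: `firstStartedPulling`/`lastFinishedPulling` are the zero time `0001-01-01 00:00:00 +0000 UTC` because no image pull was needed.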
Apr 13 20:13:23.788833 sshd[4074]: Accepted publickey for core from 4.175.71.9 port 33490 ssh2: RSA SHA256:gQAv84QRWsNzQzQGG1TKteeG+h41qyFSg3i58ChsR9o
Apr 13 20:13:23.791899 sshd[4074]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:13:23.800672 systemd-logind[1487]: New session 12 of user core.
Apr 13 20:13:23.812696 systemd[1]: Started session-12.scope - Session 12 of User core.
Apr 13 20:13:24.505343 sshd[4074]: pam_unix(sshd:session): session closed for user core
Apr 13 20:13:24.513660 systemd[1]: sshd@9-10.244.14.202:22-4.175.71.9:33490.service: Deactivated successfully.
Apr 13 20:13:24.517343 systemd[1]: session-12.scope: Deactivated successfully.
Apr 13 20:13:24.519149 systemd-logind[1487]: Session 12 logged out. Waiting for processes to exit.
Apr 13 20:13:24.520761 systemd-logind[1487]: Removed session 12.
Apr 13 20:13:29.537794 systemd[1]: Started sshd@10-10.244.14.202:22-4.175.71.9:45216.service - OpenSSH per-connection server daemon (4.175.71.9:45216).
Apr 13 20:13:29.683266 sshd[4091]: Accepted publickey for core from 4.175.71.9 port 45216 ssh2: RSA SHA256:gQAv84QRWsNzQzQGG1TKteeG+h41qyFSg3i58ChsR9o
Apr 13 20:13:29.685845 sshd[4091]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:13:29.693996 systemd-logind[1487]: New session 13 of user core.
Apr 13 20:13:29.705722 systemd[1]: Started session-13.scope - Session 13 of User core.
Apr 13 20:13:29.916440 sshd[4091]: pam_unix(sshd:session): session closed for user core
Apr 13 20:13:29.922460 systemd[1]: sshd@10-10.244.14.202:22-4.175.71.9:45216.service: Deactivated successfully.
Apr 13 20:13:29.925875 systemd[1]: session-13.scope: Deactivated successfully.
Apr 13 20:13:29.927393 systemd-logind[1487]: Session 13 logged out. Waiting for processes to exit.
Apr 13 20:13:29.929282 systemd-logind[1487]: Removed session 13.
Apr 13 20:13:34.958171 systemd[1]: Started sshd@11-10.244.14.202:22-4.175.71.9:45218.service - OpenSSH per-connection server daemon (4.175.71.9:45218).
Apr 13 20:13:35.118386 sshd[4107]: Accepted publickey for core from 4.175.71.9 port 45218 ssh2: RSA SHA256:gQAv84QRWsNzQzQGG1TKteeG+h41qyFSg3i58ChsR9o
Apr 13 20:13:35.121412 sshd[4107]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:13:35.131269 systemd-logind[1487]: New session 14 of user core.
Apr 13 20:13:35.138746 systemd[1]: Started session-14.scope - Session 14 of User core.
Apr 13 20:13:35.331870 sshd[4107]: pam_unix(sshd:session): session closed for user core
Apr 13 20:13:35.336617 systemd[1]: sshd@11-10.244.14.202:22-4.175.71.9:45218.service: Deactivated successfully.
Apr 13 20:13:35.339283 systemd[1]: session-14.scope: Deactivated successfully.
Apr 13 20:13:35.341252 systemd-logind[1487]: Session 14 logged out. Waiting for processes to exit.
Apr 13 20:13:35.343237 systemd-logind[1487]: Removed session 14.
Apr 13 20:13:40.371966 systemd[1]: Started sshd@12-10.244.14.202:22-4.175.71.9:45930.service - OpenSSH per-connection server daemon (4.175.71.9:45930).
Apr 13 20:13:40.513525 sshd[4120]: Accepted publickey for core from 4.175.71.9 port 45930 ssh2: RSA SHA256:gQAv84QRWsNzQzQGG1TKteeG+h41qyFSg3i58ChsR9o
Apr 13 20:13:40.515313 sshd[4120]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:13:40.525564 systemd-logind[1487]: New session 15 of user core.
Apr 13 20:13:40.529846 systemd[1]: Started session-15.scope - Session 15 of User core.
Apr 13 20:13:40.755128 sshd[4120]: pam_unix(sshd:session): session closed for user core
Apr 13 20:13:40.761182 systemd-logind[1487]: Session 15 logged out. Waiting for processes to exit.
Apr 13 20:13:40.762350 systemd[1]: sshd@12-10.244.14.202:22-4.175.71.9:45930.service: Deactivated successfully.
Apr 13 20:13:40.766578 systemd[1]: session-15.scope: Deactivated successfully.
Apr 13 20:13:40.768547 systemd-logind[1487]: Removed session 15.
Apr 13 20:13:40.791845 systemd[1]: Started sshd@13-10.244.14.202:22-4.175.71.9:45942.service - OpenSSH per-connection server daemon (4.175.71.9:45942).
Apr 13 20:13:40.922213 sshd[4134]: Accepted publickey for core from 4.175.71.9 port 45942 ssh2: RSA SHA256:gQAv84QRWsNzQzQGG1TKteeG+h41qyFSg3i58ChsR9o
Apr 13 20:13:40.924406 sshd[4134]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:13:40.931567 systemd-logind[1487]: New session 16 of user core.
Apr 13 20:13:40.937700 systemd[1]: Started session-16.scope - Session 16 of User core.
Apr 13 20:13:41.194779 sshd[4134]: pam_unix(sshd:session): session closed for user core
Apr 13 20:13:41.212892 systemd[1]: sshd@13-10.244.14.202:22-4.175.71.9:45942.service: Deactivated successfully.
Apr 13 20:13:41.218179 systemd[1]: session-16.scope: Deactivated successfully.
Apr 13 20:13:41.223282 systemd-logind[1487]: Session 16 logged out. Waiting for processes to exit.
Apr 13 20:13:41.243624 systemd[1]: Started sshd@14-10.244.14.202:22-4.175.71.9:45944.service - OpenSSH per-connection server daemon (4.175.71.9:45944).
Apr 13 20:13:41.244937 systemd-logind[1487]: Removed session 16.
Apr 13 20:13:41.392078 sshd[4144]: Accepted publickey for core from 4.175.71.9 port 45944 ssh2: RSA SHA256:gQAv84QRWsNzQzQGG1TKteeG+h41qyFSg3i58ChsR9o
Apr 13 20:13:41.394783 sshd[4144]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:13:41.402557 systemd-logind[1487]: New session 17 of user core.
Apr 13 20:13:41.410799 systemd[1]: Started session-17.scope - Session 17 of User core.
Apr 13 20:13:41.606769 sshd[4144]: pam_unix(sshd:session): session closed for user core
Apr 13 20:13:41.613544 systemd[1]: sshd@14-10.244.14.202:22-4.175.71.9:45944.service: Deactivated successfully.
Apr 13 20:13:41.616156 systemd[1]: session-17.scope: Deactivated successfully.
Apr 13 20:13:41.617255 systemd-logind[1487]: Session 17 logged out. Waiting for processes to exit.
Apr 13 20:13:41.618924 systemd-logind[1487]: Removed session 17.
Apr 13 20:13:46.643965 systemd[1]: Started sshd@15-10.244.14.202:22-4.175.71.9:55566.service - OpenSSH per-connection server daemon (4.175.71.9:55566).
Apr 13 20:13:46.773299 sshd[4158]: Accepted publickey for core from 4.175.71.9 port 55566 ssh2: RSA SHA256:gQAv84QRWsNzQzQGG1TKteeG+h41qyFSg3i58ChsR9o
Apr 13 20:13:46.775824 sshd[4158]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:13:46.784563 systemd-logind[1487]: New session 18 of user core.
Apr 13 20:13:46.794828 systemd[1]: Started session-18.scope - Session 18 of User core.
Apr 13 20:13:46.995019 sshd[4158]: pam_unix(sshd:session): session closed for user core
Apr 13 20:13:47.000050 systemd[1]: sshd@15-10.244.14.202:22-4.175.71.9:55566.service: Deactivated successfully.
Apr 13 20:13:47.003835 systemd[1]: session-18.scope: Deactivated successfully.
Apr 13 20:13:47.005216 systemd-logind[1487]: Session 18 logged out. Waiting for processes to exit.
Apr 13 20:13:47.006706 systemd-logind[1487]: Removed session 18.
Apr 13 20:13:52.022306 systemd[1]: Started sshd@16-10.244.14.202:22-4.175.71.9:55576.service - OpenSSH per-connection server daemon (4.175.71.9:55576).
Apr 13 20:13:52.166550 sshd[4170]: Accepted publickey for core from 4.175.71.9 port 55576 ssh2: RSA SHA256:gQAv84QRWsNzQzQGG1TKteeG+h41qyFSg3i58ChsR9o
Apr 13 20:13:52.169631 sshd[4170]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:13:52.176900 systemd-logind[1487]: New session 19 of user core.
Apr 13 20:13:52.186785 systemd[1]: Started session-19.scope - Session 19 of User core.
Apr 13 20:13:52.414336 sshd[4170]: pam_unix(sshd:session): session closed for user core
Apr 13 20:13:52.420585 systemd-logind[1487]: Session 19 logged out. Waiting for processes to exit.
Apr 13 20:13:52.421810 systemd[1]: sshd@16-10.244.14.202:22-4.175.71.9:55576.service: Deactivated successfully.
Apr 13 20:13:52.424994 systemd[1]: session-19.scope: Deactivated successfully.
Apr 13 20:13:52.426939 systemd-logind[1487]: Removed session 19.
Apr 13 20:13:57.448835 systemd[1]: Started sshd@17-10.244.14.202:22-4.175.71.9:41054.service - OpenSSH per-connection server daemon (4.175.71.9:41054).
Apr 13 20:13:57.610621 sshd[4183]: Accepted publickey for core from 4.175.71.9 port 41054 ssh2: RSA SHA256:gQAv84QRWsNzQzQGG1TKteeG+h41qyFSg3i58ChsR9o
Apr 13 20:13:57.613387 sshd[4183]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:13:57.621257 systemd-logind[1487]: New session 20 of user core.
Apr 13 20:13:57.631699 systemd[1]: Started session-20.scope - Session 20 of User core.
Apr 13 20:13:57.817980 sshd[4183]: pam_unix(sshd:session): session closed for user core
Apr 13 20:13:57.822856 systemd[1]: sshd@17-10.244.14.202:22-4.175.71.9:41054.service: Deactivated successfully.
Apr 13 20:13:57.825376 systemd[1]: session-20.scope: Deactivated successfully.
Apr 13 20:13:57.827666 systemd-logind[1487]: Session 20 logged out. Waiting for processes to exit.
Apr 13 20:13:57.829731 systemd-logind[1487]: Removed session 20.
Apr 13 20:13:57.847837 systemd[1]: Started sshd@18-10.244.14.202:22-4.175.71.9:41070.service - OpenSSH per-connection server daemon (4.175.71.9:41070).
Apr 13 20:13:57.985535 sshd[4196]: Accepted publickey for core from 4.175.71.9 port 41070 ssh2: RSA SHA256:gQAv84QRWsNzQzQGG1TKteeG+h41qyFSg3i58ChsR9o
Apr 13 20:13:57.986915 sshd[4196]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:13:57.994282 systemd-logind[1487]: New session 21 of user core.
Apr 13 20:13:57.999693 systemd[1]: Started session-21.scope - Session 21 of User core.
Apr 13 20:13:58.568695 sshd[4196]: pam_unix(sshd:session): session closed for user core
Apr 13 20:13:58.574670 systemd[1]: sshd@18-10.244.14.202:22-4.175.71.9:41070.service: Deactivated successfully.
Apr 13 20:13:58.578202 systemd[1]: session-21.scope: Deactivated successfully.
Apr 13 20:13:58.579535 systemd-logind[1487]: Session 21 logged out. Waiting for processes to exit.
Apr 13 20:13:58.581147 systemd-logind[1487]: Removed session 21.
Apr 13 20:13:58.602864 systemd[1]: Started sshd@19-10.244.14.202:22-4.175.71.9:41086.service - OpenSSH per-connection server daemon (4.175.71.9:41086).
Apr 13 20:13:58.753433 sshd[4207]: Accepted publickey for core from 4.175.71.9 port 41086 ssh2: RSA SHA256:gQAv84QRWsNzQzQGG1TKteeG+h41qyFSg3i58ChsR9o
Apr 13 20:13:58.755559 sshd[4207]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:13:58.763771 systemd-logind[1487]: New session 22 of user core.
Apr 13 20:13:58.774779 systemd[1]: Started session-22.scope - Session 22 of User core.
Apr 13 20:13:59.685997 sshd[4207]: pam_unix(sshd:session): session closed for user core
Apr 13 20:13:59.714403 systemd[1]: sshd@19-10.244.14.202:22-4.175.71.9:41086.service: Deactivated successfully.
Apr 13 20:13:59.721033 systemd[1]: session-22.scope: Deactivated successfully.
Apr 13 20:13:59.724859 systemd-logind[1487]: Session 22 logged out. Waiting for processes to exit.
Apr 13 20:13:59.736034 systemd[1]: Started sshd@20-10.244.14.202:22-4.175.71.9:41094.service - OpenSSH per-connection server daemon (4.175.71.9:41094).
Apr 13 20:13:59.738879 systemd-logind[1487]: Removed session 22.
Apr 13 20:13:59.880339 sshd[4224]: Accepted publickey for core from 4.175.71.9 port 41094 ssh2: RSA SHA256:gQAv84QRWsNzQzQGG1TKteeG+h41qyFSg3i58ChsR9o
Apr 13 20:13:59.882843 sshd[4224]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:13:59.890648 systemd-logind[1487]: New session 23 of user core.
Apr 13 20:13:59.898882 systemd[1]: Started session-23.scope - Session 23 of User core.
Apr 13 20:14:00.317781 sshd[4224]: pam_unix(sshd:session): session closed for user core
Apr 13 20:14:00.323608 systemd[1]: sshd@20-10.244.14.202:22-4.175.71.9:41094.service: Deactivated successfully.
Apr 13 20:14:00.329371 systemd[1]: session-23.scope: Deactivated successfully.
Apr 13 20:14:00.332900 systemd-logind[1487]: Session 23 logged out. Waiting for processes to exit.
Apr 13 20:14:00.335091 systemd-logind[1487]: Removed session 23.
Apr 13 20:14:00.361921 systemd[1]: Started sshd@21-10.244.14.202:22-4.175.71.9:41104.service - OpenSSH per-connection server daemon (4.175.71.9:41104).
Apr 13 20:14:00.492241 sshd[4234]: Accepted publickey for core from 4.175.71.9 port 41104 ssh2: RSA SHA256:gQAv84QRWsNzQzQGG1TKteeG+h41qyFSg3i58ChsR9o
Apr 13 20:14:00.496157 sshd[4234]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:14:00.506567 systemd-logind[1487]: New session 24 of user core.
Apr 13 20:14:00.513724 systemd[1]: Started session-24.scope - Session 24 of User core.
Apr 13 20:14:00.725298 sshd[4234]: pam_unix(sshd:session): session closed for user core
Apr 13 20:14:00.732017 systemd-logind[1487]: Session 24 logged out. Waiting for processes to exit.
Apr 13 20:14:00.732875 systemd[1]: sshd@21-10.244.14.202:22-4.175.71.9:41104.service: Deactivated successfully.
Apr 13 20:14:00.735612 systemd[1]: session-24.scope: Deactivated successfully.
Apr 13 20:14:00.737186 systemd-logind[1487]: Removed session 24.
Apr 13 20:14:05.756844 systemd[1]: Started sshd@22-10.244.14.202:22-4.175.71.9:43558.service - OpenSSH per-connection server daemon (4.175.71.9:43558).
Apr 13 20:14:05.899671 sshd[4249]: Accepted publickey for core from 4.175.71.9 port 43558 ssh2: RSA SHA256:gQAv84QRWsNzQzQGG1TKteeG+h41qyFSg3i58ChsR9o
Apr 13 20:14:05.901694 sshd[4249]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:14:05.909991 systemd-logind[1487]: New session 25 of user core.
Apr 13 20:14:05.920881 systemd[1]: Started session-25.scope - Session 25 of User core.
Apr 13 20:14:06.107825 sshd[4249]: pam_unix(sshd:session): session closed for user core
Apr 13 20:14:06.111842 systemd[1]: sshd@22-10.244.14.202:22-4.175.71.9:43558.service: Deactivated successfully.
Apr 13 20:14:06.116097 systemd[1]: session-25.scope: Deactivated successfully.
Apr 13 20:14:06.118711 systemd-logind[1487]: Session 25 logged out. Waiting for processes to exit.
Apr 13 20:14:06.120299 systemd-logind[1487]: Removed session 25.
Apr 13 20:14:11.139858 systemd[1]: Started sshd@23-10.244.14.202:22-4.175.71.9:43572.service - OpenSSH per-connection server daemon (4.175.71.9:43572).
Apr 13 20:14:11.272513 sshd[4264]: Accepted publickey for core from 4.175.71.9 port 43572 ssh2: RSA SHA256:gQAv84QRWsNzQzQGG1TKteeG+h41qyFSg3i58ChsR9o
Apr 13 20:14:11.275000 sshd[4264]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:14:11.282135 systemd-logind[1487]: New session 26 of user core.
Apr 13 20:14:11.296744 systemd[1]: Started session-26.scope - Session 26 of User core.
Apr 13 20:14:11.483219 sshd[4264]: pam_unix(sshd:session): session closed for user core
Apr 13 20:14:11.489576 systemd[1]: sshd@23-10.244.14.202:22-4.175.71.9:43572.service: Deactivated successfully.
Apr 13 20:14:11.493151 systemd[1]: session-26.scope: Deactivated successfully.
Apr 13 20:14:11.495287 systemd-logind[1487]: Session 26 logged out. Waiting for processes to exit.
Apr 13 20:14:11.497099 systemd-logind[1487]: Removed session 26.
Apr 13 20:14:16.509746 systemd[1]: Started sshd@24-10.244.14.202:22-4.175.71.9:57100.service - OpenSSH per-connection server daemon (4.175.71.9:57100).
Apr 13 20:14:16.648727 sshd[4277]: Accepted publickey for core from 4.175.71.9 port 57100 ssh2: RSA SHA256:gQAv84QRWsNzQzQGG1TKteeG+h41qyFSg3i58ChsR9o
Apr 13 20:14:16.651605 sshd[4277]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:14:16.658576 systemd-logind[1487]: New session 27 of user core.
Apr 13 20:14:16.671783 systemd[1]: Started session-27.scope - Session 27 of User core.
Apr 13 20:14:16.861186 sshd[4277]: pam_unix(sshd:session): session closed for user core
Apr 13 20:14:16.867249 systemd[1]: sshd@24-10.244.14.202:22-4.175.71.9:57100.service: Deactivated successfully.
Apr 13 20:14:16.871954 systemd[1]: session-27.scope: Deactivated successfully.
Apr 13 20:14:16.873355 systemd-logind[1487]: Session 27 logged out. Waiting for processes to exit.
Apr 13 20:14:16.876515 systemd-logind[1487]: Removed session 27.
Apr 13 20:14:16.895100 systemd[1]: Started sshd@25-10.244.14.202:22-4.175.71.9:57106.service - OpenSSH per-connection server daemon (4.175.71.9:57106).
Apr 13 20:14:17.029009 sshd[4290]: Accepted publickey for core from 4.175.71.9 port 57106 ssh2: RSA SHA256:gQAv84QRWsNzQzQGG1TKteeG+h41qyFSg3i58ChsR9o
Apr 13 20:14:17.030057 sshd[4290]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:14:17.037600 systemd-logind[1487]: New session 28 of user core.
Apr 13 20:14:17.042940 systemd[1]: Started session-28.scope - Session 28 of User core.
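Each SSH connection above produces the same fixed record sequence (sshd `Accepted publickey` → `New session N` → `session closed` → `Removed session N`), so session lifetimes can be recovered by pairing the logind records per session ID. A minimal sketch over a few lines from the log above; syslog-style timestamps carry no year, so one is assumed for parsing:

```python
import re
from datetime import datetime

# A few systemd-logind records copied from the log above.
LINES = [
    "Apr 13 20:13:23.800672 systemd-logind[1487]: New session 12 of user core.",
    "Apr 13 20:13:24.520761 systemd-logind[1487]: Removed session 12.",
    "Apr 13 20:13:29.693996 systemd-logind[1487]: New session 13 of user core.",
    "Apr 13 20:13:29.929282 systemd-logind[1487]: Removed session 13.",
]

PATTERN = re.compile(
    r"^(?P<ts>\w+ \d+ [\d:.]+) systemd-logind\[\d+\]: "
    r"(?P<event>New|Removed) session (?P<sid>\d+)"
)

def parse(line: str):
    """Split one logind record into (timestamp, event, session id)."""
    m = PATTERN.match(line)
    # Prepend an assumed year, since the log prefix omits it.
    ts = datetime.strptime("2026 " + m.group("ts"), "%Y %b %d %H:%M:%S.%f")
    return ts, m.group("event"), m.group("sid")

opened: dict[str, datetime] = {}
durations: dict[str, float] = {}
for line in LINES:
    ts, event, sid = parse(line)
    if event == "New":
        opened[sid] = ts
    else:
        durations[sid] = (ts - opened.pop(sid)).total_seconds()

for sid, secs in sorted(durations.items()):
    print(f"session {sid}: {secs:.3f}s")
```

On the records above this yields sub-second sessions (e.g. session 12 lasts about 0.72 s), consistent with the short-lived per-connection daemons systemd starts and stops around each login.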
Apr 13 20:14:19.279235 containerd[1505]: time="2026-04-13T20:14:19.279016405Z" level=info msg="StopContainer for \"f13851e76c0188bcf471f5de70254c4db69bbb90943a1bac222d4b1f693f5559\" with timeout 30 (s)"
Apr 13 20:14:19.284334 containerd[1505]: time="2026-04-13T20:14:19.283217702Z" level=info msg="Stop container \"f13851e76c0188bcf471f5de70254c4db69bbb90943a1bac222d4b1f693f5559\" with signal terminated"
Apr 13 20:14:19.312128 systemd[1]: cri-containerd-f13851e76c0188bcf471f5de70254c4db69bbb90943a1bac222d4b1f693f5559.scope: Deactivated successfully.
Apr 13 20:14:19.364589 containerd[1505]: time="2026-04-13T20:14:19.364467564Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 13 20:14:19.373372 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f13851e76c0188bcf471f5de70254c4db69bbb90943a1bac222d4b1f693f5559-rootfs.mount: Deactivated successfully.
Apr 13 20:14:19.375629 containerd[1505]: time="2026-04-13T20:14:19.375591359Z" level=info msg="StopContainer for \"7dc35dc20a20619cbc118cdfa31bf42332aa40cc45afb92d5971d0991e9e0edd\" with timeout 2 (s)"
Apr 13 20:14:19.376655 containerd[1505]: time="2026-04-13T20:14:19.376179033Z" level=info msg="Stop container \"7dc35dc20a20619cbc118cdfa31bf42332aa40cc45afb92d5971d0991e9e0edd\" with signal terminated"
Apr 13 20:14:19.379664 containerd[1505]: time="2026-04-13T20:14:19.379433461Z" level=info msg="shim disconnected" id=f13851e76c0188bcf471f5de70254c4db69bbb90943a1bac222d4b1f693f5559 namespace=k8s.io
Apr 13 20:14:19.379664 containerd[1505]: time="2026-04-13T20:14:19.379584103Z" level=warning msg="cleaning up after shim disconnected" id=f13851e76c0188bcf471f5de70254c4db69bbb90943a1bac222d4b1f693f5559 namespace=k8s.io
Apr 13 20:14:19.379664 containerd[1505]: time="2026-04-13T20:14:19.379608365Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 13 20:14:19.393764 systemd-networkd[1414]: lxc_health: Link DOWN
Apr 13 20:14:19.395098 systemd-networkd[1414]: lxc_health: Lost carrier
Apr 13 20:14:19.418993 systemd[1]: cri-containerd-7dc35dc20a20619cbc118cdfa31bf42332aa40cc45afb92d5971d0991e9e0edd.scope: Deactivated successfully.
Apr 13 20:14:19.419427 systemd[1]: cri-containerd-7dc35dc20a20619cbc118cdfa31bf42332aa40cc45afb92d5971d0991e9e0edd.scope: Consumed 10.402s CPU time.
Apr 13 20:14:19.433692 containerd[1505]: time="2026-04-13T20:14:19.433394490Z" level=info msg="StopContainer for \"f13851e76c0188bcf471f5de70254c4db69bbb90943a1bac222d4b1f693f5559\" returns successfully"
Apr 13 20:14:19.434783 containerd[1505]: time="2026-04-13T20:14:19.434586579Z" level=info msg="StopPodSandbox for \"8abf037fe15edb0adc10cf852339384ee54f46553ff91364734008c6cd2456f7\""
Apr 13 20:14:19.434783 containerd[1505]: time="2026-04-13T20:14:19.434640953Z" level=info msg="Container to stop \"f13851e76c0188bcf471f5de70254c4db69bbb90943a1bac222d4b1f693f5559\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 13 20:14:19.438252 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8abf037fe15edb0adc10cf852339384ee54f46553ff91364734008c6cd2456f7-shm.mount: Deactivated successfully.
Apr 13 20:14:19.449242 systemd[1]: cri-containerd-8abf037fe15edb0adc10cf852339384ee54f46553ff91364734008c6cd2456f7.scope: Deactivated successfully.
Apr 13 20:14:19.469454 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7dc35dc20a20619cbc118cdfa31bf42332aa40cc45afb92d5971d0991e9e0edd-rootfs.mount: Deactivated successfully.
Apr 13 20:14:19.481173 containerd[1505]: time="2026-04-13T20:14:19.481011098Z" level=info msg="shim disconnected" id=7dc35dc20a20619cbc118cdfa31bf42332aa40cc45afb92d5971d0991e9e0edd namespace=k8s.io
Apr 13 20:14:19.481548 containerd[1505]: time="2026-04-13T20:14:19.481517280Z" level=warning msg="cleaning up after shim disconnected" id=7dc35dc20a20619cbc118cdfa31bf42332aa40cc45afb92d5971d0991e9e0edd namespace=k8s.io
Apr 13 20:14:19.481714 containerd[1505]: time="2026-04-13T20:14:19.481689074Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 13 20:14:19.511380 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8abf037fe15edb0adc10cf852339384ee54f46553ff91364734008c6cd2456f7-rootfs.mount: Deactivated successfully.
Apr 13 20:14:19.518740 containerd[1505]: time="2026-04-13T20:14:19.518544744Z" level=info msg="shim disconnected" id=8abf037fe15edb0adc10cf852339384ee54f46553ff91364734008c6cd2456f7 namespace=k8s.io
Apr 13 20:14:19.519188 containerd[1505]: time="2026-04-13T20:14:19.519013342Z" level=warning msg="cleaning up after shim disconnected" id=8abf037fe15edb0adc10cf852339384ee54f46553ff91364734008c6cd2456f7 namespace=k8s.io
Apr 13 20:14:19.519188 containerd[1505]: time="2026-04-13T20:14:19.519081771Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 13 20:14:19.530019 containerd[1505]: time="2026-04-13T20:14:19.529462733Z" level=info msg="StopContainer for \"7dc35dc20a20619cbc118cdfa31bf42332aa40cc45afb92d5971d0991e9e0edd\" returns successfully"
Apr 13 20:14:19.531784 containerd[1505]: time="2026-04-13T20:14:19.531735167Z" level=info msg="StopPodSandbox for \"5d52afba6a2ed50a161a074ee2f4d8263d7126395e2fd5188e10f46838112f45\""
Apr 13 20:14:19.532529 containerd[1505]: time="2026-04-13T20:14:19.532457114Z" level=info msg="Container to stop \"a4162b0349c514c1251aff98b48b0adcf3990a7a93574590340fb2f356540a99\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 13 20:14:19.532681 containerd[1505]: time="2026-04-13T20:14:19.532651916Z" level=info msg="Container to stop \"4dd764fcda1928dac2cd0ec5caca6497df3d4ede1c476f9ecea9d46d7cc41eb4\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 13 20:14:19.532840 containerd[1505]: time="2026-04-13T20:14:19.532798630Z" level=info msg="Container to stop \"fc260800cbac89b3b02e723f5a6c6a30828e6b229eab3c627d0be4237c28ffdd\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 13 20:14:19.533256 containerd[1505]: time="2026-04-13T20:14:19.533105910Z" level=info msg="Container to stop \"7dc35dc20a20619cbc118cdfa31bf42332aa40cc45afb92d5971d0991e9e0edd\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 13 20:14:19.533256 containerd[1505]: time="2026-04-13T20:14:19.533134460Z" level=info msg="Container to stop \"99681b0bc23f723a935b73c31be2027348880798345510ddb70c1b131cf262f4\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 13 20:14:19.546060 systemd[1]: cri-containerd-5d52afba6a2ed50a161a074ee2f4d8263d7126395e2fd5188e10f46838112f45.scope: Deactivated successfully.
Apr 13 20:14:19.559624 containerd[1505]: time="2026-04-13T20:14:19.559379845Z" level=info msg="TearDown network for sandbox \"8abf037fe15edb0adc10cf852339384ee54f46553ff91364734008c6cd2456f7\" successfully"
Apr 13 20:14:19.559624 containerd[1505]: time="2026-04-13T20:14:19.559448504Z" level=info msg="StopPodSandbox for \"8abf037fe15edb0adc10cf852339384ee54f46553ff91364734008c6cd2456f7\" returns successfully"
Apr 13 20:14:19.593988 containerd[1505]: time="2026-04-13T20:14:19.593841181Z" level=info msg="shim disconnected" id=5d52afba6a2ed50a161a074ee2f4d8263d7126395e2fd5188e10f46838112f45 namespace=k8s.io
Apr 13 20:14:19.593988 containerd[1505]: time="2026-04-13T20:14:19.593914041Z" level=warning msg="cleaning up after shim disconnected" id=5d52afba6a2ed50a161a074ee2f4d8263d7126395e2fd5188e10f46838112f45 namespace=k8s.io
Apr 13 20:14:19.595228 containerd[1505]: time="2026-04-13T20:14:19.593929168Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 13 20:14:19.613800 containerd[1505]: time="2026-04-13T20:14:19.613740740Z" level=info msg="TearDown network for sandbox \"5d52afba6a2ed50a161a074ee2f4d8263d7126395e2fd5188e10f46838112f45\" successfully"
Apr 13 20:14:19.613800 containerd[1505]: time="2026-04-13T20:14:19.613792619Z" level=info msg="StopPodSandbox for \"5d52afba6a2ed50a161a074ee2f4d8263d7126395e2fd5188e10f46838112f45\" returns successfully"
Apr 13 20:14:19.721906 kubelet[2684]: I0413 20:14:19.721513 2684 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d7658e91-f05e-4ffb-b887-48a8f6089db3-bpf-maps\") pod \"d7658e91-f05e-4ffb-b887-48a8f6089db3\" (UID: \"d7658e91-f05e-4ffb-b887-48a8f6089db3\") "
Apr 13 20:14:19.721906 kubelet[2684]: I0413 20:14:19.721606 2684 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d7658e91-f05e-4ffb-b887-48a8f6089db3-clustermesh-secrets\") pod \"d7658e91-f05e-4ffb-b887-48a8f6089db3\" (UID: \"d7658e91-f05e-4ffb-b887-48a8f6089db3\") "
Apr 13 20:14:19.721906 kubelet[2684]: I0413 20:14:19.721639 2684 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d7658e91-f05e-4ffb-b887-48a8f6089db3-cilium-cgroup\") pod \"d7658e91-f05e-4ffb-b887-48a8f6089db3\" (UID: \"d7658e91-f05e-4ffb-b887-48a8f6089db3\") "
Apr 13 20:14:19.721906 kubelet[2684]: I0413 20:14:19.721679 2684 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d7658e91-f05e-4ffb-b887-48a8f6089db3-cilium-run\") pod \"d7658e91-f05e-4ffb-b887-48a8f6089db3\" (UID: \"d7658e91-f05e-4ffb-b887-48a8f6089db3\") "
Apr 13 20:14:19.721906 kubelet[2684]: I0413 20:14:19.721710 2684 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rt5mb\" (UniqueName: \"kubernetes.io/projected/d7658e91-f05e-4ffb-b887-48a8f6089db3-kube-api-access-rt5mb\") pod \"d7658e91-f05e-4ffb-b887-48a8f6089db3\" (UID: \"d7658e91-f05e-4ffb-b887-48a8f6089db3\") "
Apr 13 20:14:19.721906 kubelet[2684]: I0413 20:14:19.721740 2684 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hv7q6\" (UniqueName: \"kubernetes.io/projected/04216b05-b39a-4b02-82dd-60f52e548622-kube-api-access-hv7q6\") pod \"04216b05-b39a-4b02-82dd-60f52e548622\" (UID: \"04216b05-b39a-4b02-82dd-60f52e548622\") "
Apr 13 20:14:19.722938 kubelet[2684]: I0413 20:14:19.721772 2684 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d7658e91-f05e-4ffb-b887-48a8f6089db3-hubble-tls\") pod \"d7658e91-f05e-4ffb-b887-48a8f6089db3\" (UID: \"d7658e91-f05e-4ffb-b887-48a8f6089db3\") "
Apr 13 20:14:19.722938 kubelet[2684]: I0413 20:14:19.721810 2684 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d7658e91-f05e-4ffb-b887-48a8f6089db3-hostproc\") pod \"d7658e91-f05e-4ffb-b887-48a8f6089db3\" (UID: \"d7658e91-f05e-4ffb-b887-48a8f6089db3\") "
Apr 13 20:14:19.722938 kubelet[2684]: I0413 20:14:19.721843 2684 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d7658e91-f05e-4ffb-b887-48a8f6089db3-host-proc-sys-net\") pod \"d7658e91-f05e-4ffb-b887-48a8f6089db3\" (UID: \"d7658e91-f05e-4ffb-b887-48a8f6089db3\") "
Apr 13 20:14:19.722938 kubelet[2684]: I0413 20:14:19.721879 2684 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d7658e91-f05e-4ffb-b887-48a8f6089db3-xtables-lock\") pod \"d7658e91-f05e-4ffb-b887-48a8f6089db3\" (UID: \"d7658e91-f05e-4ffb-b887-48a8f6089db3\") "
Apr 13 20:14:19.722938 kubelet[2684]: I0413 20:14:19.721905 2684 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d7658e91-f05e-4ffb-b887-48a8f6089db3-etc-cni-netd\") pod \"d7658e91-f05e-4ffb-b887-48a8f6089db3\" (UID: \"d7658e91-f05e-4ffb-b887-48a8f6089db3\") "
Apr 13 20:14:19.722938 kubelet[2684]: I0413 20:14:19.721944 2684 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/04216b05-b39a-4b02-82dd-60f52e548622-cilium-config-path\") pod \"04216b05-b39a-4b02-82dd-60f52e548622\" (UID: \"04216b05-b39a-4b02-82dd-60f52e548622\") "
Apr 13 20:14:19.723231 kubelet[2684]: I0413 20:14:19.721971 2684 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d7658e91-f05e-4ffb-b887-48a8f6089db3-lib-modules\") pod \"d7658e91-f05e-4ffb-b887-48a8f6089db3\" (UID: \"d7658e91-f05e-4ffb-b887-48a8f6089db3\") "
Apr 13 20:14:19.723231 kubelet[2684]: I0413 20:14:19.722004 2684 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d7658e91-f05e-4ffb-b887-48a8f6089db3-host-proc-sys-kernel\") pod \"d7658e91-f05e-4ffb-b887-48a8f6089db3\" (UID: \"d7658e91-f05e-4ffb-b887-48a8f6089db3\") "
Apr 13 20:14:19.723231 kubelet[2684]: I0413 20:14:19.722044 2684 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d7658e91-f05e-4ffb-b887-48a8f6089db3-cni-path\") pod \"d7658e91-f05e-4ffb-b887-48a8f6089db3\" (UID: \"d7658e91-f05e-4ffb-b887-48a8f6089db3\") "
Apr 13 20:14:19.723231 kubelet[2684]: I0413 20:14:19.722079 2684 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d7658e91-f05e-4ffb-b887-48a8f6089db3-cilium-config-path\") pod \"d7658e91-f05e-4ffb-b887-48a8f6089db3\" (UID: \"d7658e91-f05e-4ffb-b887-48a8f6089db3\") "
Apr 13 20:14:19.727328 kubelet[2684]: I0413 20:14:19.723512 2684 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d7658e91-f05e-4ffb-b887-48a8f6089db3-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "d7658e91-f05e-4ffb-b887-48a8f6089db3" (UID: "d7658e91-f05e-4ffb-b887-48a8f6089db3"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 13 20:14:19.727328 kubelet[2684]: I0413 20:14:19.725607 2684 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d7658e91-f05e-4ffb-b887-48a8f6089db3-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "d7658e91-f05e-4ffb-b887-48a8f6089db3" (UID: "d7658e91-f05e-4ffb-b887-48a8f6089db3"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 13 20:14:19.727328 kubelet[2684]: I0413 20:14:19.725644 2684 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d7658e91-f05e-4ffb-b887-48a8f6089db3-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "d7658e91-f05e-4ffb-b887-48a8f6089db3" (UID: "d7658e91-f05e-4ffb-b887-48a8f6089db3"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 13 20:14:19.727328 kubelet[2684]: I0413 20:14:19.725673 2684 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d7658e91-f05e-4ffb-b887-48a8f6089db3-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "d7658e91-f05e-4ffb-b887-48a8f6089db3" (UID: "d7658e91-f05e-4ffb-b887-48a8f6089db3"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 13 20:14:19.729817 kubelet[2684]: I0413 20:14:19.728630 2684 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d7658e91-f05e-4ffb-b887-48a8f6089db3-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "d7658e91-f05e-4ffb-b887-48a8f6089db3" (UID: "d7658e91-f05e-4ffb-b887-48a8f6089db3"). InnerVolumeSpecName "xtables-lock".
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 13 20:14:19.729817 kubelet[2684]: I0413 20:14:19.728686 2684 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d7658e91-f05e-4ffb-b887-48a8f6089db3-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "d7658e91-f05e-4ffb-b887-48a8f6089db3" (UID: "d7658e91-f05e-4ffb-b887-48a8f6089db3"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 13 20:14:19.731189 kubelet[2684]: I0413 20:14:19.731148 2684 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d7658e91-f05e-4ffb-b887-48a8f6089db3-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "d7658e91-f05e-4ffb-b887-48a8f6089db3" (UID: "d7658e91-f05e-4ffb-b887-48a8f6089db3"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 13 20:14:19.734613 kubelet[2684]: I0413 20:14:19.731326 2684 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d7658e91-f05e-4ffb-b887-48a8f6089db3-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "d7658e91-f05e-4ffb-b887-48a8f6089db3" (UID: "d7658e91-f05e-4ffb-b887-48a8f6089db3"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 13 20:14:19.734613 kubelet[2684]: I0413 20:14:19.731359 2684 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d7658e91-f05e-4ffb-b887-48a8f6089db3-cni-path" (OuterVolumeSpecName: "cni-path") pod "d7658e91-f05e-4ffb-b887-48a8f6089db3" (UID: "d7658e91-f05e-4ffb-b887-48a8f6089db3"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 13 20:14:19.734911 kubelet[2684]: I0413 20:14:19.734871 2684 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d7658e91-f05e-4ffb-b887-48a8f6089db3-hostproc" (OuterVolumeSpecName: "hostproc") pod "d7658e91-f05e-4ffb-b887-48a8f6089db3" (UID: "d7658e91-f05e-4ffb-b887-48a8f6089db3"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 13 20:14:19.739128 kubelet[2684]: I0413 20:14:19.739085 2684 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7658e91-f05e-4ffb-b887-48a8f6089db3-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "d7658e91-f05e-4ffb-b887-48a8f6089db3" (UID: "d7658e91-f05e-4ffb-b887-48a8f6089db3"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 13 20:14:19.740838 kubelet[2684]: I0413 20:14:19.740800 2684 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7658e91-f05e-4ffb-b887-48a8f6089db3-kube-api-access-rt5mb" (OuterVolumeSpecName: "kube-api-access-rt5mb") pod "d7658e91-f05e-4ffb-b887-48a8f6089db3" (UID: "d7658e91-f05e-4ffb-b887-48a8f6089db3"). InnerVolumeSpecName "kube-api-access-rt5mb". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 13 20:14:19.743015 kubelet[2684]: I0413 20:14:19.742984 2684 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7658e91-f05e-4ffb-b887-48a8f6089db3-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "d7658e91-f05e-4ffb-b887-48a8f6089db3" (UID: "d7658e91-f05e-4ffb-b887-48a8f6089db3"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 13 20:14:19.744582 kubelet[2684]: I0413 20:14:19.744430 2684 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/04216b05-b39a-4b02-82dd-60f52e548622-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "04216b05-b39a-4b02-82dd-60f52e548622" (UID: "04216b05-b39a-4b02-82dd-60f52e548622"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 13 20:14:19.752572 kubelet[2684]: I0413 20:14:19.751531 2684 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04216b05-b39a-4b02-82dd-60f52e548622-kube-api-access-hv7q6" (OuterVolumeSpecName: "kube-api-access-hv7q6") pod "04216b05-b39a-4b02-82dd-60f52e548622" (UID: "04216b05-b39a-4b02-82dd-60f52e548622"). InnerVolumeSpecName "kube-api-access-hv7q6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 13 20:14:19.753209 kubelet[2684]: I0413 20:14:19.753123 2684 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7658e91-f05e-4ffb-b887-48a8f6089db3-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "d7658e91-f05e-4ffb-b887-48a8f6089db3" (UID: "d7658e91-f05e-4ffb-b887-48a8f6089db3"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 13 20:14:19.759505 kubelet[2684]: I0413 20:14:19.759122 2684 scope.go:117] "RemoveContainer" containerID="7dc35dc20a20619cbc118cdfa31bf42332aa40cc45afb92d5971d0991e9e0edd" Apr 13 20:14:19.761392 containerd[1505]: time="2026-04-13T20:14:19.761351660Z" level=info msg="RemoveContainer for \"7dc35dc20a20619cbc118cdfa31bf42332aa40cc45afb92d5971d0991e9e0edd\"" Apr 13 20:14:19.766091 systemd[1]: Removed slice kubepods-besteffort-pod04216b05_b39a_4b02_82dd_60f52e548622.slice - libcontainer container kubepods-besteffort-pod04216b05_b39a_4b02_82dd_60f52e548622.slice. 
Apr 13 20:14:19.770628 containerd[1505]: time="2026-04-13T20:14:19.770525073Z" level=info msg="RemoveContainer for \"7dc35dc20a20619cbc118cdfa31bf42332aa40cc45afb92d5971d0991e9e0edd\" returns successfully" Apr 13 20:14:19.771262 kubelet[2684]: I0413 20:14:19.771125 2684 scope.go:117] "RemoveContainer" containerID="99681b0bc23f723a935b73c31be2027348880798345510ddb70c1b131cf262f4" Apr 13 20:14:19.773378 containerd[1505]: time="2026-04-13T20:14:19.773345635Z" level=info msg="RemoveContainer for \"99681b0bc23f723a935b73c31be2027348880798345510ddb70c1b131cf262f4\"" Apr 13 20:14:19.777898 containerd[1505]: time="2026-04-13T20:14:19.777606349Z" level=info msg="RemoveContainer for \"99681b0bc23f723a935b73c31be2027348880798345510ddb70c1b131cf262f4\" returns successfully" Apr 13 20:14:19.777983 kubelet[2684]: I0413 20:14:19.777805 2684 scope.go:117] "RemoveContainer" containerID="fc260800cbac89b3b02e723f5a6c6a30828e6b229eab3c627d0be4237c28ffdd" Apr 13 20:14:19.780569 containerd[1505]: time="2026-04-13T20:14:19.780221397Z" level=info msg="RemoveContainer for \"fc260800cbac89b3b02e723f5a6c6a30828e6b229eab3c627d0be4237c28ffdd\"" Apr 13 20:14:19.788212 containerd[1505]: time="2026-04-13T20:14:19.787379520Z" level=info msg="RemoveContainer for \"fc260800cbac89b3b02e723f5a6c6a30828e6b229eab3c627d0be4237c28ffdd\" returns successfully" Apr 13 20:14:19.788397 kubelet[2684]: I0413 20:14:19.787719 2684 scope.go:117] "RemoveContainer" containerID="4dd764fcda1928dac2cd0ec5caca6497df3d4ede1c476f9ecea9d46d7cc41eb4" Apr 13 20:14:19.791143 containerd[1505]: time="2026-04-13T20:14:19.790505019Z" level=info msg="RemoveContainer for \"4dd764fcda1928dac2cd0ec5caca6497df3d4ede1c476f9ecea9d46d7cc41eb4\"" Apr 13 20:14:19.794489 containerd[1505]: time="2026-04-13T20:14:19.794355848Z" level=info msg="RemoveContainer for \"4dd764fcda1928dac2cd0ec5caca6497df3d4ede1c476f9ecea9d46d7cc41eb4\" returns successfully" Apr 13 20:14:19.794707 kubelet[2684]: I0413 20:14:19.794675 2684 scope.go:117] 
"RemoveContainer" containerID="a4162b0349c514c1251aff98b48b0adcf3990a7a93574590340fb2f356540a99" Apr 13 20:14:19.795942 containerd[1505]: time="2026-04-13T20:14:19.795909301Z" level=info msg="RemoveContainer for \"a4162b0349c514c1251aff98b48b0adcf3990a7a93574590340fb2f356540a99\"" Apr 13 20:14:19.799012 containerd[1505]: time="2026-04-13T20:14:19.798971581Z" level=info msg="RemoveContainer for \"a4162b0349c514c1251aff98b48b0adcf3990a7a93574590340fb2f356540a99\" returns successfully" Apr 13 20:14:19.799391 kubelet[2684]: I0413 20:14:19.799204 2684 scope.go:117] "RemoveContainer" containerID="7dc35dc20a20619cbc118cdfa31bf42332aa40cc45afb92d5971d0991e9e0edd" Apr 13 20:14:19.810944 containerd[1505]: time="2026-04-13T20:14:19.802709841Z" level=error msg="ContainerStatus for \"7dc35dc20a20619cbc118cdfa31bf42332aa40cc45afb92d5971d0991e9e0edd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7dc35dc20a20619cbc118cdfa31bf42332aa40cc45afb92d5971d0991e9e0edd\": not found" Apr 13 20:14:19.811797 kubelet[2684]: E0413 20:14:19.811373 2684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7dc35dc20a20619cbc118cdfa31bf42332aa40cc45afb92d5971d0991e9e0edd\": not found" containerID="7dc35dc20a20619cbc118cdfa31bf42332aa40cc45afb92d5971d0991e9e0edd" Apr 13 20:14:19.817732 kubelet[2684]: I0413 20:14:19.812595 2684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7dc35dc20a20619cbc118cdfa31bf42332aa40cc45afb92d5971d0991e9e0edd"} err="failed to get container status \"7dc35dc20a20619cbc118cdfa31bf42332aa40cc45afb92d5971d0991e9e0edd\": rpc error: code = NotFound desc = an error occurred when try to find container \"7dc35dc20a20619cbc118cdfa31bf42332aa40cc45afb92d5971d0991e9e0edd\": not found" Apr 13 20:14:19.817732 kubelet[2684]: I0413 20:14:19.817687 2684 scope.go:117] "RemoveContainer" 
containerID="99681b0bc23f723a935b73c31be2027348880798345510ddb70c1b131cf262f4" Apr 13 20:14:19.818246 containerd[1505]: time="2026-04-13T20:14:19.818201245Z" level=error msg="ContainerStatus for \"99681b0bc23f723a935b73c31be2027348880798345510ddb70c1b131cf262f4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"99681b0bc23f723a935b73c31be2027348880798345510ddb70c1b131cf262f4\": not found" Apr 13 20:14:19.818632 kubelet[2684]: E0413 20:14:19.818457 2684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"99681b0bc23f723a935b73c31be2027348880798345510ddb70c1b131cf262f4\": not found" containerID="99681b0bc23f723a935b73c31be2027348880798345510ddb70c1b131cf262f4" Apr 13 20:14:19.818632 kubelet[2684]: I0413 20:14:19.818529 2684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"99681b0bc23f723a935b73c31be2027348880798345510ddb70c1b131cf262f4"} err="failed to get container status \"99681b0bc23f723a935b73c31be2027348880798345510ddb70c1b131cf262f4\": rpc error: code = NotFound desc = an error occurred when try to find container \"99681b0bc23f723a935b73c31be2027348880798345510ddb70c1b131cf262f4\": not found" Apr 13 20:14:19.818632 kubelet[2684]: I0413 20:14:19.818553 2684 scope.go:117] "RemoveContainer" containerID="fc260800cbac89b3b02e723f5a6c6a30828e6b229eab3c627d0be4237c28ffdd" Apr 13 20:14:19.819098 containerd[1505]: time="2026-04-13T20:14:19.819032891Z" level=error msg="ContainerStatus for \"fc260800cbac89b3b02e723f5a6c6a30828e6b229eab3c627d0be4237c28ffdd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fc260800cbac89b3b02e723f5a6c6a30828e6b229eab3c627d0be4237c28ffdd\": not found" Apr 13 20:14:19.819392 kubelet[2684]: E0413 20:14:19.819251 2684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc 
= an error occurred when try to find container \"fc260800cbac89b3b02e723f5a6c6a30828e6b229eab3c627d0be4237c28ffdd\": not found" containerID="fc260800cbac89b3b02e723f5a6c6a30828e6b229eab3c627d0be4237c28ffdd" Apr 13 20:14:19.819392 kubelet[2684]: I0413 20:14:19.819284 2684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fc260800cbac89b3b02e723f5a6c6a30828e6b229eab3c627d0be4237c28ffdd"} err="failed to get container status \"fc260800cbac89b3b02e723f5a6c6a30828e6b229eab3c627d0be4237c28ffdd\": rpc error: code = NotFound desc = an error occurred when try to find container \"fc260800cbac89b3b02e723f5a6c6a30828e6b229eab3c627d0be4237c28ffdd\": not found" Apr 13 20:14:19.819392 kubelet[2684]: I0413 20:14:19.819307 2684 scope.go:117] "RemoveContainer" containerID="4dd764fcda1928dac2cd0ec5caca6497df3d4ede1c476f9ecea9d46d7cc41eb4" Apr 13 20:14:19.819606 containerd[1505]: time="2026-04-13T20:14:19.819553856Z" level=error msg="ContainerStatus for \"4dd764fcda1928dac2cd0ec5caca6497df3d4ede1c476f9ecea9d46d7cc41eb4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4dd764fcda1928dac2cd0ec5caca6497df3d4ede1c476f9ecea9d46d7cc41eb4\": not found" Apr 13 20:14:19.820060 kubelet[2684]: E0413 20:14:19.819832 2684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4dd764fcda1928dac2cd0ec5caca6497df3d4ede1c476f9ecea9d46d7cc41eb4\": not found" containerID="4dd764fcda1928dac2cd0ec5caca6497df3d4ede1c476f9ecea9d46d7cc41eb4" Apr 13 20:14:19.820060 kubelet[2684]: I0413 20:14:19.819898 2684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4dd764fcda1928dac2cd0ec5caca6497df3d4ede1c476f9ecea9d46d7cc41eb4"} err="failed to get container status \"4dd764fcda1928dac2cd0ec5caca6497df3d4ede1c476f9ecea9d46d7cc41eb4\": rpc error: code = NotFound desc = an error occurred 
when try to find container \"4dd764fcda1928dac2cd0ec5caca6497df3d4ede1c476f9ecea9d46d7cc41eb4\": not found" Apr 13 20:14:19.820060 kubelet[2684]: I0413 20:14:19.819921 2684 scope.go:117] "RemoveContainer" containerID="a4162b0349c514c1251aff98b48b0adcf3990a7a93574590340fb2f356540a99" Apr 13 20:14:19.820610 kubelet[2684]: E0413 20:14:19.820339 2684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a4162b0349c514c1251aff98b48b0adcf3990a7a93574590340fb2f356540a99\": not found" containerID="a4162b0349c514c1251aff98b48b0adcf3990a7a93574590340fb2f356540a99" Apr 13 20:14:19.820610 kubelet[2684]: I0413 20:14:19.820375 2684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a4162b0349c514c1251aff98b48b0adcf3990a7a93574590340fb2f356540a99"} err="failed to get container status \"a4162b0349c514c1251aff98b48b0adcf3990a7a93574590340fb2f356540a99\": rpc error: code = NotFound desc = an error occurred when try to find container \"a4162b0349c514c1251aff98b48b0adcf3990a7a93574590340fb2f356540a99\": not found" Apr 13 20:14:19.820610 kubelet[2684]: I0413 20:14:19.820395 2684 scope.go:117] "RemoveContainer" containerID="f13851e76c0188bcf471f5de70254c4db69bbb90943a1bac222d4b1f693f5559" Apr 13 20:14:19.820751 containerd[1505]: time="2026-04-13T20:14:19.820183165Z" level=error msg="ContainerStatus for \"a4162b0349c514c1251aff98b48b0adcf3990a7a93574590340fb2f356540a99\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a4162b0349c514c1251aff98b48b0adcf3990a7a93574590340fb2f356540a99\": not found" Apr 13 20:14:19.822036 containerd[1505]: time="2026-04-13T20:14:19.822005225Z" level=info msg="RemoveContainer for \"f13851e76c0188bcf471f5de70254c4db69bbb90943a1bac222d4b1f693f5559\"" Apr 13 20:14:19.823135 kubelet[2684]: I0413 20:14:19.823110 2684 reconciler_common.go:299] "Volume detached for volume 
\"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d7658e91-f05e-4ffb-b887-48a8f6089db3-clustermesh-secrets\") on node \"srv-pcqx3.gb1.brightbox.com\" DevicePath \"\"" Apr 13 20:14:19.823607 kubelet[2684]: I0413 20:14:19.823266 2684 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d7658e91-f05e-4ffb-b887-48a8f6089db3-cilium-cgroup\") on node \"srv-pcqx3.gb1.brightbox.com\" DevicePath \"\"" Apr 13 20:14:19.823607 kubelet[2684]: I0413 20:14:19.823293 2684 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d7658e91-f05e-4ffb-b887-48a8f6089db3-cilium-run\") on node \"srv-pcqx3.gb1.brightbox.com\" DevicePath \"\"" Apr 13 20:14:19.823607 kubelet[2684]: I0413 20:14:19.823322 2684 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rt5mb\" (UniqueName: \"kubernetes.io/projected/d7658e91-f05e-4ffb-b887-48a8f6089db3-kube-api-access-rt5mb\") on node \"srv-pcqx3.gb1.brightbox.com\" DevicePath \"\"" Apr 13 20:14:19.823607 kubelet[2684]: I0413 20:14:19.823342 2684 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hv7q6\" (UniqueName: \"kubernetes.io/projected/04216b05-b39a-4b02-82dd-60f52e548622-kube-api-access-hv7q6\") on node \"srv-pcqx3.gb1.brightbox.com\" DevicePath \"\"" Apr 13 20:14:19.823607 kubelet[2684]: I0413 20:14:19.823358 2684 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d7658e91-f05e-4ffb-b887-48a8f6089db3-hubble-tls\") on node \"srv-pcqx3.gb1.brightbox.com\" DevicePath \"\"" Apr 13 20:14:19.823607 kubelet[2684]: I0413 20:14:19.823373 2684 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d7658e91-f05e-4ffb-b887-48a8f6089db3-hostproc\") on node \"srv-pcqx3.gb1.brightbox.com\" DevicePath \"\"" Apr 13 20:14:19.823607 kubelet[2684]: I0413 20:14:19.823386 2684 reconciler_common.go:299] 
"Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d7658e91-f05e-4ffb-b887-48a8f6089db3-host-proc-sys-net\") on node \"srv-pcqx3.gb1.brightbox.com\" DevicePath \"\"" Apr 13 20:14:19.823607 kubelet[2684]: I0413 20:14:19.823400 2684 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d7658e91-f05e-4ffb-b887-48a8f6089db3-xtables-lock\") on node \"srv-pcqx3.gb1.brightbox.com\" DevicePath \"\"" Apr 13 20:14:19.824037 kubelet[2684]: I0413 20:14:19.823429 2684 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d7658e91-f05e-4ffb-b887-48a8f6089db3-etc-cni-netd\") on node \"srv-pcqx3.gb1.brightbox.com\" DevicePath \"\"" Apr 13 20:14:19.824037 kubelet[2684]: I0413 20:14:19.823445 2684 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/04216b05-b39a-4b02-82dd-60f52e548622-cilium-config-path\") on node \"srv-pcqx3.gb1.brightbox.com\" DevicePath \"\"" Apr 13 20:14:19.824037 kubelet[2684]: I0413 20:14:19.823460 2684 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d7658e91-f05e-4ffb-b887-48a8f6089db3-lib-modules\") on node \"srv-pcqx3.gb1.brightbox.com\" DevicePath \"\"" Apr 13 20:14:19.824037 kubelet[2684]: I0413 20:14:19.823510 2684 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d7658e91-f05e-4ffb-b887-48a8f6089db3-host-proc-sys-kernel\") on node \"srv-pcqx3.gb1.brightbox.com\" DevicePath \"\"" Apr 13 20:14:19.824037 kubelet[2684]: I0413 20:14:19.823534 2684 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d7658e91-f05e-4ffb-b887-48a8f6089db3-cni-path\") on node \"srv-pcqx3.gb1.brightbox.com\" DevicePath \"\"" Apr 13 20:14:19.824037 kubelet[2684]: I0413 20:14:19.823550 2684 
reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d7658e91-f05e-4ffb-b887-48a8f6089db3-cilium-config-path\") on node \"srv-pcqx3.gb1.brightbox.com\" DevicePath \"\"" Apr 13 20:14:19.824037 kubelet[2684]: I0413 20:14:19.823565 2684 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d7658e91-f05e-4ffb-b887-48a8f6089db3-bpf-maps\") on node \"srv-pcqx3.gb1.brightbox.com\" DevicePath \"\"" Apr 13 20:14:19.831986 containerd[1505]: time="2026-04-13T20:14:19.831943802Z" level=info msg="RemoveContainer for \"f13851e76c0188bcf471f5de70254c4db69bbb90943a1bac222d4b1f693f5559\" returns successfully" Apr 13 20:14:19.832679 kubelet[2684]: I0413 20:14:19.832390 2684 scope.go:117] "RemoveContainer" containerID="f13851e76c0188bcf471f5de70254c4db69bbb90943a1bac222d4b1f693f5559" Apr 13 20:14:19.833049 containerd[1505]: time="2026-04-13T20:14:19.832906989Z" level=error msg="ContainerStatus for \"f13851e76c0188bcf471f5de70254c4db69bbb90943a1bac222d4b1f693f5559\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f13851e76c0188bcf471f5de70254c4db69bbb90943a1bac222d4b1f693f5559\": not found" Apr 13 20:14:19.833168 kubelet[2684]: E0413 20:14:19.833089 2684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f13851e76c0188bcf471f5de70254c4db69bbb90943a1bac222d4b1f693f5559\": not found" containerID="f13851e76c0188bcf471f5de70254c4db69bbb90943a1bac222d4b1f693f5559" Apr 13 20:14:19.833168 kubelet[2684]: I0413 20:14:19.833131 2684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f13851e76c0188bcf471f5de70254c4db69bbb90943a1bac222d4b1f693f5559"} err="failed to get container status \"f13851e76c0188bcf471f5de70254c4db69bbb90943a1bac222d4b1f693f5559\": rpc error: code = NotFound desc = an error 
occurred when try to find container \"f13851e76c0188bcf471f5de70254c4db69bbb90943a1bac222d4b1f693f5559\": not found" Apr 13 20:14:20.055547 systemd[1]: Removed slice kubepods-burstable-podd7658e91_f05e_4ffb_b887_48a8f6089db3.slice - libcontainer container kubepods-burstable-podd7658e91_f05e_4ffb_b887_48a8f6089db3.slice. Apr 13 20:14:20.055690 systemd[1]: kubepods-burstable-podd7658e91_f05e_4ffb_b887_48a8f6089db3.slice: Consumed 10.536s CPU time. Apr 13 20:14:20.325391 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5d52afba6a2ed50a161a074ee2f4d8263d7126395e2fd5188e10f46838112f45-rootfs.mount: Deactivated successfully. Apr 13 20:14:20.325606 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5d52afba6a2ed50a161a074ee2f4d8263d7126395e2fd5188e10f46838112f45-shm.mount: Deactivated successfully. Apr 13 20:14:20.325736 systemd[1]: var-lib-kubelet-pods-04216b05\x2db39a\x2d4b02\x2d82dd\x2d60f52e548622-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhv7q6.mount: Deactivated successfully. Apr 13 20:14:20.325852 systemd[1]: var-lib-kubelet-pods-d7658e91\x2df05e\x2d4ffb\x2db887\x2d48a8f6089db3-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drt5mb.mount: Deactivated successfully. Apr 13 20:14:20.325989 systemd[1]: var-lib-kubelet-pods-d7658e91\x2df05e\x2d4ffb\x2db887\x2d48a8f6089db3-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Apr 13 20:14:20.326120 systemd[1]: var-lib-kubelet-pods-d7658e91\x2df05e\x2d4ffb\x2db887\x2d48a8f6089db3-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Apr 13 20:14:20.441960 kubelet[2684]: E0413 20:14:20.441883 2684 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 13 20:14:21.206428 sshd[4290]: pam_unix(sshd:session): session closed for user core Apr 13 20:14:21.212630 systemd[1]: sshd@25-10.244.14.202:22-4.175.71.9:57106.service: Deactivated successfully. Apr 13 20:14:21.215159 systemd[1]: session-28.scope: Deactivated successfully. Apr 13 20:14:21.215738 systemd[1]: session-28.scope: Consumed 1.450s CPU time. Apr 13 20:14:21.216638 systemd-logind[1487]: Session 28 logged out. Waiting for processes to exit. Apr 13 20:14:21.218424 systemd-logind[1487]: Removed session 28. Apr 13 20:14:21.239847 systemd[1]: Started sshd@26-10.244.14.202:22-4.175.71.9:57114.service - OpenSSH per-connection server daemon (4.175.71.9:57114). Apr 13 20:14:21.332775 kubelet[2684]: I0413 20:14:21.332286 2684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04216b05-b39a-4b02-82dd-60f52e548622" path="/var/lib/kubelet/pods/04216b05-b39a-4b02-82dd-60f52e548622/volumes" Apr 13 20:14:21.333954 kubelet[2684]: I0413 20:14:21.333912 2684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7658e91-f05e-4ffb-b887-48a8f6089db3" path="/var/lib/kubelet/pods/d7658e91-f05e-4ffb-b887-48a8f6089db3/volumes" Apr 13 20:14:21.379647 sshd[4453]: Accepted publickey for core from 4.175.71.9 port 57114 ssh2: RSA SHA256:gQAv84QRWsNzQzQGG1TKteeG+h41qyFSg3i58ChsR9o Apr 13 20:14:21.382494 sshd[4453]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:14:21.390264 systemd-logind[1487]: New session 29 of user core. Apr 13 20:14:21.402711 systemd[1]: Started session-29.scope - Session 29 of User core. 
Apr 13 20:14:22.439115 sshd[4453]: pam_unix(sshd:session): session closed for user core
Apr 13 20:14:22.446689 systemd[1]: sshd@26-10.244.14.202:22-4.175.71.9:57114.service: Deactivated successfully.
Apr 13 20:14:22.453042 systemd[1]: session-29.scope: Deactivated successfully.
Apr 13 20:14:22.455589 systemd-logind[1487]: Session 29 logged out. Waiting for processes to exit.
Apr 13 20:14:22.486994 systemd[1]: Started sshd@27-10.244.14.202:22-4.175.71.9:57130.service - OpenSSH per-connection server daemon (4.175.71.9:57130).
Apr 13 20:14:22.493572 systemd-logind[1487]: Removed session 29.
Apr 13 20:14:22.558171 systemd[1]: Created slice kubepods-burstable-pod331af24f_c271_4af6_96c9_2a22e95336e8.slice - libcontainer container kubepods-burstable-pod331af24f_c271_4af6_96c9_2a22e95336e8.slice.
Apr 13 20:14:22.651372 kubelet[2684]: I0413 20:14:22.651288 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/331af24f-c271-4af6-96c9-2a22e95336e8-cilium-ipsec-secrets\") pod \"cilium-tp49v\" (UID: \"331af24f-c271-4af6-96c9-2a22e95336e8\") " pod="kube-system/cilium-tp49v"
Apr 13 20:14:22.651372 kubelet[2684]: I0413 20:14:22.651377 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/331af24f-c271-4af6-96c9-2a22e95336e8-cilium-config-path\") pod \"cilium-tp49v\" (UID: \"331af24f-c271-4af6-96c9-2a22e95336e8\") " pod="kube-system/cilium-tp49v"
Apr 13 20:14:22.652015 kubelet[2684]: I0413 20:14:22.651432 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/331af24f-c271-4af6-96c9-2a22e95336e8-clustermesh-secrets\") pod \"cilium-tp49v\" (UID: \"331af24f-c271-4af6-96c9-2a22e95336e8\") " pod="kube-system/cilium-tp49v"
Apr 13 20:14:22.652015 kubelet[2684]: I0413 20:14:22.651462 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/331af24f-c271-4af6-96c9-2a22e95336e8-hubble-tls\") pod \"cilium-tp49v\" (UID: \"331af24f-c271-4af6-96c9-2a22e95336e8\") " pod="kube-system/cilium-tp49v"
Apr 13 20:14:22.652015 kubelet[2684]: I0413 20:14:22.651524 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/331af24f-c271-4af6-96c9-2a22e95336e8-cilium-run\") pod \"cilium-tp49v\" (UID: \"331af24f-c271-4af6-96c9-2a22e95336e8\") " pod="kube-system/cilium-tp49v"
Apr 13 20:14:22.652015 kubelet[2684]: I0413 20:14:22.651569 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/331af24f-c271-4af6-96c9-2a22e95336e8-bpf-maps\") pod \"cilium-tp49v\" (UID: \"331af24f-c271-4af6-96c9-2a22e95336e8\") " pod="kube-system/cilium-tp49v"
Apr 13 20:14:22.652015 kubelet[2684]: I0413 20:14:22.651607 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4m5l\" (UniqueName: \"kubernetes.io/projected/331af24f-c271-4af6-96c9-2a22e95336e8-kube-api-access-w4m5l\") pod \"cilium-tp49v\" (UID: \"331af24f-c271-4af6-96c9-2a22e95336e8\") " pod="kube-system/cilium-tp49v"
Apr 13 20:14:22.652015 kubelet[2684]: I0413 20:14:22.651637 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/331af24f-c271-4af6-96c9-2a22e95336e8-cilium-cgroup\") pod \"cilium-tp49v\" (UID: \"331af24f-c271-4af6-96c9-2a22e95336e8\") " pod="kube-system/cilium-tp49v"
Apr 13 20:14:22.652311 kubelet[2684]: I0413 20:14:22.651672 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/331af24f-c271-4af6-96c9-2a22e95336e8-host-proc-sys-net\") pod \"cilium-tp49v\" (UID: \"331af24f-c271-4af6-96c9-2a22e95336e8\") " pod="kube-system/cilium-tp49v"
Apr 13 20:14:22.652311 kubelet[2684]: I0413 20:14:22.651700 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/331af24f-c271-4af6-96c9-2a22e95336e8-host-proc-sys-kernel\") pod \"cilium-tp49v\" (UID: \"331af24f-c271-4af6-96c9-2a22e95336e8\") " pod="kube-system/cilium-tp49v"
Apr 13 20:14:22.652311 kubelet[2684]: I0413 20:14:22.651739 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/331af24f-c271-4af6-96c9-2a22e95336e8-hostproc\") pod \"cilium-tp49v\" (UID: \"331af24f-c271-4af6-96c9-2a22e95336e8\") " pod="kube-system/cilium-tp49v"
Apr 13 20:14:22.652311 kubelet[2684]: I0413 20:14:22.651767 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/331af24f-c271-4af6-96c9-2a22e95336e8-cni-path\") pod \"cilium-tp49v\" (UID: \"331af24f-c271-4af6-96c9-2a22e95336e8\") " pod="kube-system/cilium-tp49v"
Apr 13 20:14:22.652311 kubelet[2684]: I0413 20:14:22.651801 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/331af24f-c271-4af6-96c9-2a22e95336e8-lib-modules\") pod \"cilium-tp49v\" (UID: \"331af24f-c271-4af6-96c9-2a22e95336e8\") " pod="kube-system/cilium-tp49v"
Apr 13 20:14:22.652311 kubelet[2684]: I0413 20:14:22.651829 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/331af24f-c271-4af6-96c9-2a22e95336e8-etc-cni-netd\") pod \"cilium-tp49v\" (UID: \"331af24f-c271-4af6-96c9-2a22e95336e8\") " pod="kube-system/cilium-tp49v"
Apr 13 20:14:22.652637 kubelet[2684]: I0413 20:14:22.651860 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/331af24f-c271-4af6-96c9-2a22e95336e8-xtables-lock\") pod \"cilium-tp49v\" (UID: \"331af24f-c271-4af6-96c9-2a22e95336e8\") " pod="kube-system/cilium-tp49v"
Apr 13 20:14:22.654421 sshd[4465]: Accepted publickey for core from 4.175.71.9 port 57130 ssh2: RSA SHA256:gQAv84QRWsNzQzQGG1TKteeG+h41qyFSg3i58ChsR9o
Apr 13 20:14:22.655984 sshd[4465]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:14:22.662802 systemd-logind[1487]: New session 30 of user core.
Apr 13 20:14:22.674919 systemd[1]: Started session-30.scope - Session 30 of User core.
Apr 13 20:14:22.783185 sshd[4465]: pam_unix(sshd:session): session closed for user core
Apr 13 20:14:22.818129 systemd[1]: sshd@27-10.244.14.202:22-4.175.71.9:57130.service: Deactivated successfully.
Apr 13 20:14:22.821428 systemd[1]: session-30.scope: Deactivated successfully.
Apr 13 20:14:22.823128 systemd-logind[1487]: Session 30 logged out. Waiting for processes to exit.
Apr 13 20:14:22.833946 systemd[1]: Started sshd@28-10.244.14.202:22-4.175.71.9:57132.service - OpenSSH per-connection server daemon (4.175.71.9:57132).
Apr 13 20:14:22.836661 systemd-logind[1487]: Removed session 30.
Apr 13 20:14:22.867263 containerd[1505]: time="2026-04-13T20:14:22.867152209Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-tp49v,Uid:331af24f-c271-4af6-96c9-2a22e95336e8,Namespace:kube-system,Attempt:0,}"
Apr 13 20:14:22.913702 containerd[1505]: time="2026-04-13T20:14:22.913526249Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 13 20:14:22.913702 containerd[1505]: time="2026-04-13T20:14:22.913630439Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 13 20:14:22.914259 containerd[1505]: time="2026-04-13T20:14:22.913660661Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 13 20:14:22.914259 containerd[1505]: time="2026-04-13T20:14:22.913819214Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 13 20:14:22.950704 systemd[1]: Started cri-containerd-075e9811fcf561083c5f83d8340984a03705dd45f56fb44619cee9fbaf3f662d.scope - libcontainer container 075e9811fcf561083c5f83d8340984a03705dd45f56fb44619cee9fbaf3f662d.
Apr 13 20:14:22.968331 sshd[4477]: Accepted publickey for core from 4.175.71.9 port 57132 ssh2: RSA SHA256:gQAv84QRWsNzQzQGG1TKteeG+h41qyFSg3i58ChsR9o
Apr 13 20:14:22.971329 sshd[4477]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:14:22.983428 systemd-logind[1487]: New session 31 of user core.
Apr 13 20:14:22.989664 systemd[1]: Started session-31.scope - Session 31 of User core.
Apr 13 20:14:23.005528 containerd[1505]: time="2026-04-13T20:14:23.005452776Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-tp49v,Uid:331af24f-c271-4af6-96c9-2a22e95336e8,Namespace:kube-system,Attempt:0,} returns sandbox id \"075e9811fcf561083c5f83d8340984a03705dd45f56fb44619cee9fbaf3f662d\""
Apr 13 20:14:23.013055 containerd[1505]: time="2026-04-13T20:14:23.012987500Z" level=info msg="CreateContainer within sandbox \"075e9811fcf561083c5f83d8340984a03705dd45f56fb44619cee9fbaf3f662d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Apr 13 20:14:23.024139 containerd[1505]: time="2026-04-13T20:14:23.024001021Z" level=info msg="CreateContainer within sandbox \"075e9811fcf561083c5f83d8340984a03705dd45f56fb44619cee9fbaf3f662d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"bda5ded9352ee0ada95bbc9f168f15b4feb46a19a448605219312d50fd1be3ab\""
Apr 13 20:14:23.025643 containerd[1505]: time="2026-04-13T20:14:23.025575497Z" level=info msg="StartContainer for \"bda5ded9352ee0ada95bbc9f168f15b4feb46a19a448605219312d50fd1be3ab\""
Apr 13 20:14:23.073763 systemd[1]: Started cri-containerd-bda5ded9352ee0ada95bbc9f168f15b4feb46a19a448605219312d50fd1be3ab.scope - libcontainer container bda5ded9352ee0ada95bbc9f168f15b4feb46a19a448605219312d50fd1be3ab.
Apr 13 20:14:23.123182 containerd[1505]: time="2026-04-13T20:14:23.123113577Z" level=info msg="StartContainer for \"bda5ded9352ee0ada95bbc9f168f15b4feb46a19a448605219312d50fd1be3ab\" returns successfully"
Apr 13 20:14:23.152803 systemd[1]: cri-containerd-bda5ded9352ee0ada95bbc9f168f15b4feb46a19a448605219312d50fd1be3ab.scope: Deactivated successfully.
Apr 13 20:14:23.207033 containerd[1505]: time="2026-04-13T20:14:23.206869690Z" level=info msg="shim disconnected" id=bda5ded9352ee0ada95bbc9f168f15b4feb46a19a448605219312d50fd1be3ab namespace=k8s.io
Apr 13 20:14:23.207033 containerd[1505]: time="2026-04-13T20:14:23.207017236Z" level=warning msg="cleaning up after shim disconnected" id=bda5ded9352ee0ada95bbc9f168f15b4feb46a19a448605219312d50fd1be3ab namespace=k8s.io
Apr 13 20:14:23.207033 containerd[1505]: time="2026-04-13T20:14:23.207040303Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 13 20:14:23.793327 containerd[1505]: time="2026-04-13T20:14:23.793234547Z" level=info msg="CreateContainer within sandbox \"075e9811fcf561083c5f83d8340984a03705dd45f56fb44619cee9fbaf3f662d\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Apr 13 20:14:23.811099 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3347279668.mount: Deactivated successfully.
Apr 13 20:14:23.813522 containerd[1505]: time="2026-04-13T20:14:23.811865411Z" level=info msg="CreateContainer within sandbox \"075e9811fcf561083c5f83d8340984a03705dd45f56fb44619cee9fbaf3f662d\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"e5c1e6b92fdf86064c013a50fe320cbb5c8d16eab522620c3e974536826ffd9e\""
Apr 13 20:14:23.814193 containerd[1505]: time="2026-04-13T20:14:23.813979833Z" level=info msg="StartContainer for \"e5c1e6b92fdf86064c013a50fe320cbb5c8d16eab522620c3e974536826ffd9e\""
Apr 13 20:14:23.862696 systemd[1]: Started cri-containerd-e5c1e6b92fdf86064c013a50fe320cbb5c8d16eab522620c3e974536826ffd9e.scope - libcontainer container e5c1e6b92fdf86064c013a50fe320cbb5c8d16eab522620c3e974536826ffd9e.
Apr 13 20:14:23.921724 containerd[1505]: time="2026-04-13T20:14:23.921637313Z" level=info msg="StartContainer for \"e5c1e6b92fdf86064c013a50fe320cbb5c8d16eab522620c3e974536826ffd9e\" returns successfully"
Apr 13 20:14:23.937390 systemd[1]: cri-containerd-e5c1e6b92fdf86064c013a50fe320cbb5c8d16eab522620c3e974536826ffd9e.scope: Deactivated successfully.
Apr 13 20:14:23.995774 containerd[1505]: time="2026-04-13T20:14:23.995658348Z" level=info msg="shim disconnected" id=e5c1e6b92fdf86064c013a50fe320cbb5c8d16eab522620c3e974536826ffd9e namespace=k8s.io
Apr 13 20:14:23.995774 containerd[1505]: time="2026-04-13T20:14:23.995747331Z" level=warning msg="cleaning up after shim disconnected" id=e5c1e6b92fdf86064c013a50fe320cbb5c8d16eab522620c3e974536826ffd9e namespace=k8s.io
Apr 13 20:14:23.995774 containerd[1505]: time="2026-04-13T20:14:23.995765428Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 13 20:14:24.016986 containerd[1505]: time="2026-04-13T20:14:24.016871732Z" level=warning msg="cleanup warnings time=\"2026-04-13T20:14:24Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Apr 13 20:14:24.774233 systemd[1]: run-containerd-runc-k8s.io-e5c1e6b92fdf86064c013a50fe320cbb5c8d16eab522620c3e974536826ffd9e-runc.8LgWtz.mount: Deactivated successfully.
Apr 13 20:14:24.774422 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e5c1e6b92fdf86064c013a50fe320cbb5c8d16eab522620c3e974536826ffd9e-rootfs.mount: Deactivated successfully.
Apr 13 20:14:24.811440 containerd[1505]: time="2026-04-13T20:14:24.810759470Z" level=info msg="CreateContainer within sandbox \"075e9811fcf561083c5f83d8340984a03705dd45f56fb44619cee9fbaf3f662d\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Apr 13 20:14:24.833414 containerd[1505]: time="2026-04-13T20:14:24.833202201Z" level=info msg="CreateContainer within sandbox \"075e9811fcf561083c5f83d8340984a03705dd45f56fb44619cee9fbaf3f662d\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"cccc7abd83750904dada36d4cfee006d920501654ce056d0c128cc89bacee59e\""
Apr 13 20:14:24.834063 containerd[1505]: time="2026-04-13T20:14:24.834022021Z" level=info msg="StartContainer for \"cccc7abd83750904dada36d4cfee006d920501654ce056d0c128cc89bacee59e\""
Apr 13 20:14:24.883430 systemd[1]: Started cri-containerd-cccc7abd83750904dada36d4cfee006d920501654ce056d0c128cc89bacee59e.scope - libcontainer container cccc7abd83750904dada36d4cfee006d920501654ce056d0c128cc89bacee59e.
Apr 13 20:14:24.921123 containerd[1505]: time="2026-04-13T20:14:24.920918376Z" level=info msg="StartContainer for \"cccc7abd83750904dada36d4cfee006d920501654ce056d0c128cc89bacee59e\" returns successfully"
Apr 13 20:14:24.930248 systemd[1]: cri-containerd-cccc7abd83750904dada36d4cfee006d920501654ce056d0c128cc89bacee59e.scope: Deactivated successfully.
Apr 13 20:14:24.976064 containerd[1505]: time="2026-04-13T20:14:24.975801627Z" level=info msg="shim disconnected" id=cccc7abd83750904dada36d4cfee006d920501654ce056d0c128cc89bacee59e namespace=k8s.io
Apr 13 20:14:24.976064 containerd[1505]: time="2026-04-13T20:14:24.975872106Z" level=warning msg="cleaning up after shim disconnected" id=cccc7abd83750904dada36d4cfee006d920501654ce056d0c128cc89bacee59e namespace=k8s.io
Apr 13 20:14:24.976064 containerd[1505]: time="2026-04-13T20:14:24.975887876Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 13 20:14:25.258844 containerd[1505]: time="2026-04-13T20:14:25.258678185Z" level=info msg="StopPodSandbox for \"5d52afba6a2ed50a161a074ee2f4d8263d7126395e2fd5188e10f46838112f45\""
Apr 13 20:14:25.259422 containerd[1505]: time="2026-04-13T20:14:25.258838502Z" level=info msg="TearDown network for sandbox \"5d52afba6a2ed50a161a074ee2f4d8263d7126395e2fd5188e10f46838112f45\" successfully"
Apr 13 20:14:25.259422 containerd[1505]: time="2026-04-13T20:14:25.258861620Z" level=info msg="StopPodSandbox for \"5d52afba6a2ed50a161a074ee2f4d8263d7126395e2fd5188e10f46838112f45\" returns successfully"
Apr 13 20:14:25.260601 containerd[1505]: time="2026-04-13T20:14:25.260558900Z" level=info msg="RemovePodSandbox for \"5d52afba6a2ed50a161a074ee2f4d8263d7126395e2fd5188e10f46838112f45\""
Apr 13 20:14:25.260690 containerd[1505]: time="2026-04-13T20:14:25.260617073Z" level=info msg="Forcibly stopping sandbox \"5d52afba6a2ed50a161a074ee2f4d8263d7126395e2fd5188e10f46838112f45\""
Apr 13 20:14:25.260747 containerd[1505]: time="2026-04-13T20:14:25.260690553Z" level=info msg="TearDown network for sandbox \"5d52afba6a2ed50a161a074ee2f4d8263d7126395e2fd5188e10f46838112f45\" successfully"
Apr 13 20:14:25.274504 containerd[1505]: time="2026-04-13T20:14:25.274428383Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5d52afba6a2ed50a161a074ee2f4d8263d7126395e2fd5188e10f46838112f45\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Apr 13 20:14:25.274683 containerd[1505]: time="2026-04-13T20:14:25.274534938Z" level=info msg="RemovePodSandbox \"5d52afba6a2ed50a161a074ee2f4d8263d7126395e2fd5188e10f46838112f45\" returns successfully"
Apr 13 20:14:25.276898 containerd[1505]: time="2026-04-13T20:14:25.276424575Z" level=info msg="StopPodSandbox for \"8abf037fe15edb0adc10cf852339384ee54f46553ff91364734008c6cd2456f7\""
Apr 13 20:14:25.276898 containerd[1505]: time="2026-04-13T20:14:25.276627560Z" level=info msg="TearDown network for sandbox \"8abf037fe15edb0adc10cf852339384ee54f46553ff91364734008c6cd2456f7\" successfully"
Apr 13 20:14:25.276898 containerd[1505]: time="2026-04-13T20:14:25.276650357Z" level=info msg="StopPodSandbox for \"8abf037fe15edb0adc10cf852339384ee54f46553ff91364734008c6cd2456f7\" returns successfully"
Apr 13 20:14:25.277066 containerd[1505]: time="2026-04-13T20:14:25.277018153Z" level=info msg="RemovePodSandbox for \"8abf037fe15edb0adc10cf852339384ee54f46553ff91364734008c6cd2456f7\""
Apr 13 20:14:25.277066 containerd[1505]: time="2026-04-13T20:14:25.277052466Z" level=info msg="Forcibly stopping sandbox \"8abf037fe15edb0adc10cf852339384ee54f46553ff91364734008c6cd2456f7\""
Apr 13 20:14:25.277175 containerd[1505]: time="2026-04-13T20:14:25.277128102Z" level=info msg="TearDown network for sandbox \"8abf037fe15edb0adc10cf852339384ee54f46553ff91364734008c6cd2456f7\" successfully"
Apr 13 20:14:25.281349 containerd[1505]: time="2026-04-13T20:14:25.281288071Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8abf037fe15edb0adc10cf852339384ee54f46553ff91364734008c6cd2456f7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Apr 13 20:14:25.281970 containerd[1505]: time="2026-04-13T20:14:25.281357683Z" level=info msg="RemovePodSandbox \"8abf037fe15edb0adc10cf852339384ee54f46553ff91364734008c6cd2456f7\" returns successfully"
Apr 13 20:14:25.443680 kubelet[2684]: E0413 20:14:25.443536 2684 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 13 20:14:25.774883 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cccc7abd83750904dada36d4cfee006d920501654ce056d0c128cc89bacee59e-rootfs.mount: Deactivated successfully.
Apr 13 20:14:25.807884 containerd[1505]: time="2026-04-13T20:14:25.807822251Z" level=info msg="CreateContainer within sandbox \"075e9811fcf561083c5f83d8340984a03705dd45f56fb44619cee9fbaf3f662d\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Apr 13 20:14:25.837399 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1542215102.mount: Deactivated successfully.
Apr 13 20:14:25.841982 containerd[1505]: time="2026-04-13T20:14:25.841822795Z" level=info msg="CreateContainer within sandbox \"075e9811fcf561083c5f83d8340984a03705dd45f56fb44619cee9fbaf3f662d\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"cb998795aca1027dc9e541654536b0443189ca74d71e80e87a9148e85bbf7737\""
Apr 13 20:14:25.843307 containerd[1505]: time="2026-04-13T20:14:25.842636730Z" level=info msg="StartContainer for \"cb998795aca1027dc9e541654536b0443189ca74d71e80e87a9148e85bbf7737\""
Apr 13 20:14:25.902714 systemd[1]: Started cri-containerd-cb998795aca1027dc9e541654536b0443189ca74d71e80e87a9148e85bbf7737.scope - libcontainer container cb998795aca1027dc9e541654536b0443189ca74d71e80e87a9148e85bbf7737.
Apr 13 20:14:26.003805 containerd[1505]: time="2026-04-13T20:14:26.003752646Z" level=info msg="StartContainer for \"cb998795aca1027dc9e541654536b0443189ca74d71e80e87a9148e85bbf7737\" returns successfully"
Apr 13 20:14:26.004030 systemd[1]: cri-containerd-cb998795aca1027dc9e541654536b0443189ca74d71e80e87a9148e85bbf7737.scope: Deactivated successfully.
Apr 13 20:14:26.071874 containerd[1505]: time="2026-04-13T20:14:26.071488574Z" level=info msg="shim disconnected" id=cb998795aca1027dc9e541654536b0443189ca74d71e80e87a9148e85bbf7737 namespace=k8s.io
Apr 13 20:14:26.071874 containerd[1505]: time="2026-04-13T20:14:26.071562750Z" level=warning msg="cleaning up after shim disconnected" id=cb998795aca1027dc9e541654536b0443189ca74d71e80e87a9148e85bbf7737 namespace=k8s.io
Apr 13 20:14:26.071874 containerd[1505]: time="2026-04-13T20:14:26.071578585Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 13 20:14:26.774326 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cb998795aca1027dc9e541654536b0443189ca74d71e80e87a9148e85bbf7737-rootfs.mount: Deactivated successfully.
Apr 13 20:14:26.814761 containerd[1505]: time="2026-04-13T20:14:26.814570600Z" level=info msg="CreateContainer within sandbox \"075e9811fcf561083c5f83d8340984a03705dd45f56fb44619cee9fbaf3f662d\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Apr 13 20:14:26.852521 containerd[1505]: time="2026-04-13T20:14:26.851775853Z" level=info msg="CreateContainer within sandbox \"075e9811fcf561083c5f83d8340984a03705dd45f56fb44619cee9fbaf3f662d\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"6fab70d21d9b5d4338977b3cbfa6d24514f80686df21908158797c72410830b5\""
Apr 13 20:14:26.854357 containerd[1505]: time="2026-04-13T20:14:26.852971030Z" level=info msg="StartContainer for \"6fab70d21d9b5d4338977b3cbfa6d24514f80686df21908158797c72410830b5\""
Apr 13 20:14:26.892055 systemd[1]: Started cri-containerd-6fab70d21d9b5d4338977b3cbfa6d24514f80686df21908158797c72410830b5.scope - libcontainer container 6fab70d21d9b5d4338977b3cbfa6d24514f80686df21908158797c72410830b5.
Apr 13 20:14:26.939506 containerd[1505]: time="2026-04-13T20:14:26.938767364Z" level=info msg="StartContainer for \"6fab70d21d9b5d4338977b3cbfa6d24514f80686df21908158797c72410830b5\" returns successfully"
Apr 13 20:14:27.732535 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Apr 13 20:14:27.838335 kubelet[2684]: I0413 20:14:27.837744 2684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-tp49v" podStartSLOduration=5.837709875 podStartE2EDuration="5.837709875s" podCreationTimestamp="2026-04-13 20:14:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 20:14:27.834542942 +0000 UTC m=+122.771158288" watchObservedRunningTime="2026-04-13 20:14:27.837709875 +0000 UTC m=+122.774325199"
Apr 13 20:14:28.653362 kubelet[2684]: I0413 20:14:28.652879 2684 setters.go:618] "Node became not ready" node="srv-pcqx3.gb1.brightbox.com" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-13T20:14:28Z","lastTransitionTime":"2026-04-13T20:14:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Apr 13 20:14:31.645209 systemd-networkd[1414]: lxc_health: Link UP
Apr 13 20:14:31.654587 systemd-networkd[1414]: lxc_health: Gained carrier
Apr 13 20:14:33.316149 systemd-networkd[1414]: lxc_health: Gained IPv6LL
Apr 13 20:14:34.508031 systemd[1]: run-containerd-runc-k8s.io-6fab70d21d9b5d4338977b3cbfa6d24514f80686df21908158797c72410830b5-runc.gnqEIL.mount: Deactivated successfully.
Apr 13 20:14:34.593651 kubelet[2684]: E0413 20:14:34.593382 2684 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:60172->127.0.0.1:35047: write tcp 127.0.0.1:60172->127.0.0.1:35047: write: broken pipe
Apr 13 20:14:36.876755 sshd[4477]: pam_unix(sshd:session): session closed for user core
Apr 13 20:14:36.887621 systemd[1]: sshd@28-10.244.14.202:22-4.175.71.9:57132.service: Deactivated successfully.
Apr 13 20:14:36.893854 systemd[1]: session-31.scope: Deactivated successfully.
Apr 13 20:14:36.897811 systemd-logind[1487]: Session 31 logged out. Waiting for processes to exit.
Apr 13 20:14:36.900704 systemd-logind[1487]: Removed session 31.