Mar 13 00:34:52.853770 kernel: Linux version 6.12.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Thu Mar 12 22:08:29 -00 2026
Mar 13 00:34:52.853787 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=a2116dc4421f78fe124deb19b9ad6d70a0cb4fc0b3349854f4ce4e2904d4925d
Mar 13 00:34:52.853794 kernel: BIOS-provided physical RAM map:
Mar 13 00:34:52.853799 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Mar 13 00:34:52.853806 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ed3efff] usable
Mar 13 00:34:52.853811 kernel: BIOS-e820: [mem 0x000000007ed3f000-0x000000007edfffff] reserved
Mar 13 00:34:52.853816 kernel: BIOS-e820: [mem 0x000000007ee00000-0x000000007f8ecfff] usable
Mar 13 00:34:52.853821 kernel: BIOS-e820: [mem 0x000000007f8ed000-0x000000007fb6cfff] reserved
Mar 13 00:34:52.853825 kernel: BIOS-e820: [mem 0x000000007fb6d000-0x000000007fb7efff] ACPI data
Mar 13 00:34:52.853830 kernel: BIOS-e820: [mem 0x000000007fb7f000-0x000000007fbfefff] ACPI NVS
Mar 13 00:34:52.853835 kernel: BIOS-e820: [mem 0x000000007fbff000-0x000000007ff7bfff] usable
Mar 13 00:34:52.853839 kernel: BIOS-e820: [mem 0x000000007ff7c000-0x000000007fffffff] reserved
Mar 13 00:34:52.853844 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Mar 13 00:34:52.853851 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Mar 13 00:34:52.853856 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Mar 13 00:34:52.853861 kernel: BIOS-e820: [mem 0x0000000100000000-0x0000000179ffffff] usable
Mar 13 00:34:52.853866 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Mar 13 00:34:52.853871 kernel: NX (Execute Disable) protection: active
Mar 13 00:34:52.853878 kernel: APIC: Static calls initialized
Mar 13 00:34:52.853883 kernel: e820: update [mem 0x7dfab018-0x7dfb4a57] usable ==> usable
Mar 13 00:34:52.853888 kernel: e820: update [mem 0x7df6f018-0x7dfaa657] usable ==> usable
Mar 13 00:34:52.853892 kernel: e820: update [mem 0x7dc01018-0x7dc3c657] usable ==> usable
Mar 13 00:34:52.853897 kernel: extended physical RAM map:
Mar 13 00:34:52.853902 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Mar 13 00:34:52.853907 kernel: reserve setup_data: [mem 0x0000000000100000-0x000000007dc01017] usable
Mar 13 00:34:52.853912 kernel: reserve setup_data: [mem 0x000000007dc01018-0x000000007dc3c657] usable
Mar 13 00:34:52.853916 kernel: reserve setup_data: [mem 0x000000007dc3c658-0x000000007df6f017] usable
Mar 13 00:34:52.853921 kernel: reserve setup_data: [mem 0x000000007df6f018-0x000000007dfaa657] usable
Mar 13 00:34:52.853926 kernel: reserve setup_data: [mem 0x000000007dfaa658-0x000000007dfab017] usable
Mar 13 00:34:52.853936 kernel: reserve setup_data: [mem 0x000000007dfab018-0x000000007dfb4a57] usable
Mar 13 00:34:52.853942 kernel: reserve setup_data: [mem 0x000000007dfb4a58-0x000000007ed3efff] usable
Mar 13 00:34:52.853949 kernel: reserve setup_data: [mem 0x000000007ed3f000-0x000000007edfffff] reserved
Mar 13 00:34:52.853956 kernel: reserve setup_data: [mem 0x000000007ee00000-0x000000007f8ecfff] usable
Mar 13 00:34:52.853963 kernel: reserve setup_data: [mem 0x000000007f8ed000-0x000000007fb6cfff] reserved
Mar 13 00:34:52.853970 kernel: reserve setup_data: [mem 0x000000007fb6d000-0x000000007fb7efff] ACPI data
Mar 13 00:34:52.853974 kernel: reserve setup_data: [mem 0x000000007fb7f000-0x000000007fbfefff] ACPI NVS
Mar 13 00:34:52.853979 kernel: reserve setup_data: [mem 0x000000007fbff000-0x000000007ff7bfff] usable
Mar 13 00:34:52.853984 kernel: reserve setup_data: [mem 0x000000007ff7c000-0x000000007fffffff] reserved
Mar 13 00:34:52.853989 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Mar 13 00:34:52.853994 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Mar 13 00:34:52.854004 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Mar 13 00:34:52.854009 kernel: reserve setup_data: [mem 0x0000000100000000-0x0000000179ffffff] usable
Mar 13 00:34:52.854014 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Mar 13 00:34:52.854019 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II
Mar 13 00:34:52.854025 kernel: efi: SMBIOS=0x7f988000 SMBIOS 3.0=0x7f986000 ACPI=0x7fb7e000 ACPI 2.0=0x7fb7e014 MEMATTR=0x7e01b198 RNG=0x7fb73018
Mar 13 00:34:52.854032 kernel: random: crng init done
Mar 13 00:34:52.854037 kernel: efi: Remove mem137: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Mar 13 00:34:52.854042 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Mar 13 00:34:52.854047 kernel: secureboot: Secure boot disabled
Mar 13 00:34:52.854052 kernel: SMBIOS 3.0.0 present.
Mar 13 00:34:52.854057 kernel: DMI: Hetzner vServer/Standard PC (Q35 + ICH9, 2009), BIOS 20171111 11/11/2017
Mar 13 00:34:52.854062 kernel: DMI: Memory slots populated: 1/1
Mar 13 00:34:52.854067 kernel: Hypervisor detected: KVM
Mar 13 00:34:52.854072 kernel: last_pfn = 0x7ff7c max_arch_pfn = 0x10000000000
Mar 13 00:34:52.854077 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 13 00:34:52.854083 kernel: kvm-clock: using sched offset of 13393444924 cycles
Mar 13 00:34:52.854090 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 13 00:34:52.854095 kernel: tsc: Detected 2400.000 MHz processor
Mar 13 00:34:52.854101 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 13 00:34:52.854106 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 13 00:34:52.854111 kernel: last_pfn = 0x17a000 max_arch_pfn = 0x10000000000
Mar 13 00:34:52.854117 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Mar 13 00:34:52.854122 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 13 00:34:52.854127 kernel: last_pfn = 0x7ff7c max_arch_pfn = 0x10000000000
Mar 13 00:34:52.854135 kernel: Using GB pages for direct mapping
Mar 13 00:34:52.854172 kernel: ACPI: Early table checksum verification disabled
Mar 13 00:34:52.857553 kernel: ACPI: RSDP 0x000000007FB7E014 000024 (v02 BOCHS )
Mar 13 00:34:52.857561 kernel: ACPI: XSDT 0x000000007FB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Mar 13 00:34:52.857567 kernel: ACPI: FACP 0x000000007FB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 13 00:34:52.857572 kernel: ACPI: DSDT 0x000000007FB7A000 002443 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 13 00:34:52.857577 kernel: ACPI: FACS 0x000000007FBDD000 000040
Mar 13 00:34:52.857583 kernel: ACPI: APIC 0x000000007FB78000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 13 00:34:52.857588 kernel: ACPI: HPET 0x000000007FB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 13 00:34:52.857594 kernel: ACPI: MCFG 0x000000007FB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 13 00:34:52.857606 kernel: ACPI: WAET 0x000000007FB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 13 00:34:52.857612 kernel: ACPI: BGRT 0x000000007FB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Mar 13 00:34:52.857617 kernel: ACPI: Reserving FACP table memory at [mem 0x7fb79000-0x7fb790f3]
Mar 13 00:34:52.857622 kernel: ACPI: Reserving DSDT table memory at [mem 0x7fb7a000-0x7fb7c442]
Mar 13 00:34:52.857627 kernel: ACPI: Reserving FACS table memory at [mem 0x7fbdd000-0x7fbdd03f]
Mar 13 00:34:52.857632 kernel: ACPI: Reserving APIC table memory at [mem 0x7fb78000-0x7fb7807f]
Mar 13 00:34:52.857638 kernel: ACPI: Reserving HPET table memory at [mem 0x7fb77000-0x7fb77037]
Mar 13 00:34:52.857643 kernel: ACPI: Reserving MCFG table memory at [mem 0x7fb76000-0x7fb7603b]
Mar 13 00:34:52.857648 kernel: ACPI: Reserving WAET table memory at [mem 0x7fb75000-0x7fb75027]
Mar 13 00:34:52.857656 kernel: ACPI: Reserving BGRT table memory at [mem 0x7fb74000-0x7fb74037]
Mar 13 00:34:52.857661 kernel: No NUMA configuration found
Mar 13 00:34:52.857667 kernel: Faking a node at [mem 0x0000000000000000-0x0000000179ffffff]
Mar 13 00:34:52.857672 kernel: NODE_DATA(0) allocated [mem 0x179ff6dc0-0x179ffdfff]
Mar 13 00:34:52.857678 kernel: Zone ranges:
Mar 13 00:34:52.857683 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 13 00:34:52.857688 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Mar 13 00:34:52.857694 kernel: Normal [mem 0x0000000100000000-0x0000000179ffffff]
Mar 13 00:34:52.857699 kernel: Device empty
Mar 13 00:34:52.857706 kernel: Movable zone start for each node
Mar 13 00:34:52.857711 kernel: Early memory node ranges
Mar 13 00:34:52.857716 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Mar 13 00:34:52.857722 kernel: node 0: [mem 0x0000000000100000-0x000000007ed3efff]
Mar 13 00:34:52.857727 kernel: node 0: [mem 0x000000007ee00000-0x000000007f8ecfff]
Mar 13 00:34:52.857732 kernel: node 0: [mem 0x000000007fbff000-0x000000007ff7bfff]
Mar 13 00:34:52.857737 kernel: node 0: [mem 0x0000000100000000-0x0000000179ffffff]
Mar 13 00:34:52.857742 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x0000000179ffffff]
Mar 13 00:34:52.857748 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 13 00:34:52.857755 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Mar 13 00:34:52.857760 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Mar 13 00:34:52.857765 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Mar 13 00:34:52.857771 kernel: On node 0, zone Normal: 132 pages in unavailable ranges
Mar 13 00:34:52.857776 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
Mar 13 00:34:52.857781 kernel: ACPI: PM-Timer IO Port: 0x608
Mar 13 00:34:52.857786 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 13 00:34:52.857792 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Mar 13 00:34:52.857797 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Mar 13 00:34:52.857802 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 13 00:34:52.857810 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 13 00:34:52.857815 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 13 00:34:52.857820 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 13 00:34:52.857825 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 13 00:34:52.857830 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Mar 13 00:34:52.857836 kernel: CPU topo: Max. logical packages: 1
Mar 13 00:34:52.857841 kernel: CPU topo: Max. logical dies: 1
Mar 13 00:34:52.857854 kernel: CPU topo: Max. dies per package: 1
Mar 13 00:34:52.857860 kernel: CPU topo: Max. threads per core: 1
Mar 13 00:34:52.857866 kernel: CPU topo: Num. cores per package: 2
Mar 13 00:34:52.857871 kernel: CPU topo: Num. threads per package: 2
Mar 13 00:34:52.857879 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Mar 13 00:34:52.857884 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Mar 13 00:34:52.857890 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices
Mar 13 00:34:52.857895 kernel: Booting paravirtualized kernel on KVM
Mar 13 00:34:52.857901 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 13 00:34:52.857908 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Mar 13 00:34:52.857914 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Mar 13 00:34:52.857919 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Mar 13 00:34:52.857925 kernel: pcpu-alloc: [0] 0 1
Mar 13 00:34:52.857930 kernel: kvm-guest: PV spinlocks disabled, no host support
Mar 13 00:34:52.857936 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=a2116dc4421f78fe124deb19b9ad6d70a0cb4fc0b3349854f4ce4e2904d4925d
Mar 13 00:34:52.857941 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 13 00:34:52.857947 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 13 00:34:52.857954 kernel: Fallback order for Node 0: 0
Mar 13 00:34:52.857960 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1022792
Mar 13 00:34:52.857965 kernel: Policy zone: Normal
Mar 13 00:34:52.857970 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 13 00:34:52.857976 kernel: software IO TLB: area num 2.
Mar 13 00:34:52.857981 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Mar 13 00:34:52.857987 kernel: ftrace: allocating 40099 entries in 157 pages
Mar 13 00:34:52.857992 kernel: ftrace: allocated 157 pages with 5 groups
Mar 13 00:34:52.857997 kernel: Dynamic Preempt: voluntary
Mar 13 00:34:52.858003 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 13 00:34:52.858011 kernel: rcu: RCU event tracing is enabled.
Mar 13 00:34:52.858017 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Mar 13 00:34:52.858022 kernel: Trampoline variant of Tasks RCU enabled.
Mar 13 00:34:52.858028 kernel: Rude variant of Tasks RCU enabled.
Mar 13 00:34:52.858033 kernel: Tracing variant of Tasks RCU enabled.
Mar 13 00:34:52.858039 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 13 00:34:52.858044 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Mar 13 00:34:52.858050 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 13 00:34:52.858055 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 13 00:34:52.858063 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 13 00:34:52.858068 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Mar 13 00:34:52.858074 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 13 00:34:52.858079 kernel: Console: colour dummy device 80x25
Mar 13 00:34:52.858085 kernel: printk: legacy console [tty0] enabled
Mar 13 00:34:52.858090 kernel: printk: legacy console [ttyS0] enabled
Mar 13 00:34:52.858096 kernel: ACPI: Core revision 20240827
Mar 13 00:34:52.858101 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Mar 13 00:34:52.858107 kernel: APIC: Switch to symmetric I/O mode setup
Mar 13 00:34:52.858114 kernel: x2apic enabled
Mar 13 00:34:52.858120 kernel: APIC: Switched APIC routing to: physical x2apic
Mar 13 00:34:52.858125 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Mar 13 00:34:52.858132 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x22983777dd9, max_idle_ns: 440795300422 ns
Mar 13 00:34:52.858155 kernel: Calibrating delay loop (skipped) preset value.. 4800.00 BogoMIPS (lpj=2400000)
Mar 13 00:34:52.858191 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Mar 13 00:34:52.858200 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Mar 13 00:34:52.858214 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Mar 13 00:34:52.858225 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 13 00:34:52.858234 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS
Mar 13 00:34:52.858240 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Mar 13 00:34:52.858246 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Mar 13 00:34:52.858251 kernel: active return thunk: srso_alias_return_thunk
Mar 13 00:34:52.858257 kernel: Speculative Return Stack Overflow: Mitigation: Safe RET
Mar 13 00:34:52.858262 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Mar 13 00:34:52.858272 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Mar 13 00:34:52.858277 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 13 00:34:52.858285 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 13 00:34:52.858291 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 13 00:34:52.858296 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Mar 13 00:34:52.858302 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Mar 13 00:34:52.858307 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Mar 13 00:34:52.858313 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Mar 13 00:34:52.858318 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 13 00:34:52.858323 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Mar 13 00:34:52.858329 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Mar 13 00:34:52.858336 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Mar 13 00:34:52.858341 kernel: x86/fpu: xstate_offset[9]: 2432, xstate_sizes[9]: 8
Mar 13 00:34:52.858347 kernel: x86/fpu: Enabled xstate features 0x2e7, context size is 2440 bytes, using 'compacted' format.
Mar 13 00:34:52.858352 kernel: Freeing SMP alternatives memory: 32K
Mar 13 00:34:52.858358 kernel: pid_max: default: 32768 minimum: 301
Mar 13 00:34:52.858363 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Mar 13 00:34:52.858369 kernel: landlock: Up and running.
Mar 13 00:34:52.858374 kernel: SELinux: Initializing.
Mar 13 00:34:52.858380 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 13 00:34:52.858387 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 13 00:34:52.858393 kernel: smpboot: CPU0: AMD EPYC-Genoa Processor (family: 0x19, model: 0x11, stepping: 0x0)
Mar 13 00:34:52.858398 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Mar 13 00:34:52.858404 kernel: ... version:                0
Mar 13 00:34:52.858409 kernel: ... bit width:              48
Mar 13 00:34:52.858415 kernel: ... generic registers:      6
Mar 13 00:34:52.858420 kernel: ... value mask:             0000ffffffffffff
Mar 13 00:34:52.858426 kernel: ... max period:             00007fffffffffff
Mar 13 00:34:52.858431 kernel: ... fixed-purpose events:   0
Mar 13 00:34:52.858438 kernel: ... event mask:             000000000000003f
Mar 13 00:34:52.858444 kernel: signal: max sigframe size: 3376
Mar 13 00:34:52.858449 kernel: rcu: Hierarchical SRCU implementation.
Mar 13 00:34:52.858455 kernel: rcu: Max phase no-delay instances is 400.
Mar 13 00:34:52.858461 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Mar 13 00:34:52.858466 kernel: smp: Bringing up secondary CPUs ...
Mar 13 00:34:52.858472 kernel: smpboot: x86: Booting SMP configuration:
Mar 13 00:34:52.858477 kernel: .... node  #0, CPUs:       #1
Mar 13 00:34:52.858483 kernel: smp: Brought up 1 node, 2 CPUs
Mar 13 00:34:52.858488 kernel: smpboot: Total of 2 processors activated (9600.00 BogoMIPS)
Mar 13 00:34:52.858496 kernel: Memory: 3848512K/4091168K available (14336K kernel code, 2445K rwdata, 26064K rodata, 46200K init, 2560K bss, 237024K reserved, 0K cma-reserved)
Mar 13 00:34:52.858502 kernel: devtmpfs: initialized
Mar 13 00:34:52.858507 kernel: x86/mm: Memory block size: 128MB
Mar 13 00:34:52.858513 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7fb7f000-0x7fbfefff] (524288 bytes)
Mar 13 00:34:52.858518 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 13 00:34:52.858533 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Mar 13 00:34:52.858538 kernel: pinctrl core: initialized pinctrl subsystem
Mar 13 00:34:52.858544 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 13 00:34:52.858551 kernel: audit: initializing netlink subsys (disabled)
Mar 13 00:34:52.858557 kernel: audit: type=2000 audit(1773362091.046:1): state=initialized audit_enabled=0 res=1
Mar 13 00:34:52.858562 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 13 00:34:52.858568 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 13 00:34:52.858573 kernel: cpuidle: using governor menu
Mar 13 00:34:52.858579 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 13 00:34:52.858584 kernel: dca service started, version 1.12.1
Mar 13 00:34:52.858591 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
Mar 13 00:34:52.858596 kernel: PCI: Using configuration type 1 for base access
Mar 13 00:34:52.858604 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 13 00:34:52.858609 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 13 00:34:52.858615 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Mar 13 00:34:52.858620 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 13 00:34:52.858626 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Mar 13 00:34:52.858631 kernel: ACPI: Added _OSI(Module Device)
Mar 13 00:34:52.858636 kernel: ACPI: Added _OSI(Processor Device)
Mar 13 00:34:52.858642 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 13 00:34:52.858648 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 13 00:34:52.858655 kernel: ACPI: Interpreter enabled
Mar 13 00:34:52.858660 kernel: ACPI: PM: (supports S0 S5)
Mar 13 00:34:52.858666 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 13 00:34:52.858671 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 13 00:34:52.858677 kernel: PCI: Using E820 reservations for host bridge windows
Mar 13 00:34:52.858682 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Mar 13 00:34:52.858688 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 13 00:34:52.858848 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 13 00:34:52.858954 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Mar 13 00:34:52.859054 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Mar 13 00:34:52.859061 kernel: PCI host bridge to bus 0000:00
Mar 13 00:34:52.859263 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Mar 13 00:34:52.859366 kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Mar 13 00:34:52.859458 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 13 00:34:52.859556 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xdfffffff window]
Mar 13 00:34:52.859650 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Mar 13 00:34:52.859738 kernel: pci_bus 0000:00: root bus resource [mem 0xc000000000-0xc7ffffffff window]
Mar 13 00:34:52.859826 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 13 00:34:52.859938 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Mar 13 00:34:52.860049 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Mar 13 00:34:52.860187 kernel: pci 0000:00:01.0: BAR 0 [mem 0x80000000-0x807fffff pref]
Mar 13 00:34:52.860321 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc060500000-0xc060503fff 64bit pref]
Mar 13 00:34:52.860422 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8138a000-0x8138afff]
Mar 13 00:34:52.860518 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
Mar 13 00:34:52.860626 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Mar 13 00:34:52.860731 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Mar 13 00:34:52.860828 kernel: pci 0000:00:02.0: BAR 0 [mem 0x81389000-0x81389fff]
Mar 13 00:34:52.860923 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Mar 13 00:34:52.861022 kernel: pci 0000:00:02.0: bridge window [mem 0x81200000-0x812fffff]
Mar 13 00:34:52.861118 kernel: pci 0000:00:02.0: bridge window [mem 0xc060000000-0xc0600fffff 64bit pref]
Mar 13 00:34:52.861303 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Mar 13 00:34:52.861409 kernel: pci 0000:00:02.1: BAR 0 [mem 0x81388000-0x81388fff]
Mar 13 00:34:52.861507 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Mar 13 00:34:52.861616 kernel: pci 0000:00:02.1: bridge window [mem 0x81100000-0x811fffff]
Mar 13 00:34:52.861718 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Mar 13 00:34:52.861819 kernel: pci 0000:00:02.2: BAR 0 [mem 0x81387000-0x81387fff]
Mar 13 00:34:52.861915 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Mar 13 00:34:52.862011 kernel: pci 0000:00:02.2: bridge window [mem 0x81000000-0x810fffff]
Mar 13 00:34:52.862106 kernel: pci 0000:00:02.2: bridge window [mem 0xc060100000-0xc0601fffff 64bit pref]
Mar 13 00:34:52.864282 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Mar 13 00:34:52.864392 kernel: pci 0000:00:02.3: BAR 0 [mem 0x81386000-0x81386fff]
Mar 13 00:34:52.864490 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Mar 13 00:34:52.864603 kernel: pci 0000:00:02.3: bridge window [mem 0xc060200000-0xc0602fffff 64bit pref]
Mar 13 00:34:52.864708 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Mar 13 00:34:52.864805 kernel: pci 0000:00:02.4: BAR 0 [mem 0x81385000-0x81385fff]
Mar 13 00:34:52.864900 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Mar 13 00:34:52.864996 kernel: pci 0000:00:02.4: bridge window [mem 0x80f00000-0x80ffffff]
Mar 13 00:34:52.865092 kernel: pci 0000:00:02.4: bridge window [mem 0xc060300000-0xc0603fffff 64bit pref]
Mar 13 00:34:52.866263 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Mar 13 00:34:52.866376 kernel: pci 0000:00:02.5: BAR 0 [mem 0x81384000-0x81384fff]
Mar 13 00:34:52.866473 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Mar 13 00:34:52.866580 kernel: pci 0000:00:02.5: bridge window [mem 0x80e00000-0x80efffff]
Mar 13 00:34:52.866677 kernel: pci 0000:00:02.5: bridge window [mem 0xc060400000-0xc0604fffff 64bit pref]
Mar 13 00:34:52.866781 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Mar 13 00:34:52.866878 kernel: pci 0000:00:02.6: BAR 0 [mem 0x81383000-0x81383fff]
Mar 13 00:34:52.866973 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Mar 13 00:34:52.867071 kernel: pci 0000:00:02.6: bridge window [mem 0x80c00000-0x80dfffff]
Mar 13 00:34:52.868360 kernel: pci 0000:00:02.6: bridge window [mem 0xc000000000-0xc01fffffff 64bit pref]
Mar 13 00:34:52.868480 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Mar 13 00:34:52.869282 kernel: pci 0000:00:02.7: BAR 0 [mem 0x81382000-0x81382fff]
Mar 13 00:34:52.869390 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Mar 13 00:34:52.869488 kernel: pci 0000:00:02.7: bridge window [mem 0x80a00000-0x80bfffff]
Mar 13 00:34:52.869601 kernel: pci 0000:00:02.7: bridge window [mem 0xc020000000-0xc03fffffff 64bit pref]
Mar 13 00:34:52.869706 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Mar 13 00:34:52.869802 kernel: pci 0000:00:03.0: BAR 0 [mem 0x81381000-0x81381fff]
Mar 13 00:34:52.869898 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Mar 13 00:34:52.869993 kernel: pci 0000:00:03.0: bridge window [mem 0x80800000-0x809fffff]
Mar 13 00:34:52.870087 kernel: pci 0000:00:03.0: bridge window [mem 0xc040000000-0xc05fffffff 64bit pref]
Mar 13 00:34:52.871268 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Mar 13 00:34:52.871382 kernel: pci 0000:00:1f.0: quirk: [io  0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Mar 13 00:34:52.871488 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Mar 13 00:34:52.871596 kernel: pci 0000:00:1f.2: BAR 4 [io  0x6040-0x605f]
Mar 13 00:34:52.871693 kernel: pci 0000:00:1f.2: BAR 5 [mem 0x81380000-0x81380fff]
Mar 13 00:34:52.871794 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Mar 13 00:34:52.871890 kernel: pci 0000:00:1f.3: BAR 4 [io  0x6000-0x603f]
Mar 13 00:34:52.871997 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000 PCIe Endpoint
Mar 13 00:34:52.872101 kernel: pci 0000:01:00.0: BAR 1 [mem 0x81200000-0x81200fff]
Mar 13 00:34:52.872256 kernel: pci 0000:01:00.0: BAR 4 [mem 0xc060000000-0xc060003fff 64bit pref]
Mar 13 00:34:52.872381 kernel: pci 0000:01:00.0: ROM [mem 0xfff80000-0xffffffff pref]
Mar 13 00:34:52.872480 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Mar 13 00:34:52.872602 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330 PCIe Endpoint
Mar 13 00:34:52.872703 kernel: pci 0000:02:00.0: BAR 0 [mem 0x81100000-0x81103fff 64bit]
Mar 13 00:34:52.872801 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Mar 13 00:34:52.872912 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000 PCIe Endpoint
Mar 13 00:34:52.873013 kernel: pci 0000:03:00.0: BAR 1 [mem 0x81000000-0x81000fff]
Mar 13 00:34:52.873113 kernel: pci 0000:03:00.0: BAR 4 [mem 0xc060100000-0xc060103fff 64bit pref]
Mar 13 00:34:52.874645 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Mar 13 00:34:52.874765 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00 PCIe Endpoint
Mar 13 00:34:52.874868 kernel: pci 0000:04:00.0: BAR 4 [mem 0xc060200000-0xc060203fff 64bit pref]
Mar 13 00:34:52.874969 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Mar 13 00:34:52.875078 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00 PCIe Endpoint
Mar 13 00:34:52.875239 kernel: pci 0000:05:00.0: BAR 1 [mem 0x80f00000-0x80f00fff]
Mar 13 00:34:52.875358 kernel: pci 0000:05:00.0: BAR 4 [mem 0xc060300000-0xc060303fff 64bit pref]
Mar 13 00:34:52.875457 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Mar 13 00:34:52.875575 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000 PCIe Endpoint
Mar 13 00:34:52.875677 kernel: pci 0000:06:00.0: BAR 1 [mem 0x80e00000-0x80e00fff]
Mar 13 00:34:52.875781 kernel: pci 0000:06:00.0: BAR 4 [mem 0xc060400000-0xc060403fff 64bit pref]
Mar 13 00:34:52.875878 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Mar 13 00:34:52.875886 kernel: acpiphp: Slot [0] registered
Mar 13 00:34:52.875992 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000 PCIe Endpoint
Mar 13 00:34:52.877599 kernel: pci 0000:07:00.0: BAR 1 [mem 0x80c00000-0x80c00fff]
Mar 13 00:34:52.877712 kernel: pci 0000:07:00.0: BAR 4 [mem 0xc000000000-0xc000003fff 64bit pref]
Mar 13 00:34:52.877814 kernel: pci 0000:07:00.0: ROM [mem 0xfff80000-0xffffffff pref]
Mar 13 00:34:52.877911 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Mar 13 00:34:52.877923 kernel: acpiphp: Slot [0-2] registered
Mar 13 00:34:52.878019 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Mar 13 00:34:52.878026 kernel: acpiphp: Slot [0-3] registered
Mar 13 00:34:52.878121 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Mar 13 00:34:52.878132 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 13 00:34:52.878164 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 13 00:34:52.878175 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 13 00:34:52.878189 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 13 00:34:52.878203 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Mar 13 00:34:52.878217 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Mar 13 00:34:52.878230 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Mar 13 00:34:52.878237 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Mar 13 00:34:52.878243 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Mar 13 00:34:52.878249 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Mar 13 00:34:52.878255 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Mar 13 00:34:52.878261 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Mar 13 00:34:52.878270 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Mar 13 00:34:52.878275 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Mar 13 00:34:52.878281 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Mar 13 00:34:52.878289 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Mar 13 00:34:52.878295 kernel: iommu: Default domain type: Translated
Mar 13 00:34:52.878301 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 13 00:34:52.878309 kernel: efivars: Registered efivars operations
Mar 13 00:34:52.878314 kernel: PCI: Using ACPI for IRQ routing
Mar 13 00:34:52.878320 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 13 00:34:52.878326 kernel: e820: reserve RAM buffer [mem 0x7dc01018-0x7fffffff]
Mar 13 00:34:52.878331 kernel: e820: reserve RAM buffer [mem 0x7df6f018-0x7fffffff]
Mar 13 00:34:52.878337 kernel: e820: reserve RAM buffer [mem 0x7dfab018-0x7fffffff]
Mar 13 00:34:52.878343 kernel: e820: reserve RAM buffer [mem 0x7ed3f000-0x7fffffff]
Mar 13 00:34:52.878348 kernel: e820: reserve RAM buffer [mem 0x7f8ed000-0x7fffffff]
Mar 13 00:34:52.878354 kernel: e820: reserve RAM buffer [mem 0x7ff7c000-0x7fffffff]
Mar 13 00:34:52.878362 kernel: e820: reserve RAM buffer [mem 0x17a000000-0x17bffffff]
Mar 13 00:34:52.878479 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Mar 13 00:34:52.878587 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Mar 13 00:34:52.878684 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Mar 13 00:34:52.878691 kernel: vgaarb: loaded
Mar 13 00:34:52.878697 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Mar 13 00:34:52.878703 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Mar 13 00:34:52.878709 kernel: clocksource: Switched to clocksource kvm-clock
Mar 13 00:34:52.878715 kernel: VFS: Disk quotas dquot_6.6.0
Mar 13 00:34:52.878723 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 13 00:34:52.878729 kernel: pnp: PnP ACPI init
Mar 13 00:34:52.878833 kernel: 
system 00:04: [mem 0xe0000000-0xefffffff window] has been reserved Mar 13 00:34:52.878841 kernel: pnp: PnP ACPI: found 5 devices Mar 13 00:34:52.878847 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Mar 13 00:34:52.878852 kernel: NET: Registered PF_INET protocol family Mar 13 00:34:52.878858 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Mar 13 00:34:52.878864 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Mar 13 00:34:52.878872 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Mar 13 00:34:52.878878 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Mar 13 00:34:52.878883 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Mar 13 00:34:52.878889 kernel: TCP: Hash tables configured (established 32768 bind 32768) Mar 13 00:34:52.878895 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Mar 13 00:34:52.878901 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Mar 13 00:34:52.878910 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Mar 13 00:34:52.878915 kernel: NET: Registered PF_XDP protocol family Mar 13 00:34:52.879021 kernel: pci 0000:01:00.0: ROM [mem 0xfff80000-0xffffffff pref]: can't claim; no compatible bridge window Mar 13 00:34:52.879133 kernel: pci 0000:07:00.0: ROM [mem 0xfff80000-0xffffffff pref]: can't claim; no compatible bridge window Mar 13 00:34:52.880811 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Mar 13 00:34:52.880921 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Mar 13 00:34:52.881030 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Mar 13 00:34:52.881175 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x1fff]: assigned Mar 13 00:34:52.881303 kernel: pci 0000:00:02.7: bridge 
window [io 0x2000-0x2fff]: assigned Mar 13 00:34:52.881404 kernel: pci 0000:00:03.0: bridge window [io 0x3000-0x3fff]: assigned Mar 13 00:34:52.881510 kernel: pci 0000:01:00.0: ROM [mem 0x81280000-0x812fffff pref]: assigned Mar 13 00:34:52.881619 kernel: pci 0000:00:02.0: PCI bridge to [bus 01] Mar 13 00:34:52.881717 kernel: pci 0000:00:02.0: bridge window [mem 0x81200000-0x812fffff] Mar 13 00:34:52.881812 kernel: pci 0000:00:02.0: bridge window [mem 0xc060000000-0xc0600fffff 64bit pref] Mar 13 00:34:52.881908 kernel: pci 0000:00:02.1: PCI bridge to [bus 02] Mar 13 00:34:52.882004 kernel: pci 0000:00:02.1: bridge window [mem 0x81100000-0x811fffff] Mar 13 00:34:52.882100 kernel: pci 0000:00:02.2: PCI bridge to [bus 03] Mar 13 00:34:52.885275 kernel: pci 0000:00:02.2: bridge window [mem 0x81000000-0x810fffff] Mar 13 00:34:52.885384 kernel: pci 0000:00:02.2: bridge window [mem 0xc060100000-0xc0601fffff 64bit pref] Mar 13 00:34:52.885488 kernel: pci 0000:00:02.3: PCI bridge to [bus 04] Mar 13 00:34:52.885597 kernel: pci 0000:00:02.3: bridge window [mem 0xc060200000-0xc0602fffff 64bit pref] Mar 13 00:34:52.885694 kernel: pci 0000:00:02.4: PCI bridge to [bus 05] Mar 13 00:34:52.885791 kernel: pci 0000:00:02.4: bridge window [mem 0x80f00000-0x80ffffff] Mar 13 00:34:52.885887 kernel: pci 0000:00:02.4: bridge window [mem 0xc060300000-0xc0603fffff 64bit pref] Mar 13 00:34:52.885983 kernel: pci 0000:00:02.5: PCI bridge to [bus 06] Mar 13 00:34:52.886081 kernel: pci 0000:00:02.5: bridge window [mem 0x80e00000-0x80efffff] Mar 13 00:34:52.886231 kernel: pci 0000:00:02.5: bridge window [mem 0xc060400000-0xc0604fffff 64bit pref] Mar 13 00:34:52.886341 kernel: pci 0000:07:00.0: ROM [mem 0x80c80000-0x80cfffff pref]: assigned Mar 13 00:34:52.887044 kernel: pci 0000:00:02.6: PCI bridge to [bus 07] Mar 13 00:34:52.887218 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x1fff] Mar 13 00:34:52.887331 kernel: pci 0000:00:02.6: bridge window [mem 0x80c00000-0x80dfffff] Mar 13 
00:34:52.887429 kernel: pci 0000:00:02.6: bridge window [mem 0xc000000000-0xc01fffffff 64bit pref] Mar 13 00:34:52.887536 kernel: pci 0000:00:02.7: PCI bridge to [bus 08] Mar 13 00:34:52.887638 kernel: pci 0000:00:02.7: bridge window [io 0x2000-0x2fff] Mar 13 00:34:52.887733 kernel: pci 0000:00:02.7: bridge window [mem 0x80a00000-0x80bfffff] Mar 13 00:34:52.887828 kernel: pci 0000:00:02.7: bridge window [mem 0xc020000000-0xc03fffffff 64bit pref] Mar 13 00:34:52.887924 kernel: pci 0000:00:03.0: PCI bridge to [bus 09] Mar 13 00:34:52.888020 kernel: pci 0000:00:03.0: bridge window [io 0x3000-0x3fff] Mar 13 00:34:52.888116 kernel: pci 0000:00:03.0: bridge window [mem 0x80800000-0x809fffff] Mar 13 00:34:52.892055 kernel: pci 0000:00:03.0: bridge window [mem 0xc040000000-0xc05fffffff 64bit pref] Mar 13 00:34:52.892192 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Mar 13 00:34:52.892324 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Mar 13 00:34:52.892425 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Mar 13 00:34:52.892518 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xdfffffff window] Mar 13 00:34:52.892621 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window] Mar 13 00:34:52.892712 kernel: pci_bus 0000:00: resource 9 [mem 0xc000000000-0xc7ffffffff window] Mar 13 00:34:52.892815 kernel: pci_bus 0000:01: resource 1 [mem 0x81200000-0x812fffff] Mar 13 00:34:52.892911 kernel: pci_bus 0000:01: resource 2 [mem 0xc060000000-0xc0600fffff 64bit pref] Mar 13 00:34:52.893014 kernel: pci_bus 0000:02: resource 1 [mem 0x81100000-0x811fffff] Mar 13 00:34:52.893118 kernel: pci_bus 0000:03: resource 1 [mem 0x81000000-0x810fffff] Mar 13 00:34:52.893262 kernel: pci_bus 0000:03: resource 2 [mem 0xc060100000-0xc0601fffff 64bit pref] Mar 13 00:34:52.893375 kernel: pci_bus 0000:04: resource 2 [mem 0xc060200000-0xc0602fffff 64bit pref] Mar 13 00:34:52.893477 kernel: pci_bus 0000:05: resource 1 [mem 
0x80f00000-0x80ffffff] Mar 13 00:34:52.893584 kernel: pci_bus 0000:05: resource 2 [mem 0xc060300000-0xc0603fffff 64bit pref] Mar 13 00:34:52.893684 kernel: pci_bus 0000:06: resource 1 [mem 0x80e00000-0x80efffff] Mar 13 00:34:52.893781 kernel: pci_bus 0000:06: resource 2 [mem 0xc060400000-0xc0604fffff 64bit pref] Mar 13 00:34:52.893880 kernel: pci_bus 0000:07: resource 0 [io 0x1000-0x1fff] Mar 13 00:34:52.893977 kernel: pci_bus 0000:07: resource 1 [mem 0x80c00000-0x80dfffff] Mar 13 00:34:52.894070 kernel: pci_bus 0000:07: resource 2 [mem 0xc000000000-0xc01fffffff 64bit pref] Mar 13 00:34:52.894266 kernel: pci_bus 0000:08: resource 0 [io 0x2000-0x2fff] Mar 13 00:34:52.894373 kernel: pci_bus 0000:08: resource 1 [mem 0x80a00000-0x80bfffff] Mar 13 00:34:52.894468 kernel: pci_bus 0000:08: resource 2 [mem 0xc020000000-0xc03fffffff 64bit pref] Mar 13 00:34:52.894585 kernel: pci_bus 0000:09: resource 0 [io 0x3000-0x3fff] Mar 13 00:34:52.894680 kernel: pci_bus 0000:09: resource 1 [mem 0x80800000-0x809fffff] Mar 13 00:34:52.894775 kernel: pci_bus 0000:09: resource 2 [mem 0xc040000000-0xc05fffffff 64bit pref] Mar 13 00:34:52.894783 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Mar 13 00:34:52.894789 kernel: PCI: CLS 0 bytes, default 64 Mar 13 00:34:52.894795 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Mar 13 00:34:52.894801 kernel: software IO TLB: mapped [mem 0x0000000077ffd000-0x000000007bffd000] (64MB) Mar 13 00:34:52.894810 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x22983777dd9, max_idle_ns: 440795300422 ns Mar 13 00:34:52.894816 kernel: Initialise system trusted keyrings Mar 13 00:34:52.894822 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Mar 13 00:34:52.894828 kernel: Key type asymmetric registered Mar 13 00:34:52.894833 kernel: Asymmetric key parser 'x509' registered Mar 13 00:34:52.894839 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Mar 13 00:34:52.894845 kernel: io scheduler 
mq-deadline registered Mar 13 00:34:52.894851 kernel: io scheduler kyber registered Mar 13 00:34:52.894857 kernel: io scheduler bfq registered Mar 13 00:34:52.894960 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Mar 13 00:34:52.895060 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Mar 13 00:34:52.895209 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Mar 13 00:34:52.895317 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Mar 13 00:34:52.895416 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Mar 13 00:34:52.895513 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Mar 13 00:34:52.895623 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Mar 13 00:34:52.895720 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Mar 13 00:34:52.895817 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Mar 13 00:34:52.895918 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Mar 13 00:34:52.896014 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Mar 13 00:34:52.896109 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Mar 13 00:34:52.896790 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Mar 13 00:34:52.896898 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Mar 13 00:34:52.896996 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Mar 13 00:34:52.897092 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Mar 13 00:34:52.897104 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Mar 13 00:34:52.897259 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 32 Mar 13 00:34:52.897363 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 32 Mar 13 00:34:52.897371 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Mar 13 00:34:52.897377 kernel: ACPI: \_SB_.GSIF: Enabled at IRQ 21 Mar 13 00:34:52.897383 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Mar 13 00:34:52.897389 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Mar 13 
00:34:52.897399 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Mar 13 00:34:52.897405 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Mar 13 00:34:52.897411 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Mar 13 00:34:52.897512 kernel: rtc_cmos 00:03: RTC can wake from S4 Mar 13 00:34:52.897520 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Mar 13 00:34:52.897621 kernel: rtc_cmos 00:03: registered as rtc0 Mar 13 00:34:52.897713 kernel: rtc_cmos 00:03: setting system clock to 2026-03-13T00:34:52 UTC (1773362092) Mar 13 00:34:52.897804 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Mar 13 00:34:52.897815 kernel: amd_pstate: The CPPC feature is supported but currently disabled by the BIOS. Please enable it if your BIOS has the CPPC option. Mar 13 00:34:52.897821 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Mar 13 00:34:52.897827 kernel: efifb: probing for efifb Mar 13 00:34:52.897833 kernel: efifb: framebuffer at 0x80000000, using 4000k, total 4000k Mar 13 00:34:52.897838 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 Mar 13 00:34:52.897844 kernel: efifb: scrolling: redraw Mar 13 00:34:52.897850 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Mar 13 00:34:52.897856 kernel: Console: switching to colour frame buffer device 160x50 Mar 13 00:34:52.897864 kernel: fb0: EFI VGA frame buffer device Mar 13 00:34:52.897870 kernel: pstore: Using crash dump compression: deflate Mar 13 00:34:52.897875 kernel: pstore: Registered efi_pstore as persistent store backend Mar 13 00:34:52.897881 kernel: NET: Registered PF_INET6 protocol family Mar 13 00:34:52.897887 kernel: Segment Routing with IPv6 Mar 13 00:34:52.897893 kernel: In-situ OAM (IOAM) with IPv6 Mar 13 00:34:52.897899 kernel: NET: Registered PF_PACKET protocol family Mar 13 00:34:52.897905 kernel: Key type dns_resolver registered Mar 13 00:34:52.897911 
kernel: IPI shorthand broadcast: enabled Mar 13 00:34:52.897920 kernel: sched_clock: Marking stable (2829015200, 268233020)->(3186840810, -89592590) Mar 13 00:34:52.897926 kernel: registered taskstats version 1 Mar 13 00:34:52.897932 kernel: Loading compiled-in X.509 certificates Mar 13 00:34:52.897938 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.74-flatcar: 5aff49df330f42445474818d085d5033fee752d8' Mar 13 00:34:52.897943 kernel: Demotion targets for Node 0: null Mar 13 00:34:52.897949 kernel: Key type .fscrypt registered Mar 13 00:34:52.897955 kernel: Key type fscrypt-provisioning registered Mar 13 00:34:52.897960 kernel: ima: No TPM chip found, activating TPM-bypass! Mar 13 00:34:52.897966 kernel: ima: Allocated hash algorithm: sha1 Mar 13 00:34:52.897974 kernel: ima: No architecture policies found Mar 13 00:34:52.897979 kernel: clk: Disabling unused clocks Mar 13 00:34:52.897985 kernel: Warning: unable to open an initial console. Mar 13 00:34:52.897991 kernel: Freeing unused kernel image (initmem) memory: 46200K Mar 13 00:34:52.897997 kernel: Write protecting the kernel read-only data: 40960k Mar 13 00:34:52.898003 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K Mar 13 00:34:52.898009 kernel: Run /init as init process Mar 13 00:34:52.898014 kernel: with arguments: Mar 13 00:34:52.898021 kernel: /init Mar 13 00:34:52.898029 kernel: with environment: Mar 13 00:34:52.898034 kernel: HOME=/ Mar 13 00:34:52.898040 kernel: TERM=linux Mar 13 00:34:52.898047 systemd[1]: Successfully made /usr/ read-only. 
Mar 13 00:34:52.898055 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Mar 13 00:34:52.898062 systemd[1]: Detected virtualization kvm. Mar 13 00:34:52.898068 systemd[1]: Detected architecture x86-64. Mar 13 00:34:52.898076 systemd[1]: Running in initrd. Mar 13 00:34:52.898082 systemd[1]: No hostname configured, using default hostname. Mar 13 00:34:52.898088 systemd[1]: Hostname set to . Mar 13 00:34:52.898094 systemd[1]: Initializing machine ID from VM UUID. Mar 13 00:34:52.898100 systemd[1]: Queued start job for default target initrd.target. Mar 13 00:34:52.898106 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 13 00:34:52.898112 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 13 00:34:52.898120 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Mar 13 00:34:52.898130 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 13 00:34:52.898192 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Mar 13 00:34:52.898202 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Mar 13 00:34:52.898210 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Mar 13 00:34:52.898216 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Mar 13 00:34:52.898223 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Mar 13 00:34:52.898228 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 13 00:34:52.898238 systemd[1]: Reached target paths.target - Path Units. Mar 13 00:34:52.898245 systemd[1]: Reached target slices.target - Slice Units. Mar 13 00:34:52.898250 systemd[1]: Reached target swap.target - Swaps. Mar 13 00:34:52.898257 systemd[1]: Reached target timers.target - Timer Units. Mar 13 00:34:52.898263 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Mar 13 00:34:52.898269 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 13 00:34:52.898275 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Mar 13 00:34:52.898281 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Mar 13 00:34:52.898287 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 13 00:34:52.898295 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 13 00:34:52.898301 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 13 00:34:52.898307 systemd[1]: Reached target sockets.target - Socket Units. Mar 13 00:34:52.898313 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Mar 13 00:34:52.898319 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 13 00:34:52.898325 systemd[1]: Finished network-cleanup.service - Network Cleanup. Mar 13 00:34:52.898332 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Mar 13 00:34:52.898338 systemd[1]: Starting systemd-fsck-usr.service... Mar 13 00:34:52.898346 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 13 00:34:52.898352 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... 
Mar 13 00:34:52.898358 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 13 00:34:52.898364 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Mar 13 00:34:52.898371 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 13 00:34:52.898402 systemd-journald[197]: Collecting audit messages is disabled. Mar 13 00:34:52.898418 systemd[1]: Finished systemd-fsck-usr.service. Mar 13 00:34:52.898425 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 13 00:34:52.898431 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Mar 13 00:34:52.898440 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 13 00:34:52.898445 kernel: Bridge firewalling registered Mar 13 00:34:52.898452 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 13 00:34:52.898459 systemd-journald[197]: Journal started Mar 13 00:34:52.898473 systemd-journald[197]: Runtime Journal (/run/log/journal/6a54f19cc7264270a6ab022a40ad45b4) is 8M, max 76.1M, 68.1M free. Mar 13 00:34:52.852640 systemd-modules-load[198]: Inserted module 'overlay' Mar 13 00:34:52.886774 systemd-modules-load[198]: Inserted module 'br_netfilter' Mar 13 00:34:52.902348 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 13 00:34:52.906196 systemd[1]: Started systemd-journald.service - Journal Service. Mar 13 00:34:52.905991 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 13 00:34:52.911425 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 13 00:34:52.913068 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Mar 13 00:34:52.933586 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 13 00:34:52.936980 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 13 00:34:52.946819 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Mar 13 00:34:52.949036 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 13 00:34:52.949654 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 13 00:34:52.950596 systemd-tmpfiles[228]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Mar 13 00:34:52.956461 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 13 00:34:52.959494 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 13 00:34:52.968224 dracut-cmdline[232]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=a2116dc4421f78fe124deb19b9ad6d70a0cb4fc0b3349854f4ce4e2904d4925d Mar 13 00:34:53.000597 systemd-resolved[237]: Positive Trust Anchors: Mar 13 00:34:53.001109 systemd-resolved[237]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 13 00:34:53.001131 systemd-resolved[237]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 13 00:34:53.003451 systemd-resolved[237]: Defaulting to hostname 'linux'. Mar 13 00:34:53.005775 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 13 00:34:53.006211 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 13 00:34:53.047178 kernel: SCSI subsystem initialized Mar 13 00:34:53.054160 kernel: Loading iSCSI transport class v2.0-870. Mar 13 00:34:53.063167 kernel: iscsi: registered transport (tcp) Mar 13 00:34:53.079862 kernel: iscsi: registered transport (qla4xxx) Mar 13 00:34:53.079898 kernel: QLogic iSCSI HBA Driver Mar 13 00:34:53.095342 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Mar 13 00:34:53.107905 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Mar 13 00:34:53.109333 systemd[1]: Reached target network-pre.target - Preparation for Network. Mar 13 00:34:53.154105 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Mar 13 00:34:53.155612 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... 
Mar 13 00:34:53.209180 kernel: raid6: avx512x4 gen() 37044 MB/s Mar 13 00:34:53.227188 kernel: raid6: avx512x2 gen() 36345 MB/s Mar 13 00:34:53.245212 kernel: raid6: avx512x1 gen() 32652 MB/s Mar 13 00:34:53.263176 kernel: raid6: avx2x4 gen() 39607 MB/s Mar 13 00:34:53.281173 kernel: raid6: avx2x2 gen() 49390 MB/s Mar 13 00:34:53.300260 kernel: raid6: avx2x1 gen() 38576 MB/s Mar 13 00:34:53.300315 kernel: raid6: using algorithm avx2x2 gen() 49390 MB/s Mar 13 00:34:53.320391 kernel: raid6: .... xor() 36916 MB/s, rmw enabled Mar 13 00:34:53.320454 kernel: raid6: using avx512x2 recovery algorithm Mar 13 00:34:53.337176 kernel: xor: automatically using best checksumming function avx Mar 13 00:34:53.453173 kernel: Btrfs loaded, zoned=no, fsverity=no Mar 13 00:34:53.463586 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Mar 13 00:34:53.465274 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 13 00:34:53.494640 systemd-udevd[447]: Using default interface naming scheme 'v255'. Mar 13 00:34:53.501177 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 13 00:34:53.503683 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Mar 13 00:34:53.533617 dracut-pre-trigger[452]: rd.md=0: removing MD RAID activation Mar 13 00:34:53.555340 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Mar 13 00:34:53.556687 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 13 00:34:53.630385 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 13 00:34:53.632249 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Mar 13 00:34:53.725165 kernel: cryptd: max_cpu_qlen set to 1000 Mar 13 00:34:53.730312 kernel: virtio_scsi virtio5: 2/0/0 default/read/poll queues Mar 13 00:34:53.747185 kernel: AES CTR mode by8 optimization enabled Mar 13 00:34:53.754165 kernel: scsi host0: Virtio SCSI HBA Mar 13 00:34:53.769223 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Mar 13 00:34:53.802422 kernel: ACPI: bus type USB registered Mar 13 00:34:53.802643 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Mar 13 00:34:53.802696 kernel: usbcore: registered new interface driver usbfs Mar 13 00:34:53.770332 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 13 00:34:53.809301 kernel: usbcore: registered new interface driver hub Mar 13 00:34:53.809357 kernel: usbcore: registered new device driver usb Mar 13 00:34:53.809369 kernel: libata version 3.00 loaded. Mar 13 00:34:53.801754 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 13 00:34:53.803368 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Mar 13 00:34:53.810254 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 13 00:34:53.828965 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 13 00:34:53.830335 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 13 00:34:53.834221 kernel: sd 0:0:0:0: Power-on or device reset occurred Mar 13 00:34:53.837764 kernel: sd 0:0:0:0: [sda] 160006144 512-byte logical blocks: (81.9 GB/76.3 GiB) Mar 13 00:34:53.837446 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 13 00:34:53.841881 kernel: sd 0:0:0:0: [sda] Write Protect is off Mar 13 00:34:53.842035 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08 Mar 13 00:34:53.841637 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. 
Mar 13 00:34:53.846805 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Mar 13 00:34:53.860628 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Mar 13 00:34:53.860677 kernel: GPT:17805311 != 160006143 Mar 13 00:34:53.860686 kernel: GPT:Alternate GPT header not at the end of the disk. Mar 13 00:34:53.860695 kernel: GPT:17805311 != 160006143 Mar 13 00:34:53.860703 kernel: GPT: Use GNU Parted to correct GPT errors. Mar 13 00:34:53.861025 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 13 00:34:53.861353 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Mar 13 00:34:53.870492 kernel: ahci 0000:00:1f.2: version 3.0 Mar 13 00:34:53.870700 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Mar 13 00:34:53.878465 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Mar 13 00:34:53.878667 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Mar 13 00:34:53.878789 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Mar 13 00:34:53.878906 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Mar 13 00:34:53.879015 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Mar 13 00:34:53.884539 kernel: scsi host1: ahci Mar 13 00:34:53.884587 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Mar 13 00:34:53.889012 kernel: scsi host2: ahci Mar 13 00:34:53.889055 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Mar 13 00:34:53.890318 kernel: scsi host3: ahci Mar 13 00:34:53.890353 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Mar 13 00:34:53.892200 kernel: scsi host4: ahci Mar 13 00:34:53.892233 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Mar 13 00:34:53.895490 kernel: scsi host5: ahci Mar 13 00:34:53.895543 kernel: hub 1-0:1.0: USB hub found Mar 13 00:34:53.896789 kernel: scsi host6: ahci Mar 13 00:34:53.896818 kernel: hub 1-0:1.0: 4 ports detected Mar 13 
00:34:53.904085 kernel: ata1: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380100 irq 48 lpm-pol 1 Mar 13 00:34:53.904126 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Mar 13 00:34:53.904329 kernel: ata2: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380180 irq 48 lpm-pol 1 Mar 13 00:34:53.904339 kernel: hub 2-0:1.0: USB hub found Mar 13 00:34:53.904473 kernel: ata3: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380200 irq 48 lpm-pol 1 Mar 13 00:34:53.904482 kernel: hub 2-0:1.0: 4 ports detected Mar 13 00:34:53.904612 kernel: ata4: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380280 irq 48 lpm-pol 1 Mar 13 00:34:53.930263 kernel: ata5: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380300 irq 48 lpm-pol 1 Mar 13 00:34:53.930306 kernel: ata6: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380380 irq 48 lpm-pol 1 Mar 13 00:34:53.935732 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 13 00:34:53.946871 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Mar 13 00:34:53.954371 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Mar 13 00:34:53.970102 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Mar 13 00:34:53.975931 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Mar 13 00:34:53.976687 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Mar 13 00:34:53.978678 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Mar 13 00:34:53.992518 disk-uuid[655]: Primary Header is updated. Mar 13 00:34:53.992518 disk-uuid[655]: Secondary Entries is updated. Mar 13 00:34:53.992518 disk-uuid[655]: Secondary Header is updated. 
Mar 13 00:34:54.008164 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 13 00:34:54.146173 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Mar 13 00:34:54.244823 kernel: ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Mar 13 00:34:54.244922 kernel: ata1.00: LPM support broken, forcing max_power Mar 13 00:34:54.244946 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Mar 13 00:34:54.249852 kernel: ata1.00: applying bridge limits Mar 13 00:34:54.257888 kernel: ata3: SATA link down (SStatus 0 SControl 300) Mar 13 00:34:54.257946 kernel: ata1.00: LPM support broken, forcing max_power Mar 13 00:34:54.262709 kernel: ata1.00: configured for UDMA/100 Mar 13 00:34:54.270543 kernel: ata2: SATA link down (SStatus 0 SControl 300) Mar 13 00:34:54.275181 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Mar 13 00:34:54.275298 kernel: ata4: SATA link down (SStatus 0 SControl 300) Mar 13 00:34:54.288179 kernel: ata6: SATA link down (SStatus 0 SControl 300) Mar 13 00:34:54.293184 kernel: ata5: SATA link down (SStatus 0 SControl 300) Mar 13 00:34:54.299165 kernel: hid: raw HID events driver (C) Jiri Kosina Mar 13 00:34:54.310468 kernel: usbcore: registered new interface driver usbhid Mar 13 00:34:54.310501 kernel: usbhid: USB HID core driver Mar 13 00:34:54.317570 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input4 Mar 13 00:34:54.317636 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Mar 13 00:34:54.331064 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Mar 13 00:34:54.331298 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Mar 13 00:34:54.345224 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0 Mar 13 00:34:54.700292 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. 
Mar 13 00:34:54.703064 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Mar 13 00:34:54.704029 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 13 00:34:54.705756 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 13 00:34:54.708978 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Mar 13 00:34:54.761833 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Mar 13 00:34:55.031840 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 13 00:34:55.033497 disk-uuid[656]: The operation has completed successfully. Mar 13 00:34:55.105915 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 13 00:34:55.106056 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Mar 13 00:34:55.133427 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Mar 13 00:34:55.164357 sh[689]: Success Mar 13 00:34:55.196307 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Mar 13 00:34:55.196371 kernel: device-mapper: uevent: version 1.0.3 Mar 13 00:34:55.200111 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Mar 13 00:34:55.214271 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Mar 13 00:34:55.270212 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Mar 13 00:34:55.271684 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Mar 13 00:34:55.278586 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Mar 13 00:34:55.290267 kernel: BTRFS: device fsid 503642f8-c59c-4168-97a8-9c3603183fa3 devid 1 transid 37 /dev/mapper/usr (254:0) scanned by mount (701) Mar 13 00:34:55.293402 kernel: BTRFS info (device dm-0): first mount of filesystem 503642f8-c59c-4168-97a8-9c3603183fa3 Mar 13 00:34:55.293453 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Mar 13 00:34:55.305009 kernel: BTRFS info (device dm-0 state E): enabling ssd optimizations Mar 13 00:34:55.305044 kernel: BTRFS info (device dm-0 state E): disabling log replay at mount time Mar 13 00:34:55.305054 kernel: BTRFS info (device dm-0 state E): enabling free space tree Mar 13 00:34:55.308445 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Mar 13 00:34:55.309264 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Mar 13 00:34:55.309725 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Mar 13 00:34:55.310373 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Mar 13 00:34:55.312778 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Mar 13 00:34:55.337176 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (732) Mar 13 00:34:55.342384 kernel: BTRFS info (device sda6): first mount of filesystem 451985e5-e916-48b1-8100-483c174d7b52 Mar 13 00:34:55.342408 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Mar 13 00:34:55.350356 kernel: BTRFS info (device sda6): enabling ssd optimizations Mar 13 00:34:55.350380 kernel: BTRFS info (device sda6): turning on async discard Mar 13 00:34:55.350390 kernel: BTRFS info (device sda6): enabling free space tree Mar 13 00:34:55.359193 kernel: BTRFS info (device sda6): last unmount of filesystem 451985e5-e916-48b1-8100-483c174d7b52 Mar 13 00:34:55.360589 systemd[1]: Finished ignition-setup.service - Ignition (setup). Mar 13 00:34:55.363265 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Mar 13 00:34:55.449514 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 13 00:34:55.453511 ignition[795]: Ignition 2.22.0 Mar 13 00:34:55.454059 ignition[795]: Stage: fetch-offline Mar 13 00:34:55.454498 ignition[795]: no configs at "/usr/lib/ignition/base.d" Mar 13 00:34:55.454311 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 13 00:34:55.454507 ignition[795]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Mar 13 00:34:55.454583 ignition[795]: parsed url from cmdline: "" Mar 13 00:34:55.454587 ignition[795]: no config URL provided Mar 13 00:34:55.454592 ignition[795]: reading system config file "/usr/lib/ignition/user.ign" Mar 13 00:34:55.457831 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Mar 13 00:34:55.454599 ignition[795]: no config at "/usr/lib/ignition/user.ign" Mar 13 00:34:55.454604 ignition[795]: failed to fetch config: resource requires networking Mar 13 00:34:55.454715 ignition[795]: Ignition finished successfully Mar 13 00:34:55.483620 systemd-networkd[874]: lo: Link UP Mar 13 00:34:55.483629 systemd-networkd[874]: lo: Gained carrier Mar 13 00:34:55.485902 systemd-networkd[874]: Enumeration completed Mar 13 00:34:55.486069 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 13 00:34:55.486577 systemd[1]: Reached target network.target - Network. Mar 13 00:34:55.487468 systemd-networkd[874]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 13 00:34:55.487472 systemd-networkd[874]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 13 00:34:55.487866 systemd-networkd[874]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 13 00:34:55.487870 systemd-networkd[874]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 13 00:34:55.488349 systemd-networkd[874]: eth0: Link UP Mar 13 00:34:55.488485 systemd-networkd[874]: eth1: Link UP Mar 13 00:34:55.489656 systemd-networkd[874]: eth0: Gained carrier Mar 13 00:34:55.489665 systemd-networkd[874]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 13 00:34:55.489965 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Mar 13 00:34:55.494061 systemd-networkd[874]: eth1: Gained carrier Mar 13 00:34:55.494071 systemd-networkd[874]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Mar 13 00:34:55.517765 ignition[878]: Ignition 2.22.0 Mar 13 00:34:55.517777 ignition[878]: Stage: fetch Mar 13 00:34:55.517883 ignition[878]: no configs at "/usr/lib/ignition/base.d" Mar 13 00:34:55.517892 ignition[878]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Mar 13 00:34:55.517950 ignition[878]: parsed url from cmdline: "" Mar 13 00:34:55.517953 ignition[878]: no config URL provided Mar 13 00:34:55.517958 ignition[878]: reading system config file "/usr/lib/ignition/user.ign" Mar 13 00:34:55.517965 ignition[878]: no config at "/usr/lib/ignition/user.ign" Mar 13 00:34:55.517986 ignition[878]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1 Mar 13 00:34:55.518120 ignition[878]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable Mar 13 00:34:55.530197 systemd-networkd[874]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1 Mar 13 00:34:55.550184 systemd-networkd[874]: eth0: DHCPv4 address 89.167.5.55/32, gateway 172.31.1.1 acquired from 172.31.1.1 Mar 13 00:34:55.718393 ignition[878]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2 Mar 13 00:34:55.727621 ignition[878]: GET result: OK Mar 13 00:34:55.727713 ignition[878]: parsing config with SHA512: df76087c41575aaec4f34bb0f79227c7f54b2303d51d5376a48f9aa1d838eeda4c2df9a90ca04e839f87f3de502fa4a6c1528e2d393626b206238da02902a293 Mar 13 00:34:55.734281 unknown[878]: fetched base config from "system" Mar 13 00:34:55.734304 unknown[878]: fetched base config from "system" Mar 13 00:34:55.734759 ignition[878]: fetch: fetch complete Mar 13 00:34:55.734315 unknown[878]: fetched user config from "hetzner" Mar 13 00:34:55.734770 ignition[878]: fetch: fetch passed Mar 13 00:34:55.734846 ignition[878]: Ignition finished successfully Mar 13 00:34:55.740301 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Mar 13 00:34:55.743668 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Mar 13 00:34:55.797714 ignition[886]: Ignition 2.22.0 Mar 13 00:34:55.797733 ignition[886]: Stage: kargs Mar 13 00:34:55.797916 ignition[886]: no configs at "/usr/lib/ignition/base.d" Mar 13 00:34:55.797933 ignition[886]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Mar 13 00:34:55.798835 ignition[886]: kargs: kargs passed Mar 13 00:34:55.802632 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Mar 13 00:34:55.798906 ignition[886]: Ignition finished successfully Mar 13 00:34:55.806626 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Mar 13 00:34:55.850878 ignition[893]: Ignition 2.22.0 Mar 13 00:34:55.850898 ignition[893]: Stage: disks Mar 13 00:34:55.851066 ignition[893]: no configs at "/usr/lib/ignition/base.d" Mar 13 00:34:55.851082 ignition[893]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Mar 13 00:34:55.856680 systemd[1]: Finished ignition-disks.service - Ignition (disks). Mar 13 00:34:55.852201 ignition[893]: disks: disks passed Mar 13 00:34:55.859020 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Mar 13 00:34:55.852260 ignition[893]: Ignition finished successfully Mar 13 00:34:55.860104 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Mar 13 00:34:55.862059 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 13 00:34:55.863859 systemd[1]: Reached target sysinit.target - System Initialization. Mar 13 00:34:55.865563 systemd[1]: Reached target basic.target - Basic System. Mar 13 00:34:55.868641 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Mar 13 00:34:55.903312 systemd-fsck[902]: ROOT: clean, 15/1628000 files, 120826/1617920 blocks Mar 13 00:34:55.908499 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Mar 13 00:34:55.913031 systemd[1]: Mounting sysroot.mount - /sysroot... 
Mar 13 00:34:56.018166 kernel: EXT4-fs (sda9): mounted filesystem 26348f72-0225-4c06-aedc-823e61beebc6 r/w with ordered data mode. Quota mode: none. Mar 13 00:34:56.019907 systemd[1]: Mounted sysroot.mount - /sysroot. Mar 13 00:34:56.021491 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Mar 13 00:34:56.024474 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 13 00:34:56.027686 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Mar 13 00:34:56.034399 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Mar 13 00:34:56.036136 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Mar 13 00:34:56.036236 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Mar 13 00:34:56.039559 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Mar 13 00:34:56.043250 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Mar 13 00:34:56.051340 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (910) Mar 13 00:34:56.064561 kernel: BTRFS info (device sda6): first mount of filesystem 451985e5-e916-48b1-8100-483c174d7b52 Mar 13 00:34:56.064635 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Mar 13 00:34:56.085901 kernel: BTRFS info (device sda6): enabling ssd optimizations Mar 13 00:34:56.085966 kernel: BTRFS info (device sda6): turning on async discard Mar 13 00:34:56.085989 kernel: BTRFS info (device sda6): enabling free space tree Mar 13 00:34:56.089111 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Mar 13 00:34:56.109591 coreos-metadata[912]: Mar 13 00:34:56.109 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1 Mar 13 00:34:56.110808 coreos-metadata[912]: Mar 13 00:34:56.110 INFO Fetch successful Mar 13 00:34:56.111847 coreos-metadata[912]: Mar 13 00:34:56.111 INFO wrote hostname ci-4459-2-4-n-a4844b4806 to /sysroot/etc/hostname Mar 13 00:34:56.113601 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Mar 13 00:34:56.121783 initrd-setup-root[938]: cut: /sysroot/etc/passwd: No such file or directory Mar 13 00:34:56.127182 initrd-setup-root[945]: cut: /sysroot/etc/group: No such file or directory Mar 13 00:34:56.131060 initrd-setup-root[952]: cut: /sysroot/etc/shadow: No such file or directory Mar 13 00:34:56.135192 initrd-setup-root[959]: cut: /sysroot/etc/gshadow: No such file or directory Mar 13 00:34:56.221842 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Mar 13 00:34:56.223207 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Mar 13 00:34:56.224458 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Mar 13 00:34:56.246174 kernel: BTRFS info (device sda6): last unmount of filesystem 451985e5-e916-48b1-8100-483c174d7b52 Mar 13 00:34:56.257272 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Mar 13 00:34:56.270494 ignition[1028]: INFO : Ignition 2.22.0 Mar 13 00:34:56.270494 ignition[1028]: INFO : Stage: mount Mar 13 00:34:56.271435 ignition[1028]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 13 00:34:56.271435 ignition[1028]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Mar 13 00:34:56.271435 ignition[1028]: INFO : mount: mount passed Mar 13 00:34:56.271435 ignition[1028]: INFO : Ignition finished successfully Mar 13 00:34:56.272838 systemd[1]: Finished ignition-mount.service - Ignition (mount). Mar 13 00:34:56.273988 systemd[1]: Starting ignition-files.service - Ignition (files)... 
Mar 13 00:34:56.287972 systemd[1]: sysroot-oem.mount: Deactivated successfully. Mar 13 00:34:56.294495 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 13 00:34:56.314180 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (1038) Mar 13 00:34:56.314211 kernel: BTRFS info (device sda6): first mount of filesystem 451985e5-e916-48b1-8100-483c174d7b52 Mar 13 00:34:56.318691 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Mar 13 00:34:56.325249 kernel: BTRFS info (device sda6): enabling ssd optimizations Mar 13 00:34:56.325275 kernel: BTRFS info (device sda6): turning on async discard Mar 13 00:34:56.325285 kernel: BTRFS info (device sda6): enabling free space tree Mar 13 00:34:56.328755 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Mar 13 00:34:56.351967 ignition[1054]: INFO : Ignition 2.22.0 Mar 13 00:34:56.351967 ignition[1054]: INFO : Stage: files Mar 13 00:34:56.352778 ignition[1054]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 13 00:34:56.352778 ignition[1054]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Mar 13 00:34:56.352778 ignition[1054]: DEBUG : files: compiled without relabeling support, skipping Mar 13 00:34:56.353747 ignition[1054]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Mar 13 00:34:56.353747 ignition[1054]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Mar 13 00:34:56.355221 ignition[1054]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Mar 13 00:34:56.355687 ignition[1054]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Mar 13 00:34:56.356376 unknown[1054]: wrote ssh authorized keys file for user: core Mar 13 00:34:56.356886 ignition[1054]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Mar 13 00:34:56.359107 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: 
op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Mar 13 00:34:56.359107 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Mar 13 00:34:56.620471 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Mar 13 00:34:56.930359 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Mar 13 00:34:56.930359 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Mar 13 00:34:56.933006 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Mar 13 00:34:57.204426 systemd-networkd[874]: eth1: Gained IPv6LL Mar 13 00:34:57.217486 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Mar 13 00:34:57.268851 systemd-networkd[874]: eth0: Gained IPv6LL Mar 13 00:34:57.334174 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Mar 13 00:34:57.335337 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Mar 13 00:34:57.335337 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Mar 13 00:34:57.335337 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Mar 13 00:34:57.335337 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Mar 13 00:34:57.335337 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] 
writing file "/sysroot/home/core/nfs-pod.yaml" Mar 13 00:34:57.335337 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 13 00:34:57.335337 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 13 00:34:57.335337 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 13 00:34:57.342428 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Mar 13 00:34:57.342428 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Mar 13 00:34:57.342428 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw" Mar 13 00:34:57.342428 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw" Mar 13 00:34:57.342428 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw" Mar 13 00:34:57.342428 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.4-x86-64.raw: attempt #1 Mar 13 00:34:57.679641 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Mar 13 00:34:57.945684 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw" Mar 13 00:34:57.945684 ignition[1054]: INFO : files: 
op(c): [started] processing unit "prepare-helm.service" Mar 13 00:34:57.948390 ignition[1054]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 13 00:34:57.951271 ignition[1054]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 13 00:34:57.951271 ignition[1054]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Mar 13 00:34:57.951271 ignition[1054]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Mar 13 00:34:57.953149 ignition[1054]: INFO : files: op(e): op(f): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Mar 13 00:34:57.953149 ignition[1054]: INFO : files: op(e): op(f): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Mar 13 00:34:57.953149 ignition[1054]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Mar 13 00:34:57.953149 ignition[1054]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Mar 13 00:34:57.953149 ignition[1054]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Mar 13 00:34:57.953149 ignition[1054]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Mar 13 00:34:57.953149 ignition[1054]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Mar 13 00:34:57.953149 ignition[1054]: INFO : files: files passed Mar 13 00:34:57.953149 ignition[1054]: INFO : Ignition finished successfully Mar 13 00:34:57.954826 systemd[1]: Finished ignition-files.service - Ignition (files). 
Mar 13 00:34:57.956651 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Mar 13 00:34:57.960245 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Mar 13 00:34:57.977851 systemd[1]: ignition-quench.service: Deactivated successfully. Mar 13 00:34:57.977977 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Mar 13 00:34:57.986363 initrd-setup-root-after-ignition[1085]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 13 00:34:57.986363 initrd-setup-root-after-ignition[1085]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Mar 13 00:34:57.987697 initrd-setup-root-after-ignition[1089]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 13 00:34:57.988905 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 13 00:34:57.989948 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Mar 13 00:34:57.991502 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Mar 13 00:34:58.034901 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Mar 13 00:34:58.035000 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Mar 13 00:34:58.035930 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Mar 13 00:34:58.036574 systemd[1]: Reached target initrd.target - Initrd Default Target. Mar 13 00:34:58.037444 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Mar 13 00:34:58.039254 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Mar 13 00:34:58.076338 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 13 00:34:58.078885 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... 
Mar 13 00:34:58.097313 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Mar 13 00:34:58.098698 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 13 00:34:58.099964 systemd[1]: Stopped target timers.target - Timer Units. Mar 13 00:34:58.101196 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Mar 13 00:34:58.101312 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 13 00:34:58.102471 systemd[1]: Stopped target initrd.target - Initrd Default Target. Mar 13 00:34:58.103477 systemd[1]: Stopped target basic.target - Basic System. Mar 13 00:34:58.104401 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Mar 13 00:34:58.105329 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Mar 13 00:34:58.106230 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Mar 13 00:34:58.107129 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Mar 13 00:34:58.108081 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Mar 13 00:34:58.108990 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Mar 13 00:34:58.109895 systemd[1]: Stopped target sysinit.target - System Initialization. Mar 13 00:34:58.110864 systemd[1]: Stopped target local-fs.target - Local File Systems. Mar 13 00:34:58.111788 systemd[1]: Stopped target swap.target - Swaps. Mar 13 00:34:58.112709 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 13 00:34:58.112842 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Mar 13 00:34:58.114197 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Mar 13 00:34:58.115202 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 13 00:34:58.116025 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. 
Mar 13 00:34:58.116124 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 13 00:34:58.116951 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 13 00:34:58.117073 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Mar 13 00:34:58.118243 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Mar 13 00:34:58.118350 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 13 00:34:58.119220 systemd[1]: ignition-files.service: Deactivated successfully. Mar 13 00:34:58.119358 systemd[1]: Stopped ignition-files.service - Ignition (files). Mar 13 00:34:58.120083 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Mar 13 00:34:58.120195 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Mar 13 00:34:58.123228 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Mar 13 00:34:58.123775 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Mar 13 00:34:58.123914 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Mar 13 00:34:58.127286 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Mar 13 00:34:58.127886 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Mar 13 00:34:58.128021 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Mar 13 00:34:58.129288 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Mar 13 00:34:58.129414 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Mar 13 00:34:58.138267 systemd[1]: initrd-cleanup.service: Deactivated successfully. Mar 13 00:34:58.138382 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Mar 13 00:34:58.152998 systemd[1]: sysroot-boot.mount: Deactivated successfully. 
Mar 13 00:34:58.156408 systemd[1]: sysroot-boot.service: Deactivated successfully. Mar 13 00:34:58.156537 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Mar 13 00:34:58.158746 ignition[1109]: INFO : Ignition 2.22.0 Mar 13 00:34:58.158746 ignition[1109]: INFO : Stage: umount Mar 13 00:34:58.160623 ignition[1109]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 13 00:34:58.160623 ignition[1109]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Mar 13 00:34:58.160623 ignition[1109]: INFO : umount: umount passed Mar 13 00:34:58.160623 ignition[1109]: INFO : Ignition finished successfully Mar 13 00:34:58.161428 systemd[1]: ignition-mount.service: Deactivated successfully. Mar 13 00:34:58.161537 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Mar 13 00:34:58.163030 systemd[1]: ignition-disks.service: Deactivated successfully. Mar 13 00:34:58.163084 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Mar 13 00:34:58.163826 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 13 00:34:58.163867 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Mar 13 00:34:58.164573 systemd[1]: ignition-fetch.service: Deactivated successfully. Mar 13 00:34:58.164616 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Mar 13 00:34:58.165208 systemd[1]: Stopped target network.target - Network. Mar 13 00:34:58.165827 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 13 00:34:58.165868 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Mar 13 00:34:58.166559 systemd[1]: Stopped target paths.target - Path Units. Mar 13 00:34:58.167168 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Mar 13 00:34:58.171198 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 13 00:34:58.171595 systemd[1]: Stopped target slices.target - Slice Units. 
Mar 13 00:34:58.172228 systemd[1]: Stopped target sockets.target - Socket Units. Mar 13 00:34:58.172960 systemd[1]: iscsid.socket: Deactivated successfully. Mar 13 00:34:58.172998 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Mar 13 00:34:58.173718 systemd[1]: iscsiuio.socket: Deactivated successfully. Mar 13 00:34:58.173751 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 13 00:34:58.174473 systemd[1]: ignition-setup.service: Deactivated successfully. Mar 13 00:34:58.174543 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Mar 13 00:34:58.175119 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Mar 13 00:34:58.175168 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Mar 13 00:34:58.175786 systemd[1]: initrd-setup-root.service: Deactivated successfully. Mar 13 00:34:58.175827 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Mar 13 00:34:58.176499 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Mar 13 00:34:58.177108 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Mar 13 00:34:58.183028 systemd[1]: systemd-resolved.service: Deactivated successfully. Mar 13 00:34:58.183154 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Mar 13 00:34:58.185995 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Mar 13 00:34:58.186321 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Mar 13 00:34:58.186360 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 13 00:34:58.187673 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Mar 13 00:34:58.189710 systemd[1]: systemd-networkd.service: Deactivated successfully. Mar 13 00:34:58.189814 systemd[1]: Stopped systemd-networkd.service - Network Configuration. 
Mar 13 00:34:58.191278 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Mar 13 00:34:58.191407 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Mar 13 00:34:58.191996 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 13 00:34:58.192025 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 13 00:34:58.194254 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 13 00:34:58.194920 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 13 00:34:58.195323 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 13 00:34:58.196023 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 13 00:34:58.196432 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 13 00:34:58.197247 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 13 00:34:58.197294 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 13 00:34:58.197999 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 13 00:34:58.201178 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Mar 13 00:34:58.213501 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 13 00:34:58.213669 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 13 00:34:58.215792 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 13 00:34:58.215855 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 13 00:34:58.216296 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 13 00:34:58.216326 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 13 00:34:58.217252 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 13 00:34:58.217295 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 13 00:34:58.218337 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 13 00:34:58.218375 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 13 00:34:58.219402 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 13 00:34:58.219447 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 13 00:34:58.221035 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 13 00:34:58.222212 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Mar 13 00:34:58.222269 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Mar 13 00:34:58.223744 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 13 00:34:58.223783 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 13 00:34:58.224644 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 13 00:34:58.224688 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 13 00:34:58.226434 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 13 00:34:58.229265 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 13 00:34:58.235364 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 13 00:34:58.235466 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 13 00:34:58.236619 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 13 00:34:58.237707 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 13 00:34:58.249746 systemd[1]: Switching root.
Mar 13 00:34:58.282419 systemd-journald[197]: Journal stopped
Mar 13 00:34:59.452418 systemd-journald[197]: Received SIGTERM from PID 1 (systemd).
Mar 13 00:34:59.452483 kernel: SELinux: policy capability network_peer_controls=1
Mar 13 00:34:59.452498 kernel: SELinux: policy capability open_perms=1
Mar 13 00:34:59.452521 kernel: SELinux: policy capability extended_socket_class=1
Mar 13 00:34:59.452532 kernel: SELinux: policy capability always_check_network=0
Mar 13 00:34:59.452541 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 13 00:34:59.452549 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 13 00:34:59.452558 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 13 00:34:59.452567 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 13 00:34:59.452575 kernel: SELinux: policy capability userspace_initial_context=0
Mar 13 00:34:59.452586 kernel: audit: type=1403 audit(1773362098.488:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 13 00:34:59.452595 systemd[1]: Successfully loaded SELinux policy in 77.191ms.
Mar 13 00:34:59.452616 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 7.012ms.
Mar 13 00:34:59.452625 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 13 00:34:59.452635 systemd[1]: Detected virtualization kvm.
Mar 13 00:34:59.452644 systemd[1]: Detected architecture x86-64.
Mar 13 00:34:59.452652 systemd[1]: Detected first boot.
Mar 13 00:34:59.452661 systemd[1]: Hostname set to .
Mar 13 00:34:59.452672 systemd[1]: Initializing machine ID from VM UUID.
Mar 13 00:34:59.452681 zram_generator::config[1153]: No configuration found.
Mar 13 00:34:59.452691 kernel: Guest personality initialized and is inactive
Mar 13 00:34:59.452700 kernel: VMCI host device registered (name=vmci, major=10, minor=258)
Mar 13 00:34:59.452710 kernel: Initialized host personality
Mar 13 00:34:59.452718 kernel: NET: Registered PF_VSOCK protocol family
Mar 13 00:34:59.452727 systemd[1]: Populated /etc with preset unit settings.
Mar 13 00:34:59.452736 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Mar 13 00:34:59.452745 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 13 00:34:59.452755 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 13 00:34:59.452767 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 13 00:34:59.452776 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 13 00:34:59.452784 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 13 00:34:59.452793 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 13 00:34:59.452802 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 13 00:34:59.452811 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 13 00:34:59.452822 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 13 00:34:59.452831 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 13 00:34:59.452840 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 13 00:34:59.452848 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 13 00:34:59.452858 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 13 00:34:59.452866 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 13 00:34:59.452876 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 13 00:34:59.452886 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 13 00:34:59.452895 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 13 00:34:59.452905 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Mar 13 00:34:59.452914 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 13 00:34:59.452923 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 13 00:34:59.452932 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 13 00:34:59.452941 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 13 00:34:59.452950 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 13 00:34:59.452960 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 13 00:34:59.452969 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 13 00:34:59.452978 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 13 00:34:59.452987 systemd[1]: Reached target slices.target - Slice Units.
Mar 13 00:34:59.453000 systemd[1]: Reached target swap.target - Swaps.
Mar 13 00:34:59.453008 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 13 00:34:59.453017 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 13 00:34:59.453026 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Mar 13 00:34:59.453035 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 13 00:34:59.453043 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 13 00:34:59.453054 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 13 00:34:59.453063 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 13 00:34:59.453073 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 13 00:34:59.453081 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 13 00:34:59.453090 systemd[1]: Mounting media.mount - External Media Directory...
Mar 13 00:34:59.453100 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 13 00:34:59.453109 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 13 00:34:59.453118 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 13 00:34:59.453128 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 13 00:34:59.453137 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 13 00:34:59.453412 systemd[1]: Reached target machines.target - Containers.
Mar 13 00:34:59.453423 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 13 00:34:59.453432 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 13 00:34:59.453441 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 13 00:34:59.453451 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 13 00:34:59.453460 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 13 00:34:59.453469 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 13 00:34:59.453480 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 13 00:34:59.453489 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 13 00:34:59.453498 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 13 00:34:59.453517 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 13 00:34:59.453526 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 13 00:34:59.453534 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 13 00:34:59.453543 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 13 00:34:59.453552 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 13 00:34:59.453563 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 13 00:34:59.453572 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 13 00:34:59.453585 kernel: fuse: init (API version 7.41)
Mar 13 00:34:59.453594 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 13 00:34:59.453603 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 13 00:34:59.453619 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 13 00:34:59.453628 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Mar 13 00:34:59.453639 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 13 00:34:59.453648 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 13 00:34:59.453657 systemd[1]: Stopped verity-setup.service.
Mar 13 00:34:59.453666 kernel: loop: module loaded
Mar 13 00:34:59.453676 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 13 00:34:59.453685 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 13 00:34:59.453694 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 13 00:34:59.453722 systemd-journald[1234]: Collecting audit messages is disabled.
Mar 13 00:34:59.453741 systemd[1]: Mounted media.mount - External Media Directory.
Mar 13 00:34:59.453750 kernel: ACPI: bus type drm_connector registered
Mar 13 00:34:59.453758 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 13 00:34:59.453770 systemd-journald[1234]: Journal started
Mar 13 00:34:59.453786 systemd-journald[1234]: Runtime Journal (/run/log/journal/6a54f19cc7264270a6ab022a40ad45b4) is 8M, max 76.1M, 68.1M free.
Mar 13 00:34:59.109766 systemd[1]: Queued start job for default target multi-user.target.
Mar 13 00:34:59.137359 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Mar 13 00:34:59.137969 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 13 00:34:59.459204 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 13 00:34:59.461975 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 13 00:34:59.462450 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 13 00:34:59.463157 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 13 00:34:59.463955 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 13 00:34:59.465589 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 13 00:34:59.465753 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 13 00:34:59.466381 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 13 00:34:59.466580 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 13 00:34:59.468471 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 13 00:34:59.468698 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 13 00:34:59.469306 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 13 00:34:59.469452 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 13 00:34:59.470084 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 13 00:34:59.470451 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 13 00:34:59.471125 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 13 00:34:59.471333 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 13 00:34:59.472023 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 13 00:34:59.472797 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 13 00:34:59.473443 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 13 00:34:59.474115 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Mar 13 00:34:59.483180 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 13 00:34:59.487217 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 13 00:34:59.490205 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 13 00:34:59.491195 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 13 00:34:59.491216 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 13 00:34:59.492888 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Mar 13 00:34:59.499231 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 13 00:34:59.499687 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 13 00:34:59.502289 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 13 00:34:59.504473 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 13 00:34:59.505122 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 13 00:34:59.506233 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 13 00:34:59.506606 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 13 00:34:59.511291 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 13 00:34:59.514175 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 13 00:34:59.515726 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 13 00:34:59.518888 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 13 00:34:59.520226 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 13 00:34:59.536478 systemd-journald[1234]: Time spent on flushing to /var/log/journal/6a54f19cc7264270a6ab022a40ad45b4 is 24.781ms for 1245 entries.
Mar 13 00:34:59.536478 systemd-journald[1234]: System Journal (/var/log/journal/6a54f19cc7264270a6ab022a40ad45b4) is 8M, max 584.8M, 576.8M free.
Mar 13 00:34:59.586464 systemd-journald[1234]: Received client request to flush runtime journal.
Mar 13 00:34:59.586539 kernel: loop0: detected capacity change from 0 to 110984
Mar 13 00:34:59.547811 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 13 00:34:59.548323 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 13 00:34:59.549997 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Mar 13 00:34:59.598325 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 13 00:34:59.587926 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 13 00:34:59.597028 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 13 00:34:59.612063 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Mar 13 00:34:59.622538 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 13 00:34:59.624259 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 13 00:34:59.631192 kernel: loop1: detected capacity change from 0 to 219192
Mar 13 00:34:59.669702 systemd-tmpfiles[1295]: ACLs are not supported, ignoring.
Mar 13 00:34:59.670075 systemd-tmpfiles[1295]: ACLs are not supported, ignoring.
Mar 13 00:34:59.672226 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 13 00:34:59.682252 kernel: loop2: detected capacity change from 0 to 8
Mar 13 00:34:59.682549 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 13 00:34:59.701170 kernel: loop3: detected capacity change from 0 to 128560
Mar 13 00:34:59.749162 kernel: loop4: detected capacity change from 0 to 110984
Mar 13 00:34:59.766169 kernel: loop5: detected capacity change from 0 to 219192
Mar 13 00:34:59.789189 kernel: loop6: detected capacity change from 0 to 8
Mar 13 00:34:59.793170 kernel: loop7: detected capacity change from 0 to 128560
Mar 13 00:34:59.808243 (sd-merge)[1304]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'.
Mar 13 00:34:59.810736 (sd-merge)[1304]: Merged extensions into '/usr'.
Mar 13 00:34:59.818832 systemd[1]: Reload requested from client PID 1278 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 13 00:34:59.818933 systemd[1]: Reloading...
Mar 13 00:34:59.909367 zram_generator::config[1326]: No configuration found.
Mar 13 00:34:59.938748 ldconfig[1273]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 13 00:35:00.066487 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 13 00:35:00.066765 systemd[1]: Reloading finished in 247 ms.
Mar 13 00:35:00.095842 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 13 00:35:00.096650 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 13 00:35:00.100451 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 13 00:35:00.106373 systemd[1]: Starting ensure-sysext.service...
Mar 13 00:35:00.107690 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 13 00:35:00.111477 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 13 00:35:00.136902 systemd[1]: Reload requested from client PID 1374 ('systemctl') (unit ensure-sysext.service)...
Mar 13 00:35:00.136991 systemd[1]: Reloading...
Mar 13 00:35:00.147761 systemd-tmpfiles[1375]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Mar 13 00:35:00.147790 systemd-tmpfiles[1375]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Mar 13 00:35:00.148020 systemd-tmpfiles[1375]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 13 00:35:00.148246 systemd-tmpfiles[1375]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 13 00:35:00.148980 systemd-tmpfiles[1375]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 13 00:35:00.149338 systemd-tmpfiles[1375]: ACLs are not supported, ignoring.
Mar 13 00:35:00.149436 systemd-tmpfiles[1375]: ACLs are not supported, ignoring.
Mar 13 00:35:00.153709 systemd-tmpfiles[1375]: Detected autofs mount point /boot during canonicalization of boot.
Mar 13 00:35:00.153782 systemd-tmpfiles[1375]: Skipping /boot
Mar 13 00:35:00.166113 systemd-tmpfiles[1375]: Detected autofs mount point /boot during canonicalization of boot.
Mar 13 00:35:00.166126 systemd-tmpfiles[1375]: Skipping /boot
Mar 13 00:35:00.167004 systemd-udevd[1376]: Using default interface naming scheme 'v255'.
Mar 13 00:35:00.241167 zram_generator::config[1435]: No configuration found.
Mar 13 00:35:00.405183 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input5
Mar 13 00:35:00.414173 kernel: mousedev: PS/2 mouse device common for all mice
Mar 13 00:35:00.449900 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Mar 13 00:35:00.450013 systemd[1]: Reloading finished in 312 ms.
Mar 13 00:35:00.459224 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 13 00:35:00.464162 kernel: ACPI: button: Power Button [PWRF]
Mar 13 00:35:00.468562 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 13 00:35:00.477614 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped.
Mar 13 00:35:00.477765 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 13 00:35:00.480011 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 13 00:35:00.484840 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 13 00:35:00.486302 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 13 00:35:00.490111 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 13 00:35:00.495841 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 13 00:35:00.499915 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 13 00:35:00.500677 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 13 00:35:00.500921 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 13 00:35:00.504313 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 13 00:35:00.510470 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 13 00:35:00.515842 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 13 00:35:00.521299 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 13 00:35:00.522199 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 13 00:35:00.526665 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 13 00:35:00.526846 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 13 00:35:00.531788 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 13 00:35:00.537127 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 13 00:35:00.537596 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 13 00:35:00.551124 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 13 00:35:00.552258 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 13 00:35:00.552343 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 13 00:35:00.552406 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 13 00:35:00.552968 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 13 00:35:00.554242 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 13 00:35:00.563758 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 13 00:35:00.568252 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 13 00:35:00.568407 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 13 00:35:00.577462 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 13 00:35:00.578032 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 13 00:35:00.578114 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 13 00:35:00.578214 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 13 00:35:00.586000 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 13 00:35:00.587649 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 13 00:35:00.594245 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 13 00:35:00.594726 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 13 00:35:00.594838 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 13 00:35:00.594957 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 13 00:35:00.595950 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 13 00:35:00.596792 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 13 00:35:00.598650 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 13 00:35:00.598819 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 13 00:35:00.600800 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 13 00:35:00.603289 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 13 00:35:00.610493 systemd[1]: Finished ensure-sysext.service.
Mar 13 00:35:00.612600 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 13 00:35:00.612778 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 13 00:35:00.623762 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Mar 13 00:35:00.623992 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Mar 13 00:35:00.624125 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Mar 13 00:35:00.621614 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 13 00:35:00.628660 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 13 00:35:00.637379 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Mar 13 00:35:00.637826 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 13 00:35:00.639274 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 13 00:35:00.639950 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 13 00:35:00.641183 augenrules[1544]: No rules
Mar 13 00:35:00.640774 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 13 00:35:00.644014 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 13 00:35:00.645390 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Mar 13 00:35:00.652137 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 13 00:35:00.674516 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 13 00:35:00.704629 kernel: EDAC MC: Ver: 3.0.0
Mar 13 00:35:00.712165 kernel: [drm] pci: virtio-vga detected at 0000:00:01.0
Mar 13 00:35:00.726625 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Mar 13 00:35:00.748695 kernel: Console: switching to colour dummy device 80x25 Mar 13 00:35:00.754164 kernel: virtio-pci 0000:00:01.0: vgaarb: deactivate vga console Mar 13 00:35:00.763047 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Mar 13 00:35:00.770316 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Mar 13 00:35:00.770363 kernel: [drm] features: -context_init Mar 13 00:35:00.777321 kernel: [drm] number of scanouts: 1 Mar 13 00:35:00.777354 kernel: [drm] number of cap sets: 0 Mar 13 00:35:00.780163 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:01.0 on minor 0 Mar 13 00:35:00.782162 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Mar 13 00:35:00.787563 kernel: Console: switching to colour frame buffer device 160x50 Mar 13 00:35:00.797378 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Mar 13 00:35:00.799033 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Mar 13 00:35:00.801133 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 13 00:35:00.826657 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 13 00:35:00.826924 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 13 00:35:00.830359 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 13 00:35:00.839294 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Mar 13 00:35:00.924956 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 13 00:35:00.941579 systemd-networkd[1503]: lo: Link UP Mar 13 00:35:00.941586 systemd-networkd[1503]: lo: Gained carrier Mar 13 00:35:00.944136 systemd-networkd[1503]: Enumeration completed Mar 13 00:35:00.944255 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Mar 13 00:35:00.946952 systemd-networkd[1503]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 13 00:35:00.946964 systemd-networkd[1503]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 13 00:35:00.948256 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Mar 13 00:35:00.950096 systemd-networkd[1503]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 13 00:35:00.950105 systemd-networkd[1503]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 13 00:35:00.950206 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Mar 13 00:35:00.950484 systemd-networkd[1503]: eth0: Link UP Mar 13 00:35:00.950652 systemd-networkd[1503]: eth0: Gained carrier Mar 13 00:35:00.950663 systemd-networkd[1503]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 13 00:35:00.953303 systemd-networkd[1503]: eth1: Link UP Mar 13 00:35:00.953766 systemd-networkd[1503]: eth1: Gained carrier Mar 13 00:35:00.953777 systemd-networkd[1503]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 13 00:35:00.965981 systemd-resolved[1504]: Positive Trust Anchors: Mar 13 00:35:00.967245 systemd-resolved[1504]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 13 00:35:00.967300 systemd-resolved[1504]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 13 00:35:00.971737 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Mar 13 00:35:00.971874 systemd[1]: Reached target time-set.target - System Time Set. Mar 13 00:35:00.972265 systemd-resolved[1504]: Using system hostname 'ci-4459-2-4-n-a4844b4806'. Mar 13 00:35:00.974598 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Mar 13 00:35:00.976210 systemd-networkd[1503]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1 Mar 13 00:35:00.976692 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 13 00:35:00.976755 systemd-timesyncd[1545]: Network configuration changed, trying to establish connection. Mar 13 00:35:00.976859 systemd[1]: Reached target network.target - Network. Mar 13 00:35:00.976907 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 13 00:35:00.976962 systemd[1]: Reached target sysinit.target - System Initialization. Mar 13 00:35:00.977092 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Mar 13 00:35:00.977560 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. 
Mar 13 00:35:00.978498 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Mar 13 00:35:00.978719 systemd[1]: Started logrotate.timer - Daily rotation of log files. Mar 13 00:35:00.978843 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Mar 13 00:35:00.978901 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Mar 13 00:35:00.978951 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 13 00:35:00.978967 systemd[1]: Reached target paths.target - Path Units. Mar 13 00:35:00.979448 systemd[1]: Reached target timers.target - Timer Units. Mar 13 00:35:00.980944 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Mar 13 00:35:00.984524 systemd[1]: Starting docker.socket - Docker Socket for the API... Mar 13 00:35:00.988586 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Mar 13 00:35:00.991981 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Mar 13 00:35:00.993936 systemd[1]: Reached target ssh-access.target - SSH Access Available. Mar 13 00:35:01.002184 systemd-networkd[1503]: eth0: DHCPv4 address 89.167.5.55/32, gateway 172.31.1.1 acquired from 172.31.1.1 Mar 13 00:35:01.002735 systemd-timesyncd[1545]: Network configuration changed, trying to establish connection. Mar 13 00:35:01.004852 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Mar 13 00:35:01.007313 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Mar 13 00:35:01.009970 systemd[1]: Listening on docker.socket - Docker Socket for the API. Mar 13 00:35:01.012891 systemd[1]: Reached target sockets.target - Socket Units. Mar 13 00:35:01.014604 systemd[1]: Reached target basic.target - Basic System. 
Mar 13 00:35:01.015591 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Mar 13 00:35:01.015617 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Mar 13 00:35:01.016569 systemd[1]: Starting containerd.service - containerd container runtime... Mar 13 00:35:01.020247 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Mar 13 00:35:01.028241 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Mar 13 00:35:01.030960 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Mar 13 00:35:01.042043 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Mar 13 00:35:01.044692 coreos-metadata[1589]: Mar 13 00:35:01.044 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Mar 13 00:35:01.045256 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Mar 13 00:35:01.049406 coreos-metadata[1589]: Mar 13 00:35:01.046 INFO Fetch successful Mar 13 00:35:01.049406 coreos-metadata[1589]: Mar 13 00:35:01.046 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Mar 13 00:35:01.049406 coreos-metadata[1589]: Mar 13 00:35:01.047 INFO Fetch successful Mar 13 00:35:01.045675 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Mar 13 00:35:01.047241 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Mar 13 00:35:01.050893 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Mar 13 00:35:01.056451 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Mar 13 00:35:01.061537 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. 
Mar 13 00:35:01.065302 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Mar 13 00:35:01.066460 oslogin_cache_refresh[1596]: Refreshing passwd entry cache Mar 13 00:35:01.066952 google_oslogin_nss_cache[1596]: oslogin_cache_refresh[1596]: Refreshing passwd entry cache Mar 13 00:35:01.067781 jq[1594]: false Mar 13 00:35:01.069716 google_oslogin_nss_cache[1596]: oslogin_cache_refresh[1596]: Failure getting users, quitting Mar 13 00:35:01.069716 google_oslogin_nss_cache[1596]: oslogin_cache_refresh[1596]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Mar 13 00:35:01.069716 google_oslogin_nss_cache[1596]: oslogin_cache_refresh[1596]: Refreshing group entry cache Mar 13 00:35:01.068069 oslogin_cache_refresh[1596]: Failure getting users, quitting Mar 13 00:35:01.068082 oslogin_cache_refresh[1596]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Mar 13 00:35:01.068114 oslogin_cache_refresh[1596]: Refreshing group entry cache Mar 13 00:35:01.071426 google_oslogin_nss_cache[1596]: oslogin_cache_refresh[1596]: Failure getting groups, quitting Mar 13 00:35:01.071426 google_oslogin_nss_cache[1596]: oslogin_cache_refresh[1596]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Mar 13 00:35:01.071032 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Mar 13 00:35:01.070255 oslogin_cache_refresh[1596]: Failure getting groups, quitting Mar 13 00:35:01.070263 oslogin_cache_refresh[1596]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Mar 13 00:35:01.076376 systemd[1]: Starting systemd-logind.service - User Login Management... Mar 13 00:35:01.081446 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 13 00:35:01.081863 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. 
See cgroup-compat debug messages for details. Mar 13 00:35:01.085037 systemd[1]: Starting update-engine.service - Update Engine... Mar 13 00:35:01.093464 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Mar 13 00:35:01.107002 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Mar 13 00:35:01.108035 extend-filesystems[1595]: Found /dev/sda6 Mar 13 00:35:01.108800 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 13 00:35:01.109024 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Mar 13 00:35:01.109312 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Mar 13 00:35:01.109533 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Mar 13 00:35:01.122665 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 13 00:35:01.122881 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Mar 13 00:35:01.138186 update_engine[1606]: I20260313 00:35:01.131886 1606 main.cc:92] Flatcar Update Engine starting Mar 13 00:35:01.138432 extend-filesystems[1595]: Found /dev/sda9 Mar 13 00:35:01.151305 extend-filesystems[1595]: Checking size of /dev/sda9 Mar 13 00:35:01.143758 systemd[1]: motdgen.service: Deactivated successfully. Mar 13 00:35:01.146222 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Mar 13 00:35:01.157738 jq[1615]: true Mar 13 00:35:01.162491 (ntainerd)[1630]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 13 00:35:01.163663 extend-filesystems[1595]: Resized partition /dev/sda9 Mar 13 00:35:01.181249 extend-filesystems[1642]: resize2fs 1.47.3 (8-Jul-2025) Mar 13 00:35:01.187418 tar[1621]: linux-amd64/LICENSE Mar 13 00:35:01.187418 tar[1621]: linux-amd64/helm Mar 13 00:35:01.189750 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 19393531 blocks Mar 13 00:35:01.213158 sshd_keygen[1616]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 13 00:35:01.222226 dbus-daemon[1590]: [system] SELinux support is enabled Mar 13 00:35:01.224315 jq[1639]: true Mar 13 00:35:01.224291 systemd[1]: Started dbus.service - D-Bus System Message Bus. Mar 13 00:35:01.231008 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 13 00:35:01.231039 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Mar 13 00:35:01.235055 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 13 00:35:01.235074 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Mar 13 00:35:01.260664 update_engine[1606]: I20260313 00:35:01.257629 1606 update_check_scheduler.cc:74] Next update check in 7m30s Mar 13 00:35:01.258189 systemd[1]: Started update-engine.service - Update Engine. Mar 13 00:35:01.265984 systemd-logind[1603]: New seat seat0. 
Mar 13 00:35:01.267926 systemd-logind[1603]: Watching system buttons on /dev/input/event3 (Power Button) Mar 13 00:35:01.267943 systemd-logind[1603]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Mar 13 00:35:01.278506 systemd[1]: Started locksmithd.service - Cluster reboot manager. Mar 13 00:35:01.280669 systemd[1]: Started systemd-logind.service - User Login Management. Mar 13 00:35:01.281349 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Mar 13 00:35:01.286524 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 13 00:35:01.338128 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 13 00:35:01.343766 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 13 00:35:01.353848 bash[1677]: Updated "/home/core/.ssh/authorized_keys" Mar 13 00:35:01.357074 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Mar 13 00:35:01.373291 systemd[1]: Starting sshkeys.service... Mar 13 00:35:01.394954 systemd[1]: issuegen.service: Deactivated successfully. Mar 13 00:35:01.396430 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 13 00:35:01.401562 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 13 00:35:01.407957 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Mar 13 00:35:01.410840 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Mar 13 00:35:01.427174 kernel: EXT4-fs (sda9): resized filesystem to 19393531 Mar 13 00:35:01.446281 extend-filesystems[1642]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Mar 13 00:35:01.446281 extend-filesystems[1642]: old_desc_blocks = 1, new_desc_blocks = 10 Mar 13 00:35:01.446281 extend-filesystems[1642]: The filesystem on /dev/sda9 is now 19393531 (4k) blocks long. 
Mar 13 00:35:01.449468 extend-filesystems[1595]: Resized filesystem in /dev/sda9 Mar 13 00:35:01.450741 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 13 00:35:01.451013 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Mar 13 00:35:01.461573 coreos-metadata[1689]: Mar 13 00:35:01.460 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Mar 13 00:35:01.461573 coreos-metadata[1689]: Mar 13 00:35:01.461 INFO Fetch successful Mar 13 00:35:01.453697 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 13 00:35:01.462862 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 13 00:35:01.465084 unknown[1689]: wrote ssh authorized keys file for user: core Mar 13 00:35:01.467383 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Mar 13 00:35:01.468854 systemd[1]: Reached target getty.target - Login Prompts. Mar 13 00:35:01.494048 update-ssh-keys[1697]: Updated "/home/core/.ssh/authorized_keys" Mar 13 00:35:01.495898 containerd[1630]: time="2026-03-13T00:35:01Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Mar 13 00:35:01.498915 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Mar 13 00:35:01.499488 containerd[1630]: time="2026-03-13T00:35:01.499467469Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Mar 13 00:35:01.509598 systemd[1]: Finished sshkeys.service. 
Mar 13 00:35:01.511251 containerd[1630]: time="2026-03-13T00:35:01.510593679Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="7.91µs" Mar 13 00:35:01.511251 containerd[1630]: time="2026-03-13T00:35:01.510626159Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Mar 13 00:35:01.511251 containerd[1630]: time="2026-03-13T00:35:01.510642949Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Mar 13 00:35:01.511251 containerd[1630]: time="2026-03-13T00:35:01.510783699Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Mar 13 00:35:01.511251 containerd[1630]: time="2026-03-13T00:35:01.510793869Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Mar 13 00:35:01.511251 containerd[1630]: time="2026-03-13T00:35:01.510815179Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Mar 13 00:35:01.511251 containerd[1630]: time="2026-03-13T00:35:01.510863539Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Mar 13 00:35:01.511251 containerd[1630]: time="2026-03-13T00:35:01.510873809Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Mar 13 00:35:01.511251 containerd[1630]: time="2026-03-13T00:35:01.511078019Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Mar 13 00:35:01.511251 containerd[1630]: time="2026-03-13T00:35:01.511087419Z" level=info msg="loading plugin" 
id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Mar 13 00:35:01.511251 containerd[1630]: time="2026-03-13T00:35:01.511098679Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Mar 13 00:35:01.511251 containerd[1630]: time="2026-03-13T00:35:01.511107039Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Mar 13 00:35:01.512176 containerd[1630]: time="2026-03-13T00:35:01.511488939Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Mar 13 00:35:01.512176 containerd[1630]: time="2026-03-13T00:35:01.511690299Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Mar 13 00:35:01.512176 containerd[1630]: time="2026-03-13T00:35:01.511717059Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Mar 13 00:35:01.512176 containerd[1630]: time="2026-03-13T00:35:01.511724759Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Mar 13 00:35:01.512176 containerd[1630]: time="2026-03-13T00:35:01.511746749Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Mar 13 00:35:01.512176 containerd[1630]: time="2026-03-13T00:35:01.512105239Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Mar 13 00:35:01.512365 containerd[1630]: time="2026-03-13T00:35:01.512352969Z" level=info msg="metadata content store policy set" policy=shared Mar 13 00:35:01.517794 containerd[1630]: time="2026-03-13T00:35:01.517775469Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler 
type=io.containerd.gc.v1 Mar 13 00:35:01.517861 containerd[1630]: time="2026-03-13T00:35:01.517852479Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Mar 13 00:35:01.517922 containerd[1630]: time="2026-03-13T00:35:01.517914359Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Mar 13 00:35:01.517967 containerd[1630]: time="2026-03-13T00:35:01.517960149Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Mar 13 00:35:01.518004 containerd[1630]: time="2026-03-13T00:35:01.517996549Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Mar 13 00:35:01.518032 containerd[1630]: time="2026-03-13T00:35:01.518025249Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Mar 13 00:35:01.518064 containerd[1630]: time="2026-03-13T00:35:01.518056909Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Mar 13 00:35:01.518096 containerd[1630]: time="2026-03-13T00:35:01.518089649Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Mar 13 00:35:01.518123 containerd[1630]: time="2026-03-13T00:35:01.518116839Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Mar 13 00:35:01.518176 containerd[1630]: time="2026-03-13T00:35:01.518168809Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Mar 13 00:35:01.518205 containerd[1630]: time="2026-03-13T00:35:01.518198879Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Mar 13 00:35:01.518482 containerd[1630]: time="2026-03-13T00:35:01.518241909Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task 
type=io.containerd.runtime.v2 Mar 13 00:35:01.518482 containerd[1630]: time="2026-03-13T00:35:01.518328259Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Mar 13 00:35:01.518482 containerd[1630]: time="2026-03-13T00:35:01.518340909Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Mar 13 00:35:01.518482 containerd[1630]: time="2026-03-13T00:35:01.518351199Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Mar 13 00:35:01.518482 containerd[1630]: time="2026-03-13T00:35:01.518364739Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Mar 13 00:35:01.518482 containerd[1630]: time="2026-03-13T00:35:01.518374839Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Mar 13 00:35:01.518482 containerd[1630]: time="2026-03-13T00:35:01.518382429Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Mar 13 00:35:01.518482 containerd[1630]: time="2026-03-13T00:35:01.518390539Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Mar 13 00:35:01.518482 containerd[1630]: time="2026-03-13T00:35:01.518398049Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Mar 13 00:35:01.518482 containerd[1630]: time="2026-03-13T00:35:01.518405929Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Mar 13 00:35:01.518482 containerd[1630]: time="2026-03-13T00:35:01.518413459Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Mar 13 00:35:01.518482 containerd[1630]: time="2026-03-13T00:35:01.518421109Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Mar 13 00:35:01.518482 containerd[1630]: 
time="2026-03-13T00:35:01.518456319Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Mar 13 00:35:01.518482 containerd[1630]: time="2026-03-13T00:35:01.518464849Z" level=info msg="Start snapshots syncer" Mar 13 00:35:01.518712 containerd[1630]: time="2026-03-13T00:35:01.518702019Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Mar 13 00:35:01.518979 containerd[1630]: time="2026-03-13T00:35:01.518956899Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\
":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Mar 13 00:35:01.519114 containerd[1630]: time="2026-03-13T00:35:01.519103979Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Mar 13 00:35:01.520085 containerd[1630]: time="2026-03-13T00:35:01.520071529Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Mar 13 00:35:01.520258 containerd[1630]: time="2026-03-13T00:35:01.520245179Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Mar 13 00:35:01.520309 containerd[1630]: time="2026-03-13T00:35:01.520301429Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Mar 13 00:35:01.520351 containerd[1630]: time="2026-03-13T00:35:01.520343259Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Mar 13 00:35:01.520379 containerd[1630]: time="2026-03-13T00:35:01.520372359Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Mar 13 00:35:01.520408 containerd[1630]: time="2026-03-13T00:35:01.520401739Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Mar 13 00:35:01.520435 containerd[1630]: time="2026-03-13T00:35:01.520428489Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Mar 13 00:35:01.520461 containerd[1630]: time="2026-03-13T00:35:01.520454929Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Mar 13 00:35:01.520534 containerd[1630]: time="2026-03-13T00:35:01.520489289Z" level=info 
msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Mar 13 00:35:01.520534 containerd[1630]: time="2026-03-13T00:35:01.520508929Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Mar 13 00:35:01.520534 containerd[1630]: time="2026-03-13T00:35:01.520517139Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Mar 13 00:35:01.520601 containerd[1630]: time="2026-03-13T00:35:01.520592429Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Mar 13 00:35:01.520653 containerd[1630]: time="2026-03-13T00:35:01.520629069Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Mar 13 00:35:01.520835 containerd[1630]: time="2026-03-13T00:35:01.520637979Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Mar 13 00:35:01.520835 containerd[1630]: time="2026-03-13T00:35:01.520725269Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Mar 13 00:35:01.520835 containerd[1630]: time="2026-03-13T00:35:01.520731579Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Mar 13 00:35:01.520835 containerd[1630]: time="2026-03-13T00:35:01.520744519Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Mar 13 00:35:01.520835 containerd[1630]: time="2026-03-13T00:35:01.520761199Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Mar 13 00:35:01.520835 containerd[1630]: time="2026-03-13T00:35:01.520777869Z" level=info msg="runtime interface created" Mar 13 00:35:01.520835 containerd[1630]: 
time="2026-03-13T00:35:01.520783499Z" level=info msg="created NRI interface" Mar 13 00:35:01.520835 containerd[1630]: time="2026-03-13T00:35:01.520790419Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Mar 13 00:35:01.520835 containerd[1630]: time="2026-03-13T00:35:01.520798379Z" level=info msg="Connect containerd service" Mar 13 00:35:01.520835 containerd[1630]: time="2026-03-13T00:35:01.520811939Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 13 00:35:01.521679 containerd[1630]: time="2026-03-13T00:35:01.521661999Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 13 00:35:01.529068 locksmithd[1656]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 13 00:35:01.583092 containerd[1630]: time="2026-03-13T00:35:01.583043299Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 13 00:35:01.583211 containerd[1630]: time="2026-03-13T00:35:01.583109509Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Mar 13 00:35:01.583211 containerd[1630]: time="2026-03-13T00:35:01.583132259Z" level=info msg="Start subscribing containerd event" Mar 13 00:35:01.584214 containerd[1630]: time="2026-03-13T00:35:01.584185419Z" level=info msg="Start recovering state" Mar 13 00:35:01.584281 containerd[1630]: time="2026-03-13T00:35:01.584266179Z" level=info msg="Start event monitor" Mar 13 00:35:01.584298 containerd[1630]: time="2026-03-13T00:35:01.584281049Z" level=info msg="Start cni network conf syncer for default" Mar 13 00:35:01.584298 containerd[1630]: time="2026-03-13T00:35:01.584288569Z" level=info msg="Start streaming server" Mar 13 00:35:01.584298 containerd[1630]: time="2026-03-13T00:35:01.584295079Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Mar 13 00:35:01.584336 containerd[1630]: time="2026-03-13T00:35:01.584300999Z" level=info msg="runtime interface starting up..." Mar 13 00:35:01.584336 containerd[1630]: time="2026-03-13T00:35:01.584306089Z" level=info msg="starting plugins..." Mar 13 00:35:01.584336 containerd[1630]: time="2026-03-13T00:35:01.584317939Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Mar 13 00:35:01.584433 containerd[1630]: time="2026-03-13T00:35:01.584417069Z" level=info msg="containerd successfully booted in 0.088879s" Mar 13 00:35:01.584629 systemd[1]: Started containerd.service - containerd container runtime. Mar 13 00:35:01.650306 tar[1621]: linux-amd64/README.md Mar 13 00:35:01.665008 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 13 00:35:01.831021 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 13 00:35:01.834958 systemd[1]: Started sshd@0-89.167.5.55:22-4.153.228.146:36438.service - OpenSSH per-connection server daemon (4.153.228.146:36438). 
Mar 13 00:35:02.512799 sshd[1724]: Accepted publickey for core from 4.153.228.146 port 36438 ssh2: RSA SHA256:ihdQa0i/HnNGvKP5m9obD9eorZ8Lhhc0yafWx7ReGkQ Mar 13 00:35:02.516419 sshd-session[1724]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:35:02.527913 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 13 00:35:02.532742 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 13 00:35:02.550417 systemd-logind[1603]: New session 1 of user core. Mar 13 00:35:02.564045 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 13 00:35:02.573578 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 13 00:35:02.591116 (systemd)[1729]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 13 00:35:02.595781 systemd-logind[1603]: New session c1 of user core. Mar 13 00:35:02.739213 systemd[1729]: Queued start job for default target default.target. Mar 13 00:35:02.750161 systemd[1729]: Created slice app.slice - User Application Slice. Mar 13 00:35:02.750181 systemd[1729]: Reached target paths.target - Paths. Mar 13 00:35:02.750214 systemd[1729]: Reached target timers.target - Timers. Mar 13 00:35:02.751553 systemd[1729]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 13 00:35:02.770262 systemd[1729]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 13 00:35:02.770353 systemd[1729]: Reached target sockets.target - Sockets. Mar 13 00:35:02.770385 systemd[1729]: Reached target basic.target - Basic System. Mar 13 00:35:02.770419 systemd[1729]: Reached target default.target - Main User Target. Mar 13 00:35:02.770446 systemd[1729]: Startup finished in 163ms. Mar 13 00:35:02.770733 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 13 00:35:02.786266 systemd[1]: Started session-1.scope - Session 1 of User core. 
Mar 13 00:35:02.836425 systemd-networkd[1503]: eth1: Gained IPv6LL Mar 13 00:35:02.837382 systemd-timesyncd[1545]: Network configuration changed, trying to establish connection. Mar 13 00:35:02.841445 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 13 00:35:02.847429 systemd[1]: Reached target network-online.target - Network is Online. Mar 13 00:35:02.855742 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 13 00:35:02.871602 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 13 00:35:02.910881 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 13 00:35:02.964272 systemd-networkd[1503]: eth0: Gained IPv6LL Mar 13 00:35:02.964962 systemd-timesyncd[1545]: Network configuration changed, trying to establish connection. Mar 13 00:35:03.164908 systemd[1]: Started sshd@1-89.167.5.55:22-4.153.228.146:36454.service - OpenSSH per-connection server daemon (4.153.228.146:36454). Mar 13 00:35:03.592963 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 13 00:35:03.598487 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 13 00:35:03.604669 systemd[1]: Startup finished in 2.893s (kernel) + 5.810s (initrd) + 5.192s (userspace) = 13.896s. Mar 13 00:35:03.606238 (kubelet)[1760]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 13 00:35:03.822911 sshd[1752]: Accepted publickey for core from 4.153.228.146 port 36454 ssh2: RSA SHA256:ihdQa0i/HnNGvKP5m9obD9eorZ8Lhhc0yafWx7ReGkQ Mar 13 00:35:03.826366 sshd-session[1752]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:35:03.835731 systemd-logind[1603]: New session 2 of user core. Mar 13 00:35:03.843409 systemd[1]: Started session-2.scope - Session 2 of User core. 
Mar 13 00:35:04.038686 kubelet[1760]: E0313 00:35:04.038591 1760 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 13 00:35:04.041765 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 13 00:35:04.042122 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 13 00:35:04.042839 systemd[1]: kubelet.service: Consumed 771ms CPU time, 258.2M memory peak. Mar 13 00:35:04.183424 sshd[1769]: Connection closed by 4.153.228.146 port 36454 Mar 13 00:35:04.184567 sshd-session[1752]: pam_unix(sshd:session): session closed for user core Mar 13 00:35:04.189974 systemd[1]: sshd@1-89.167.5.55:22-4.153.228.146:36454.service: Deactivated successfully. Mar 13 00:35:04.193834 systemd[1]: session-2.scope: Deactivated successfully. Mar 13 00:35:04.195219 systemd-logind[1603]: Session 2 logged out. Waiting for processes to exit. Mar 13 00:35:04.197802 systemd-logind[1603]: Removed session 2. Mar 13 00:35:04.319804 systemd[1]: Started sshd@2-89.167.5.55:22-4.153.228.146:36456.service - OpenSSH per-connection server daemon (4.153.228.146:36456). Mar 13 00:35:04.975673 sshd[1777]: Accepted publickey for core from 4.153.228.146 port 36456 ssh2: RSA SHA256:ihdQa0i/HnNGvKP5m9obD9eorZ8Lhhc0yafWx7ReGkQ Mar 13 00:35:04.978258 sshd-session[1777]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:35:04.984007 systemd-logind[1603]: New session 3 of user core. Mar 13 00:35:04.991376 systemd[1]: Started session-3.scope - Session 3 of User core. 
Mar 13 00:35:05.337458 sshd[1780]: Connection closed by 4.153.228.146 port 36456 Mar 13 00:35:05.338661 sshd-session[1777]: pam_unix(sshd:session): session closed for user core Mar 13 00:35:05.346713 systemd[1]: sshd@2-89.167.5.55:22-4.153.228.146:36456.service: Deactivated successfully. Mar 13 00:35:05.350572 systemd[1]: session-3.scope: Deactivated successfully. Mar 13 00:35:05.351861 systemd-logind[1603]: Session 3 logged out. Waiting for processes to exit. Mar 13 00:35:05.354540 systemd-logind[1603]: Removed session 3. Mar 13 00:35:05.466908 systemd[1]: Started sshd@3-89.167.5.55:22-4.153.228.146:36460.service - OpenSSH per-connection server daemon (4.153.228.146:36460). Mar 13 00:35:06.117217 sshd[1786]: Accepted publickey for core from 4.153.228.146 port 36460 ssh2: RSA SHA256:ihdQa0i/HnNGvKP5m9obD9eorZ8Lhhc0yafWx7ReGkQ Mar 13 00:35:06.118943 sshd-session[1786]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:35:06.127809 systemd-logind[1603]: New session 4 of user core. Mar 13 00:35:06.135391 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 13 00:35:06.487006 sshd[1789]: Connection closed by 4.153.228.146 port 36460 Mar 13 00:35:06.488594 sshd-session[1786]: pam_unix(sshd:session): session closed for user core Mar 13 00:35:06.494325 systemd[1]: sshd@3-89.167.5.55:22-4.153.228.146:36460.service: Deactivated successfully. Mar 13 00:35:06.498793 systemd[1]: session-4.scope: Deactivated successfully. Mar 13 00:35:06.501706 systemd-logind[1603]: Session 4 logged out. Waiting for processes to exit. Mar 13 00:35:06.505348 systemd-logind[1603]: Removed session 4. Mar 13 00:35:06.629559 systemd[1]: Started sshd@4-89.167.5.55:22-4.153.228.146:36470.service - OpenSSH per-connection server daemon (4.153.228.146:36470). 
Mar 13 00:35:07.294123 sshd[1795]: Accepted publickey for core from 4.153.228.146 port 36470 ssh2: RSA SHA256:ihdQa0i/HnNGvKP5m9obD9eorZ8Lhhc0yafWx7ReGkQ Mar 13 00:35:07.295739 sshd-session[1795]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:35:07.301515 systemd-logind[1603]: New session 5 of user core. Mar 13 00:35:07.305259 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 13 00:35:07.558114 sudo[1799]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 13 00:35:07.558963 sudo[1799]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 13 00:35:07.580584 sudo[1799]: pam_unix(sudo:session): session closed for user root Mar 13 00:35:07.701434 sshd[1798]: Connection closed by 4.153.228.146 port 36470 Mar 13 00:35:07.703531 sshd-session[1795]: pam_unix(sshd:session): session closed for user core Mar 13 00:35:07.708831 systemd[1]: sshd@4-89.167.5.55:22-4.153.228.146:36470.service: Deactivated successfully. Mar 13 00:35:07.711395 systemd[1]: session-5.scope: Deactivated successfully. Mar 13 00:35:07.712507 systemd-logind[1603]: Session 5 logged out. Waiting for processes to exit. Mar 13 00:35:07.715249 systemd-logind[1603]: Removed session 5. Mar 13 00:35:07.834283 systemd[1]: Started sshd@5-89.167.5.55:22-4.153.228.146:35466.service - OpenSSH per-connection server daemon (4.153.228.146:35466). Mar 13 00:35:08.487212 sshd[1805]: Accepted publickey for core from 4.153.228.146 port 35466 ssh2: RSA SHA256:ihdQa0i/HnNGvKP5m9obD9eorZ8Lhhc0yafWx7ReGkQ Mar 13 00:35:08.489680 sshd-session[1805]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:35:08.497361 systemd-logind[1603]: New session 6 of user core. Mar 13 00:35:08.516420 systemd[1]: Started session-6.scope - Session 6 of User core. 
Mar 13 00:35:08.733902 sudo[1810]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 13 00:35:08.734279 sudo[1810]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 13 00:35:08.740599 sudo[1810]: pam_unix(sudo:session): session closed for user root Mar 13 00:35:08.750915 sudo[1809]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Mar 13 00:35:08.751551 sudo[1809]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 13 00:35:08.763971 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 13 00:35:08.818843 augenrules[1832]: No rules Mar 13 00:35:08.819494 systemd[1]: audit-rules.service: Deactivated successfully. Mar 13 00:35:08.819718 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 13 00:35:08.821110 sudo[1809]: pam_unix(sudo:session): session closed for user root Mar 13 00:35:08.940468 sshd[1808]: Connection closed by 4.153.228.146 port 35466 Mar 13 00:35:08.941526 sshd-session[1805]: pam_unix(sshd:session): session closed for user core Mar 13 00:35:08.948458 systemd[1]: sshd@5-89.167.5.55:22-4.153.228.146:35466.service: Deactivated successfully. Mar 13 00:35:08.952030 systemd[1]: session-6.scope: Deactivated successfully. Mar 13 00:35:08.954276 systemd-logind[1603]: Session 6 logged out. Waiting for processes to exit. Mar 13 00:35:08.956466 systemd-logind[1603]: Removed session 6. Mar 13 00:35:09.077599 systemd[1]: Started sshd@6-89.167.5.55:22-4.153.228.146:35472.service - OpenSSH per-connection server daemon (4.153.228.146:35472). 
Mar 13 00:35:09.743832 sshd[1841]: Accepted publickey for core from 4.153.228.146 port 35472 ssh2: RSA SHA256:ihdQa0i/HnNGvKP5m9obD9eorZ8Lhhc0yafWx7ReGkQ Mar 13 00:35:09.745493 sshd-session[1841]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:35:09.753263 systemd-logind[1603]: New session 7 of user core. Mar 13 00:35:09.761377 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 13 00:35:09.986573 sudo[1845]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 13 00:35:09.986925 sudo[1845]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 13 00:35:10.279131 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 13 00:35:10.292457 (dockerd)[1863]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 13 00:35:10.490998 dockerd[1863]: time="2026-03-13T00:35:10.490796448Z" level=info msg="Starting up" Mar 13 00:35:10.494454 dockerd[1863]: time="2026-03-13T00:35:10.494439298Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Mar 13 00:35:10.511273 dockerd[1863]: time="2026-03-13T00:35:10.511228558Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Mar 13 00:35:10.538663 systemd[1]: var-lib-docker-metacopy\x2dcheck2570818085-merged.mount: Deactivated successfully. Mar 13 00:35:10.565283 dockerd[1863]: time="2026-03-13T00:35:10.565259848Z" level=info msg="Loading containers: start." Mar 13 00:35:10.575174 kernel: Initializing XFRM netlink socket Mar 13 00:35:10.752198 systemd-timesyncd[1545]: Network configuration changed, trying to establish connection. Mar 13 00:35:10.753691 systemd-timesyncd[1545]: Network configuration changed, trying to establish connection. 
Mar 13 00:35:10.760183 systemd-timesyncd[1545]: Network configuration changed, trying to establish connection. Mar 13 00:35:10.788609 systemd-networkd[1503]: docker0: Link UP Mar 13 00:35:10.788854 systemd-timesyncd[1545]: Network configuration changed, trying to establish connection. Mar 13 00:35:10.791354 dockerd[1863]: time="2026-03-13T00:35:10.791324158Z" level=info msg="Loading containers: done." Mar 13 00:35:10.802282 dockerd[1863]: time="2026-03-13T00:35:10.802248118Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 13 00:35:10.802394 dockerd[1863]: time="2026-03-13T00:35:10.802306388Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Mar 13 00:35:10.802394 dockerd[1863]: time="2026-03-13T00:35:10.802362568Z" level=info msg="Initializing buildkit" Mar 13 00:35:10.822325 dockerd[1863]: time="2026-03-13T00:35:10.822295578Z" level=info msg="Completed buildkit initialization" Mar 13 00:35:10.827547 dockerd[1863]: time="2026-03-13T00:35:10.827523148Z" level=info msg="Daemon has completed initialization" Mar 13 00:35:10.827662 dockerd[1863]: time="2026-03-13T00:35:10.827635138Z" level=info msg="API listen on /run/docker.sock" Mar 13 00:35:10.827724 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 13 00:35:11.225117 containerd[1630]: time="2026-03-13T00:35:11.224960088Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.5\"" Mar 13 00:35:11.526303 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3506548679-merged.mount: Deactivated successfully. Mar 13 00:35:11.798656 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2890325732.mount: Deactivated successfully. 
Mar 13 00:35:13.117083 containerd[1630]: time="2026-03-13T00:35:13.117031248Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:35:13.117957 containerd[1630]: time="2026-03-13T00:35:13.117785568Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.5: active requests=0, bytes read=27074597" Mar 13 00:35:13.118630 containerd[1630]: time="2026-03-13T00:35:13.118608148Z" level=info msg="ImageCreate event name:\"sha256:364ea2876e41b29691964751b6217cd2e343433690fbe16a5c6a236042684df3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:35:13.120332 containerd[1630]: time="2026-03-13T00:35:13.120315728Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:c548633fcd3b4aad59b70815be4c8be54a0fddaddc3fcffa9371eedb0e96417a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:35:13.120881 containerd[1630]: time="2026-03-13T00:35:13.120859338Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.5\" with image id \"sha256:364ea2876e41b29691964751b6217cd2e343433690fbe16a5c6a236042684df3\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:c548633fcd3b4aad59b70815be4c8be54a0fddaddc3fcffa9371eedb0e96417a\", size \"27071096\" in 1.89584278s" Mar 13 00:35:13.120911 containerd[1630]: time="2026-03-13T00:35:13.120887158Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.5\" returns image reference \"sha256:364ea2876e41b29691964751b6217cd2e343433690fbe16a5c6a236042684df3\"" Mar 13 00:35:13.121313 containerd[1630]: time="2026-03-13T00:35:13.121300998Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.5\"" Mar 13 00:35:14.167753 containerd[1630]: time="2026-03-13T00:35:14.167712108Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.5\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:35:14.168900 containerd[1630]: time="2026-03-13T00:35:14.168728388Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.5: active requests=0, bytes read=21165845" Mar 13 00:35:14.169651 containerd[1630]: time="2026-03-13T00:35:14.169625658Z" level=info msg="ImageCreate event name:\"sha256:8926c34822743bb97f9003f92c30127bfeaad8bed71cd36f1c861ed8fda2c154\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:35:14.171601 containerd[1630]: time="2026-03-13T00:35:14.171579858Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f0426100c873816560c520d542fa28999a98dad909edd04365f3b0eead790da3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:35:14.172218 containerd[1630]: time="2026-03-13T00:35:14.172197728Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.5\" with image id \"sha256:8926c34822743bb97f9003f92c30127bfeaad8bed71cd36f1c861ed8fda2c154\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f0426100c873816560c520d542fa28999a98dad909edd04365f3b0eead790da3\", size \"22822771\" in 1.05087961s" Mar 13 00:35:14.172265 containerd[1630]: time="2026-03-13T00:35:14.172219738Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.5\" returns image reference \"sha256:8926c34822743bb97f9003f92c30127bfeaad8bed71cd36f1c861ed8fda2c154\"" Mar 13 00:35:14.172986 containerd[1630]: time="2026-03-13T00:35:14.172972538Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.5\"" Mar 13 00:35:14.292766 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 13 00:35:14.295906 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 13 00:35:14.465349 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 13 00:35:14.473416 (kubelet)[2141]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 13 00:35:14.508911 kubelet[2141]: E0313 00:35:14.508822 2141 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 13 00:35:14.512309 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 13 00:35:14.512734 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 13 00:35:14.513423 systemd[1]: kubelet.service: Consumed 182ms CPU time, 109M memory peak. Mar 13 00:35:15.142111 containerd[1630]: time="2026-03-13T00:35:15.142063588Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:35:15.143070 containerd[1630]: time="2026-03-13T00:35:15.142895058Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.5: active requests=0, bytes read=15729846" Mar 13 00:35:15.143953 containerd[1630]: time="2026-03-13T00:35:15.143934478Z" level=info msg="ImageCreate event name:\"sha256:f6b3520b1732b4980b2528fe5622e62be26bb6a8d38da81349cb6ccd3a1e6d65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:35:15.146006 containerd[1630]: time="2026-03-13T00:35:15.145985798Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b67b0d627c8e99ffa362bd4d9a60ca9a6c449e363a5f88d2aa8c224bd84ca51d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:35:15.146597 containerd[1630]: time="2026-03-13T00:35:15.146580478Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.5\" with image id 
\"sha256:f6b3520b1732b4980b2528fe5622e62be26bb6a8d38da81349cb6ccd3a1e6d65\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b67b0d627c8e99ffa362bd4d9a60ca9a6c449e363a5f88d2aa8c224bd84ca51d\", size \"17386790\" in 973.5485ms" Mar 13 00:35:15.146645 containerd[1630]: time="2026-03-13T00:35:15.146636888Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.5\" returns image reference \"sha256:f6b3520b1732b4980b2528fe5622e62be26bb6a8d38da81349cb6ccd3a1e6d65\"" Mar 13 00:35:15.147223 containerd[1630]: time="2026-03-13T00:35:15.147160048Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.5\"" Mar 13 00:35:16.131858 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount702857717.mount: Deactivated successfully. Mar 13 00:35:16.334247 containerd[1630]: time="2026-03-13T00:35:16.334202928Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:35:16.335379 containerd[1630]: time="2026-03-13T00:35:16.335248078Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.5: active requests=0, bytes read=25861798" Mar 13 00:35:16.336406 containerd[1630]: time="2026-03-13T00:35:16.336361728Z" level=info msg="ImageCreate event name:\"sha256:38728cde323c302ed9eca4f1b7c0080d17db50144e39398fcf901d9df13f0c3e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:35:16.337872 containerd[1630]: time="2026-03-13T00:35:16.337851588Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8a22a3bf452d07af3b5a3064b089d2ad6579d5dd3b850386e05cc0f36dc3f4cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:35:16.338309 containerd[1630]: time="2026-03-13T00:35:16.338290388Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.5\" with image id \"sha256:38728cde323c302ed9eca4f1b7c0080d17db50144e39398fcf901d9df13f0c3e\", repo tag 
\"registry.k8s.io/kube-proxy:v1.34.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:8a22a3bf452d07af3b5a3064b089d2ad6579d5dd3b850386e05cc0f36dc3f4cf\", size \"25860789\" in 1.1910173s" Mar 13 00:35:16.338370 containerd[1630]: time="2026-03-13T00:35:16.338359968Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.5\" returns image reference \"sha256:38728cde323c302ed9eca4f1b7c0080d17db50144e39398fcf901d9df13f0c3e\"" Mar 13 00:35:16.338978 containerd[1630]: time="2026-03-13T00:35:16.338745038Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Mar 13 00:35:16.786385 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2292589168.mount: Deactivated successfully. Mar 13 00:35:17.826358 containerd[1630]: time="2026-03-13T00:35:17.826299138Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:35:17.827461 containerd[1630]: time="2026-03-13T00:35:17.827299348Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388101" Mar 13 00:35:17.828044 containerd[1630]: time="2026-03-13T00:35:17.828025678Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:35:17.829777 containerd[1630]: time="2026-03-13T00:35:17.829758528Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:35:17.830771 containerd[1630]: time="2026-03-13T00:35:17.830395488Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 1.49156534s" Mar 13 00:35:17.830771 containerd[1630]: time="2026-03-13T00:35:17.830415178Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Mar 13 00:35:17.831068 containerd[1630]: time="2026-03-13T00:35:17.831033748Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Mar 13 00:35:18.300733 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount701031504.mount: Deactivated successfully. Mar 13 00:35:18.309197 containerd[1630]: time="2026-03-13T00:35:18.309098968Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:35:18.310680 containerd[1630]: time="2026-03-13T00:35:18.310480388Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321240" Mar 13 00:35:18.311871 containerd[1630]: time="2026-03-13T00:35:18.311819228Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:35:18.315137 containerd[1630]: time="2026-03-13T00:35:18.315069478Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:35:18.316363 containerd[1630]: time="2026-03-13T00:35:18.316265658Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 485.19773ms" 
Mar 13 00:35:18.316363 containerd[1630]: time="2026-03-13T00:35:18.316307368Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Mar 13 00:35:18.316951 containerd[1630]: time="2026-03-13T00:35:18.316860238Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\"" Mar 13 00:35:18.834670 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1890373255.mount: Deactivated successfully. Mar 13 00:35:19.603955 containerd[1630]: time="2026-03-13T00:35:19.603905317Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:35:19.604829 containerd[1630]: time="2026-03-13T00:35:19.604647697Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.5-0: active requests=0, bytes read=22860762" Mar 13 00:35:19.605466 containerd[1630]: time="2026-03-13T00:35:19.605438867Z" level=info msg="ImageCreate event name:\"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:35:19.607220 containerd[1630]: time="2026-03-13T00:35:19.607201257Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:35:19.607952 containerd[1630]: time="2026-03-13T00:35:19.607932647Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.5-0\" with image id \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\", repo tag \"registry.k8s.io/etcd:3.6.5-0\", repo digest \"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\", size \"22871747\" in 1.291025169s" Mar 13 00:35:19.607985 containerd[1630]: time="2026-03-13T00:35:19.607955097Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\" returns 
image reference \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\"" Mar 13 00:35:22.062723 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 13 00:35:22.063048 systemd[1]: kubelet.service: Consumed 182ms CPU time, 109M memory peak. Mar 13 00:35:22.066841 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 13 00:35:22.098048 systemd[1]: Reload requested from client PID 2306 ('systemctl') (unit session-7.scope)... Mar 13 00:35:22.098127 systemd[1]: Reloading... Mar 13 00:35:22.214207 zram_generator::config[2353]: No configuration found. Mar 13 00:35:22.399089 systemd[1]: Reloading finished in 300 ms. Mar 13 00:35:22.447867 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Mar 13 00:35:22.447949 systemd[1]: kubelet.service: Failed with result 'signal'. Mar 13 00:35:22.448214 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 13 00:35:22.448366 systemd[1]: kubelet.service: Consumed 122ms CPU time, 98.3M memory peak. Mar 13 00:35:22.449942 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 13 00:35:22.628197 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 13 00:35:22.642492 (kubelet)[2402]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 13 00:35:22.673618 kubelet[2402]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 13 00:35:22.673938 kubelet[2402]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 13 00:35:22.674103 kubelet[2402]: I0313 00:35:22.674083 2402 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 13 00:35:22.976321 kubelet[2402]: I0313 00:35:22.976174 2402 server.go:529] "Kubelet version" kubeletVersion="v1.34.4"
Mar 13 00:35:22.976321 kubelet[2402]: I0313 00:35:22.976204 2402 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 13 00:35:22.977826 kubelet[2402]: I0313 00:35:22.977787 2402 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Mar 13 00:35:22.977826 kubelet[2402]: I0313 00:35:22.977813 2402 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Mar 13 00:35:22.978016 kubelet[2402]: I0313 00:35:22.977988 2402 server.go:956] "Client rotation is on, will bootstrap in background"
Mar 13 00:35:22.985739 kubelet[2402]: I0313 00:35:22.985452 2402 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 13 00:35:22.988043 kubelet[2402]: E0313 00:35:22.987983 2402 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://89.167.5.55:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 89.167.5.55:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Mar 13 00:35:22.991850 kubelet[2402]: I0313 00:35:22.991807 2402 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Mar 13 00:35:22.994858 kubelet[2402]: I0313 00:35:22.994838 2402 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Mar 13 00:35:22.996193 kubelet[2402]: I0313 00:35:22.996109 2402 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 13 00:35:22.996271 kubelet[2402]: I0313 00:35:22.996154 2402 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459-2-4-n-a4844b4806","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 13 00:35:22.996271 kubelet[2402]: I0313 00:35:22.996271 2402 topology_manager.go:138] "Creating topology manager with none policy"
Mar 13 00:35:22.996515 kubelet[2402]: I0313 00:35:22.996279 2402 container_manager_linux.go:306] "Creating device plugin manager"
Mar 13 00:35:22.996515 kubelet[2402]: I0313 00:35:22.996351 2402 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Mar 13 00:35:22.998792 kubelet[2402]: I0313 00:35:22.998761 2402 state_mem.go:36] "Initialized new in-memory state store"
Mar 13 00:35:22.998971 kubelet[2402]: I0313 00:35:22.998952 2402 kubelet.go:475] "Attempting to sync node with API server"
Mar 13 00:35:22.998971 kubelet[2402]: I0313 00:35:22.998963 2402 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 13 00:35:22.999055 kubelet[2402]: I0313 00:35:22.998984 2402 kubelet.go:387] "Adding apiserver pod source"
Mar 13 00:35:22.999055 kubelet[2402]: I0313 00:35:22.998998 2402 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 13 00:35:23.003197 kubelet[2402]: I0313 00:35:23.001601 2402 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1"
Mar 13 00:35:23.003197 kubelet[2402]: I0313 00:35:23.001967 2402 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Mar 13 00:35:23.003197 kubelet[2402]: I0313 00:35:23.001983 2402 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Mar 13 00:35:23.003197 kubelet[2402]: W0313 00:35:23.002024 2402 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Mar 13 00:35:23.005230 kubelet[2402]: I0313 00:35:23.005204 2402 server.go:1262] "Started kubelet"
Mar 13 00:35:23.005352 kubelet[2402]: E0313 00:35:23.005326 2402 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://89.167.5.55:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 89.167.5.55:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Mar 13 00:35:23.005402 kubelet[2402]: E0313 00:35:23.005389 2402 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://89.167.5.55:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459-2-4-n-a4844b4806&limit=500&resourceVersion=0\": dial tcp 89.167.5.55:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Mar 13 00:35:23.007723 kubelet[2402]: I0313 00:35:23.007687 2402 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Mar 13 00:35:23.008464 kubelet[2402]: I0313 00:35:23.008402 2402 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 13 00:35:23.008464 kubelet[2402]: I0313 00:35:23.008457 2402 server_v1.go:49] "podresources" method="list" useActivePods=true
Mar 13 00:35:23.008685 kubelet[2402]: I0313 00:35:23.008659 2402 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 13 00:35:23.009268 kubelet[2402]: I0313 00:35:23.009246 2402 server.go:310] "Adding debug handlers to kubelet server"
Mar 13 00:35:23.012200 kubelet[2402]: I0313 00:35:23.012175 2402 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 13 00:35:23.012527 kubelet[2402]: E0313 00:35:23.011115 2402 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://89.167.5.55:6443/api/v1/namespaces/default/events\": dial tcp 89.167.5.55:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4459-2-4-n-a4844b4806.189c3f7487db4ae1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459-2-4-n-a4844b4806,UID:ci-4459-2-4-n-a4844b4806,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459-2-4-n-a4844b4806,},FirstTimestamp:2026-03-13 00:35:23.005184737 +0000 UTC m=+0.359235231,LastTimestamp:2026-03-13 00:35:23.005184737 +0000 UTC m=+0.359235231,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459-2-4-n-a4844b4806,}"
Mar 13 00:35:23.012832 kubelet[2402]: I0313 00:35:23.012807 2402 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 13 00:35:23.016237 kubelet[2402]: E0313 00:35:23.016206 2402 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 13 00:35:23.016412 kubelet[2402]: E0313 00:35:23.016385 2402 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459-2-4-n-a4844b4806\" not found"
Mar 13 00:35:23.016412 kubelet[2402]: I0313 00:35:23.016408 2402 volume_manager.go:313] "Starting Kubelet Volume Manager"
Mar 13 00:35:23.016536 kubelet[2402]: I0313 00:35:23.016516 2402 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Mar 13 00:35:23.016584 kubelet[2402]: I0313 00:35:23.016552 2402 reconciler.go:29] "Reconciler: start to sync state"
Mar 13 00:35:23.017310 kubelet[2402]: I0313 00:35:23.017285 2402 factory.go:223] Registration of the systemd container factory successfully
Mar 13 00:35:23.017545 kubelet[2402]: I0313 00:35:23.017520 2402 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 13 00:35:23.018017 kubelet[2402]: E0313 00:35:23.017993 2402 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://89.167.5.55:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 89.167.5.55:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Mar 13 00:35:23.018790 kubelet[2402]: E0313 00:35:23.018758 2402 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://89.167.5.55:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-4-n-a4844b4806?timeout=10s\": dial tcp 89.167.5.55:6443: connect: connection refused" interval="200ms"
Mar 13 00:35:23.020523 kubelet[2402]: I0313 00:35:23.020501 2402 factory.go:223] Registration of the containerd container factory successfully
Mar 13 00:35:23.038253 kubelet[2402]: I0313 00:35:23.038239 2402 cpu_manager.go:221] "Starting CPU manager" policy="none"
Mar 13 00:35:23.038328 kubelet[2402]: I0313 00:35:23.038322 2402 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Mar 13 00:35:23.038361 kubelet[2402]: I0313 00:35:23.038355 2402 state_mem.go:36] "Initialized new in-memory state store"
Mar 13 00:35:23.039719 kubelet[2402]: I0313 00:35:23.039708 2402 policy_none.go:49] "None policy: Start"
Mar 13 00:35:23.039774 kubelet[2402]: I0313 00:35:23.039768 2402 memory_manager.go:187] "Starting memorymanager" policy="None"
Mar 13 00:35:23.039813 kubelet[2402]: I0313 00:35:23.039805 2402 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Mar 13 00:35:23.041318 kubelet[2402]: I0313 00:35:23.041281 2402 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Mar 13 00:35:23.042084 kubelet[2402]: I0313 00:35:23.041539 2402 policy_none.go:47] "Start"
Mar 13 00:35:23.042575 kubelet[2402]: I0313 00:35:23.042548 2402 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Mar 13 00:35:23.042575 kubelet[2402]: I0313 00:35:23.042573 2402 status_manager.go:244] "Starting to sync pod status with apiserver"
Mar 13 00:35:23.042628 kubelet[2402]: I0313 00:35:23.042594 2402 kubelet.go:2428] "Starting kubelet main sync loop"
Mar 13 00:35:23.042647 kubelet[2402]: E0313 00:35:23.042628 2402 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 13 00:35:23.046762 kubelet[2402]: E0313 00:35:23.046744 2402 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://89.167.5.55:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 89.167.5.55:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Mar 13 00:35:23.049940 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Mar 13 00:35:23.058793 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Mar 13 00:35:23.061587 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Mar 13 00:35:23.080026 kubelet[2402]: E0313 00:35:23.080010 2402 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Mar 13 00:35:23.080265 kubelet[2402]: I0313 00:35:23.080254 2402 eviction_manager.go:189] "Eviction manager: starting control loop"
Mar 13 00:35:23.080984 kubelet[2402]: I0313 00:35:23.080459 2402 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 13 00:35:23.081612 kubelet[2402]: I0313 00:35:23.081588 2402 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 13 00:35:23.082538 kubelet[2402]: E0313 00:35:23.082516 2402 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Mar 13 00:35:23.082575 kubelet[2402]: E0313 00:35:23.082548 2402 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4459-2-4-n-a4844b4806\" not found"
Mar 13 00:35:23.166386 systemd[1]: Created slice kubepods-burstable-pod0a08690ab476469eb4e6e03a39e0faa4.slice - libcontainer container kubepods-burstable-pod0a08690ab476469eb4e6e03a39e0faa4.slice.
Mar 13 00:35:23.182974 kubelet[2402]: I0313 00:35:23.182921 2402 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-4-n-a4844b4806"
Mar 13 00:35:23.183754 kubelet[2402]: E0313 00:35:23.183425 2402 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://89.167.5.55:6443/api/v1/nodes\": dial tcp 89.167.5.55:6443: connect: connection refused" node="ci-4459-2-4-n-a4844b4806"
Mar 13 00:35:23.187202 kubelet[2402]: E0313 00:35:23.186986 2402 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-4-n-a4844b4806\" not found" node="ci-4459-2-4-n-a4844b4806"
Mar 13 00:35:23.194517 systemd[1]: Created slice kubepods-burstable-pod8ef6b7de82c87a9317a92437f55269ef.slice - libcontainer container kubepods-burstable-pod8ef6b7de82c87a9317a92437f55269ef.slice.
Mar 13 00:35:23.198886 kubelet[2402]: E0313 00:35:23.198858 2402 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-4-n-a4844b4806\" not found" node="ci-4459-2-4-n-a4844b4806"
Mar 13 00:35:23.202704 systemd[1]: Created slice kubepods-burstable-podb4861e9f9f6efbadea714238d98d9709.slice - libcontainer container kubepods-burstable-podb4861e9f9f6efbadea714238d98d9709.slice.
Mar 13 00:35:23.206820 kubelet[2402]: E0313 00:35:23.206760 2402 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-4-n-a4844b4806\" not found" node="ci-4459-2-4-n-a4844b4806"
Mar 13 00:35:23.217160 kubelet[2402]: I0313 00:35:23.217089 2402 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8ef6b7de82c87a9317a92437f55269ef-k8s-certs\") pod \"kube-controller-manager-ci-4459-2-4-n-a4844b4806\" (UID: \"8ef6b7de82c87a9317a92437f55269ef\") " pod="kube-system/kube-controller-manager-ci-4459-2-4-n-a4844b4806"
Mar 13 00:35:23.217288 kubelet[2402]: I0313 00:35:23.217137 2402 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8ef6b7de82c87a9317a92437f55269ef-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459-2-4-n-a4844b4806\" (UID: \"8ef6b7de82c87a9317a92437f55269ef\") " pod="kube-system/kube-controller-manager-ci-4459-2-4-n-a4844b4806"
Mar 13 00:35:23.217288 kubelet[2402]: I0313 00:35:23.217199 2402 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0a08690ab476469eb4e6e03a39e0faa4-k8s-certs\") pod \"kube-apiserver-ci-4459-2-4-n-a4844b4806\" (UID: \"0a08690ab476469eb4e6e03a39e0faa4\") " pod="kube-system/kube-apiserver-ci-4459-2-4-n-a4844b4806"
Mar 13 00:35:23.217288 kubelet[2402]: I0313 00:35:23.217222 2402 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0a08690ab476469eb4e6e03a39e0faa4-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459-2-4-n-a4844b4806\" (UID: \"0a08690ab476469eb4e6e03a39e0faa4\") " pod="kube-system/kube-apiserver-ci-4459-2-4-n-a4844b4806"
Mar 13 00:35:23.217288 kubelet[2402]: I0313 00:35:23.217248 2402 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8ef6b7de82c87a9317a92437f55269ef-kubeconfig\") pod \"kube-controller-manager-ci-4459-2-4-n-a4844b4806\" (UID: \"8ef6b7de82c87a9317a92437f55269ef\") " pod="kube-system/kube-controller-manager-ci-4459-2-4-n-a4844b4806"
Mar 13 00:35:23.217288 kubelet[2402]: I0313 00:35:23.217270 2402 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b4861e9f9f6efbadea714238d98d9709-kubeconfig\") pod \"kube-scheduler-ci-4459-2-4-n-a4844b4806\" (UID: \"b4861e9f9f6efbadea714238d98d9709\") " pod="kube-system/kube-scheduler-ci-4459-2-4-n-a4844b4806"
Mar 13 00:35:23.217641 kubelet[2402]: I0313 00:35:23.217290 2402 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0a08690ab476469eb4e6e03a39e0faa4-ca-certs\") pod \"kube-apiserver-ci-4459-2-4-n-a4844b4806\" (UID: \"0a08690ab476469eb4e6e03a39e0faa4\") " pod="kube-system/kube-apiserver-ci-4459-2-4-n-a4844b4806"
Mar 13 00:35:23.217641 kubelet[2402]: I0313 00:35:23.217311 2402 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8ef6b7de82c87a9317a92437f55269ef-ca-certs\") pod \"kube-controller-manager-ci-4459-2-4-n-a4844b4806\" (UID: \"8ef6b7de82c87a9317a92437f55269ef\") " pod="kube-system/kube-controller-manager-ci-4459-2-4-n-a4844b4806"
Mar 13 00:35:23.217641 kubelet[2402]: I0313 00:35:23.217332 2402 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8ef6b7de82c87a9317a92437f55269ef-flexvolume-dir\") pod \"kube-controller-manager-ci-4459-2-4-n-a4844b4806\" (UID: \"8ef6b7de82c87a9317a92437f55269ef\") " pod="kube-system/kube-controller-manager-ci-4459-2-4-n-a4844b4806"
Mar 13 00:35:23.219726 kubelet[2402]: E0313 00:35:23.219670 2402 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://89.167.5.55:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-4-n-a4844b4806?timeout=10s\": dial tcp 89.167.5.55:6443: connect: connection refused" interval="400ms"
Mar 13 00:35:23.386664 kubelet[2402]: I0313 00:35:23.386043 2402 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-4-n-a4844b4806"
Mar 13 00:35:23.386664 kubelet[2402]: E0313 00:35:23.386504 2402 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://89.167.5.55:6443/api/v1/nodes\": dial tcp 89.167.5.55:6443: connect: connection refused" node="ci-4459-2-4-n-a4844b4806"
Mar 13 00:35:23.490958 containerd[1630]: time="2026-03-13T00:35:23.490868207Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459-2-4-n-a4844b4806,Uid:0a08690ab476469eb4e6e03a39e0faa4,Namespace:kube-system,Attempt:0,}"
Mar 13 00:35:23.501486 containerd[1630]: time="2026-03-13T00:35:23.501387347Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459-2-4-n-a4844b4806,Uid:8ef6b7de82c87a9317a92437f55269ef,Namespace:kube-system,Attempt:0,}"
Mar 13 00:35:23.510736 containerd[1630]: time="2026-03-13T00:35:23.510634107Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459-2-4-n-a4844b4806,Uid:b4861e9f9f6efbadea714238d98d9709,Namespace:kube-system,Attempt:0,}"
Mar 13 00:35:23.620329 kubelet[2402]: E0313 00:35:23.620269 2402 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://89.167.5.55:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-4-n-a4844b4806?timeout=10s\": dial tcp 89.167.5.55:6443: connect: connection refused" interval="800ms"
Mar 13 00:35:23.790734 kubelet[2402]: I0313 00:35:23.790585 2402 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-4-n-a4844b4806"
Mar 13 00:35:23.791828 kubelet[2402]: E0313 00:35:23.791696 2402 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://89.167.5.55:6443/api/v1/nodes\": dial tcp 89.167.5.55:6443: connect: connection refused" node="ci-4459-2-4-n-a4844b4806"
Mar 13 00:35:23.882509 kubelet[2402]: E0313 00:35:23.882419 2402 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://89.167.5.55:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 89.167.5.55:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Mar 13 00:35:23.911373 kubelet[2402]: E0313 00:35:23.911305 2402 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://89.167.5.55:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 89.167.5.55:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Mar 13 00:35:23.957297 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4160347374.mount: Deactivated successfully.
Mar 13 00:35:23.968380 containerd[1630]: time="2026-03-13T00:35:23.968296937Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 13 00:35:23.969750 containerd[1630]: time="2026-03-13T00:35:23.969661187Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321160"
Mar 13 00:35:23.974179 containerd[1630]: time="2026-03-13T00:35:23.973880687Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 13 00:35:23.976564 containerd[1630]: time="2026-03-13T00:35:23.976492427Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 13 00:35:23.979183 containerd[1630]: time="2026-03-13T00:35:23.977684707Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 13 00:35:23.979568 containerd[1630]: time="2026-03-13T00:35:23.979483837Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 486.69819ms"
Mar 13 00:35:23.980747 containerd[1630]: time="2026-03-13T00:35:23.980673927Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 13 00:35:23.983091 containerd[1630]: time="2026-03-13T00:35:23.983001397Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0"
Mar 13 00:35:23.984243 containerd[1630]: time="2026-03-13T00:35:23.984189407Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0"
Mar 13 00:35:23.992556 containerd[1630]: time="2026-03-13T00:35:23.991536627Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 488.71215ms"
Mar 13 00:35:24.010011 containerd[1630]: time="2026-03-13T00:35:24.009953857Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 497.65236ms"
Mar 13 00:35:24.023085 containerd[1630]: time="2026-03-13T00:35:24.022411627Z" level=info msg="connecting to shim 33c3ecbc668c6cc2c242b6aa7f1f4b68ab4ec5c40b17780a669fc82479f18ee5" address="unix:///run/containerd/s/b260159bd06b7984706251d776cdb64031ccc907edcc86bc8d2e82bf8b539b2e" namespace=k8s.io protocol=ttrpc version=3
Mar 13 00:35:24.025400 containerd[1630]: time="2026-03-13T00:35:24.025344747Z" level=info msg="connecting to shim 3f22ae8f7a156da6cf0a2b733fa8736e2fa115813d579cc480d0c3f5a31afd14" address="unix:///run/containerd/s/3ec4b1c8782f5be72391f0e7a3a55795c0674c6000c3cccd9bdad36e4efdca02" namespace=k8s.io protocol=ttrpc version=3
Mar 13 00:35:24.050130 containerd[1630]: time="2026-03-13T00:35:24.050047827Z" level=info msg="connecting to shim 7c1f6cf2e2dbf727102ee45d3697d8ef7538d933d1a9f16bf490c5d16418919b" address="unix:///run/containerd/s/a96c2c624a18ef9936fbf8a6758508b466be5d6c34b3da3ddb6dcf77b124375f" namespace=k8s.io protocol=ttrpc version=3
Mar 13 00:35:24.052350 systemd[1]: Started cri-containerd-3f22ae8f7a156da6cf0a2b733fa8736e2fa115813d579cc480d0c3f5a31afd14.scope - libcontainer container 3f22ae8f7a156da6cf0a2b733fa8736e2fa115813d579cc480d0c3f5a31afd14.
Mar 13 00:35:24.073329 systemd[1]: Started cri-containerd-33c3ecbc668c6cc2c242b6aa7f1f4b68ab4ec5c40b17780a669fc82479f18ee5.scope - libcontainer container 33c3ecbc668c6cc2c242b6aa7f1f4b68ab4ec5c40b17780a669fc82479f18ee5.
Mar 13 00:35:24.079440 systemd[1]: Started cri-containerd-7c1f6cf2e2dbf727102ee45d3697d8ef7538d933d1a9f16bf490c5d16418919b.scope - libcontainer container 7c1f6cf2e2dbf727102ee45d3697d8ef7538d933d1a9f16bf490c5d16418919b.
Mar 13 00:35:24.132222 containerd[1630]: time="2026-03-13T00:35:24.130451547Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459-2-4-n-a4844b4806,Uid:0a08690ab476469eb4e6e03a39e0faa4,Namespace:kube-system,Attempt:0,} returns sandbox id \"3f22ae8f7a156da6cf0a2b733fa8736e2fa115813d579cc480d0c3f5a31afd14\""
Mar 13 00:35:24.139644 containerd[1630]: time="2026-03-13T00:35:24.139492087Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459-2-4-n-a4844b4806,Uid:8ef6b7de82c87a9317a92437f55269ef,Namespace:kube-system,Attempt:0,} returns sandbox id \"33c3ecbc668c6cc2c242b6aa7f1f4b68ab4ec5c40b17780a669fc82479f18ee5\""
Mar 13 00:35:24.146376 containerd[1630]: time="2026-03-13T00:35:24.146245887Z" level=info msg="CreateContainer within sandbox \"3f22ae8f7a156da6cf0a2b733fa8736e2fa115813d579cc480d0c3f5a31afd14\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Mar 13 00:35:24.147182 containerd[1630]: time="2026-03-13T00:35:24.146932227Z" level=info msg="CreateContainer within sandbox \"33c3ecbc668c6cc2c242b6aa7f1f4b68ab4ec5c40b17780a669fc82479f18ee5\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Mar 13 00:35:24.151828 containerd[1630]: time="2026-03-13T00:35:24.151806937Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459-2-4-n-a4844b4806,Uid:b4861e9f9f6efbadea714238d98d9709,Namespace:kube-system,Attempt:0,} returns sandbox id \"7c1f6cf2e2dbf727102ee45d3697d8ef7538d933d1a9f16bf490c5d16418919b\""
Mar 13 00:35:24.155344 containerd[1630]: time="2026-03-13T00:35:24.155287017Z" level=info msg="Container d613085c61cddbfa1a8e465ac8e4be5fcfacc182aee1ae791d6b8dce41f21ae3: CDI devices from CRI Config.CDIDevices: []"
Mar 13 00:35:24.156179 containerd[1630]: time="2026-03-13T00:35:24.156133147Z" level=info msg="CreateContainer within sandbox \"7c1f6cf2e2dbf727102ee45d3697d8ef7538d933d1a9f16bf490c5d16418919b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Mar 13 00:35:24.162386 containerd[1630]: time="2026-03-13T00:35:24.162311107Z" level=info msg="Container aa8f485e6fe5d6a688d7c966ebc7a3ac086521a9c4759dc1f102670e698ab8c3: CDI devices from CRI Config.CDIDevices: []"
Mar 13 00:35:24.166223 containerd[1630]: time="2026-03-13T00:35:24.166174987Z" level=info msg="Container 6d620b46a1434e3c52fa927497db9f27bfaa39f148644225ee20ffcd26ebb677: CDI devices from CRI Config.CDIDevices: []"
Mar 13 00:35:24.168675 containerd[1630]: time="2026-03-13T00:35:24.168639457Z" level=info msg="CreateContainer within sandbox \"3f22ae8f7a156da6cf0a2b733fa8736e2fa115813d579cc480d0c3f5a31afd14\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"d613085c61cddbfa1a8e465ac8e4be5fcfacc182aee1ae791d6b8dce41f21ae3\""
Mar 13 00:35:24.169679 containerd[1630]: time="2026-03-13T00:35:24.169647927Z" level=info msg="StartContainer for \"d613085c61cddbfa1a8e465ac8e4be5fcfacc182aee1ae791d6b8dce41f21ae3\""
Mar 13 00:35:24.171011 containerd[1630]: time="2026-03-13T00:35:24.170970817Z" level=info msg="connecting to shim d613085c61cddbfa1a8e465ac8e4be5fcfacc182aee1ae791d6b8dce41f21ae3" address="unix:///run/containerd/s/3ec4b1c8782f5be72391f0e7a3a55795c0674c6000c3cccd9bdad36e4efdca02" protocol=ttrpc version=3
Mar 13 00:35:24.172262 containerd[1630]: time="2026-03-13T00:35:24.171879287Z" level=info msg="CreateContainer within sandbox \"33c3ecbc668c6cc2c242b6aa7f1f4b68ab4ec5c40b17780a669fc82479f18ee5\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"aa8f485e6fe5d6a688d7c966ebc7a3ac086521a9c4759dc1f102670e698ab8c3\""
Mar 13 00:35:24.172301 containerd[1630]: time="2026-03-13T00:35:24.172267357Z" level=info msg="StartContainer for \"aa8f485e6fe5d6a688d7c966ebc7a3ac086521a9c4759dc1f102670e698ab8c3\""
Mar 13 00:35:24.172904 containerd[1630]: time="2026-03-13T00:35:24.172878247Z" level=info msg="connecting to shim aa8f485e6fe5d6a688d7c966ebc7a3ac086521a9c4759dc1f102670e698ab8c3" address="unix:///run/containerd/s/b260159bd06b7984706251d776cdb64031ccc907edcc86bc8d2e82bf8b539b2e" protocol=ttrpc version=3
Mar 13 00:35:24.177874 containerd[1630]: time="2026-03-13T00:35:24.177842787Z" level=info msg="CreateContainer within sandbox \"7c1f6cf2e2dbf727102ee45d3697d8ef7538d933d1a9f16bf490c5d16418919b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"6d620b46a1434e3c52fa927497db9f27bfaa39f148644225ee20ffcd26ebb677\""
Mar 13 00:35:24.179263 containerd[1630]: time="2026-03-13T00:35:24.179235977Z" level=info msg="StartContainer for \"6d620b46a1434e3c52fa927497db9f27bfaa39f148644225ee20ffcd26ebb677\""
Mar 13 00:35:24.179968 containerd[1630]: time="2026-03-13T00:35:24.179856807Z" level=info msg="connecting to shim 6d620b46a1434e3c52fa927497db9f27bfaa39f148644225ee20ffcd26ebb677" address="unix:///run/containerd/s/a96c2c624a18ef9936fbf8a6758508b466be5d6c34b3da3ddb6dcf77b124375f" protocol=ttrpc version=3
Mar 13 00:35:24.188260 systemd[1]: Started cri-containerd-d613085c61cddbfa1a8e465ac8e4be5fcfacc182aee1ae791d6b8dce41f21ae3.scope - libcontainer container d613085c61cddbfa1a8e465ac8e4be5fcfacc182aee1ae791d6b8dce41f21ae3.
Mar 13 00:35:24.204516 systemd[1]: Started cri-containerd-aa8f485e6fe5d6a688d7c966ebc7a3ac086521a9c4759dc1f102670e698ab8c3.scope - libcontainer container aa8f485e6fe5d6a688d7c966ebc7a3ac086521a9c4759dc1f102670e698ab8c3.
Mar 13 00:35:24.212270 systemd[1]: Started cri-containerd-6d620b46a1434e3c52fa927497db9f27bfaa39f148644225ee20ffcd26ebb677.scope - libcontainer container 6d620b46a1434e3c52fa927497db9f27bfaa39f148644225ee20ffcd26ebb677.
Mar 13 00:35:24.274616 containerd[1630]: time="2026-03-13T00:35:24.274576657Z" level=info msg="StartContainer for \"d613085c61cddbfa1a8e465ac8e4be5fcfacc182aee1ae791d6b8dce41f21ae3\" returns successfully"
Mar 13 00:35:24.275225 containerd[1630]: time="2026-03-13T00:35:24.275205717Z" level=info msg="StartContainer for \"aa8f485e6fe5d6a688d7c966ebc7a3ac086521a9c4759dc1f102670e698ab8c3\" returns successfully"
Mar 13 00:35:24.306068 containerd[1630]: time="2026-03-13T00:35:24.305584707Z" level=info msg="StartContainer for \"6d620b46a1434e3c52fa927497db9f27bfaa39f148644225ee20ffcd26ebb677\" returns successfully"
Mar 13 00:35:24.593898 kubelet[2402]: I0313 00:35:24.593634 2402 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-4-n-a4844b4806"
Mar 13 00:35:25.061652 kubelet[2402]: E0313 00:35:25.060935 2402 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-4-n-a4844b4806\" not found" node="ci-4459-2-4-n-a4844b4806"
Mar 13 00:35:25.061652 kubelet[2402]: E0313 00:35:25.061279 2402 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-4-n-a4844b4806\" not found" node="ci-4459-2-4-n-a4844b4806"
Mar 13 00:35:25.064490 kubelet[2402]: E0313 00:35:25.064452 2402 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-4-n-a4844b4806\" not found" node="ci-4459-2-4-n-a4844b4806"
Mar 13 00:35:25.753926 kubelet[2402]: E0313 00:35:25.753873 2402 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4459-2-4-n-a4844b4806\" not found" node="ci-4459-2-4-n-a4844b4806"
Mar 13 00:35:25.924799 kubelet[2402]: I0313 00:35:25.924663 2402 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459-2-4-n-a4844b4806"
Mar 13 00:35:25.924799 kubelet[2402]: E0313 00:35:25.924708 2402 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"ci-4459-2-4-n-a4844b4806\": node \"ci-4459-2-4-n-a4844b4806\" not found"
Mar 13 00:35:26.002511 kubelet[2402]: I0313 00:35:26.002467 2402 apiserver.go:52] "Watching apiserver"
Mar 13 00:35:26.017106 kubelet[2402]: I0313 00:35:26.016790 2402 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Mar 13 00:35:26.018994 kubelet[2402]: I0313 00:35:26.018966 2402 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459-2-4-n-a4844b4806"
Mar 13 00:35:26.028444 kubelet[2402]: E0313 00:35:26.028406 2402 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459-2-4-n-a4844b4806\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4459-2-4-n-a4844b4806"
Mar 13 00:35:26.028444 kubelet[2402]: I0313 00:35:26.028437 2402 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459-2-4-n-a4844b4806"
Mar 13 00:35:26.033531 kubelet[2402]: E0313 00:35:26.033501 2402 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459-2-4-n-a4844b4806\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4459-2-4-n-a4844b4806"
00:35:26.033531 kubelet[2402]: I0313 00:35:26.033521 2402 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459-2-4-n-a4844b4806" Mar 13 00:35:26.037571 kubelet[2402]: E0313 00:35:26.037543 2402 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459-2-4-n-a4844b4806\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4459-2-4-n-a4844b4806" Mar 13 00:35:26.064457 kubelet[2402]: I0313 00:35:26.064417 2402 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459-2-4-n-a4844b4806" Mar 13 00:35:26.065233 kubelet[2402]: I0313 00:35:26.065213 2402 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459-2-4-n-a4844b4806" Mar 13 00:35:26.066001 kubelet[2402]: E0313 00:35:26.065977 2402 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459-2-4-n-a4844b4806\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4459-2-4-n-a4844b4806" Mar 13 00:35:26.066449 kubelet[2402]: E0313 00:35:26.066433 2402 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459-2-4-n-a4844b4806\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4459-2-4-n-a4844b4806" Mar 13 00:35:27.594674 systemd[1]: Reload requested from client PID 2683 ('systemctl') (unit session-7.scope)... Mar 13 00:35:27.594701 systemd[1]: Reloading... Mar 13 00:35:27.703226 zram_generator::config[2730]: No configuration found. Mar 13 00:35:27.884926 systemd[1]: Reloading finished in 289 ms. Mar 13 00:35:27.922332 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 13 00:35:27.939380 systemd[1]: kubelet.service: Deactivated successfully. 
Mar 13 00:35:27.939909 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 13 00:35:27.939961 systemd[1]: kubelet.service: Consumed 733ms CPU time, 123.8M memory peak. Mar 13 00:35:27.942352 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 13 00:35:28.103655 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 13 00:35:28.109618 (kubelet)[2778]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 13 00:35:28.139574 kubelet[2778]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 13 00:35:28.140301 kubelet[2778]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 13 00:35:28.140301 kubelet[2778]: I0313 00:35:28.140123 2778 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 13 00:35:28.147794 kubelet[2778]: I0313 00:35:28.147756 2778 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Mar 13 00:35:28.147794 kubelet[2778]: I0313 00:35:28.147781 2778 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 13 00:35:28.147938 kubelet[2778]: I0313 00:35:28.147808 2778 watchdog_linux.go:95] "Systemd watchdog is not enabled" Mar 13 00:35:28.147938 kubelet[2778]: I0313 00:35:28.147823 2778 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Mar 13 00:35:28.147997 kubelet[2778]: I0313 00:35:28.147979 2778 server.go:956] "Client rotation is on, will bootstrap in background" Mar 13 00:35:28.149264 kubelet[2778]: I0313 00:35:28.149248 2778 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Mar 13 00:35:28.151629 kubelet[2778]: I0313 00:35:28.151366 2778 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 13 00:35:28.153746 kubelet[2778]: I0313 00:35:28.153737 2778 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Mar 13 00:35:28.156884 kubelet[2778]: I0313 00:35:28.156873 2778 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Mar 13 00:35:28.157119 kubelet[2778]: I0313 00:35:28.157093 2778 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 13 00:35:28.157293 kubelet[2778]: I0313 00:35:28.157190 2778 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"ci-4459-2-4-n-a4844b4806","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 13 00:35:28.157374 kubelet[2778]: I0313 00:35:28.157367 2778 topology_manager.go:138] "Creating topology manager with none policy" Mar 13 00:35:28.157406 kubelet[2778]: I0313 00:35:28.157400 2778 container_manager_linux.go:306] "Creating device plugin manager" Mar 13 00:35:28.157453 kubelet[2778]: I0313 00:35:28.157448 2778 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Mar 13 00:35:28.157648 kubelet[2778]: I0313 00:35:28.157638 2778 
state_mem.go:36] "Initialized new in-memory state store" Mar 13 00:35:28.157813 kubelet[2778]: I0313 00:35:28.157806 2778 kubelet.go:475] "Attempting to sync node with API server" Mar 13 00:35:28.158363 kubelet[2778]: I0313 00:35:28.158160 2778 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 13 00:35:28.158363 kubelet[2778]: I0313 00:35:28.158189 2778 kubelet.go:387] "Adding apiserver pod source" Mar 13 00:35:28.158363 kubelet[2778]: I0313 00:35:28.158200 2778 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 13 00:35:28.165820 kubelet[2778]: I0313 00:35:28.165806 2778 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Mar 13 00:35:28.166339 kubelet[2778]: I0313 00:35:28.166327 2778 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 13 00:35:28.168165 kubelet[2778]: I0313 00:35:28.168154 2778 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Mar 13 00:35:28.170725 kubelet[2778]: I0313 00:35:28.170708 2778 server.go:1262] "Started kubelet" Mar 13 00:35:28.171189 kubelet[2778]: I0313 00:35:28.171166 2778 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 13 00:35:28.171245 kubelet[2778]: I0313 00:35:28.171198 2778 server_v1.go:49] "podresources" method="list" useActivePods=true Mar 13 00:35:28.172502 kubelet[2778]: I0313 00:35:28.172486 2778 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 13 00:35:28.173600 kubelet[2778]: I0313 00:35:28.173584 2778 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 13 00:35:28.174542 kubelet[2778]: I0313 00:35:28.174430 2778 server.go:180] "Starting to listen" 
address="0.0.0.0" port=10250 Mar 13 00:35:28.175264 kubelet[2778]: I0313 00:35:28.175255 2778 server.go:310] "Adding debug handlers to kubelet server" Mar 13 00:35:28.176074 kubelet[2778]: I0313 00:35:28.176063 2778 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 13 00:35:28.180700 kubelet[2778]: I0313 00:35:28.180389 2778 volume_manager.go:313] "Starting Kubelet Volume Manager" Mar 13 00:35:28.180700 kubelet[2778]: I0313 00:35:28.180455 2778 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 13 00:35:28.180700 kubelet[2778]: I0313 00:35:28.180539 2778 reconciler.go:29] "Reconciler: start to sync state" Mar 13 00:35:28.182611 kubelet[2778]: E0313 00:35:28.182500 2778 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 13 00:35:28.182923 kubelet[2778]: I0313 00:35:28.182826 2778 factory.go:223] Registration of the containerd container factory successfully Mar 13 00:35:28.183022 kubelet[2778]: I0313 00:35:28.182972 2778 factory.go:223] Registration of the systemd container factory successfully Mar 13 00:35:28.183343 kubelet[2778]: I0313 00:35:28.183288 2778 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 13 00:35:28.190858 kubelet[2778]: I0313 00:35:28.190764 2778 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Mar 13 00:35:28.191766 kubelet[2778]: I0313 00:35:28.191746 2778 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Mar 13 00:35:28.191766 kubelet[2778]: I0313 00:35:28.191758 2778 status_manager.go:244] "Starting to sync pod status with apiserver" Mar 13 00:35:28.191906 kubelet[2778]: I0313 00:35:28.191775 2778 kubelet.go:2428] "Starting kubelet main sync loop" Mar 13 00:35:28.191906 kubelet[2778]: E0313 00:35:28.191805 2778 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 13 00:35:28.232328 kubelet[2778]: I0313 00:35:28.232299 2778 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 13 00:35:28.232581 kubelet[2778]: I0313 00:35:28.232499 2778 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 13 00:35:28.232581 kubelet[2778]: I0313 00:35:28.232520 2778 state_mem.go:36] "Initialized new in-memory state store" Mar 13 00:35:28.232747 kubelet[2778]: I0313 00:35:28.232721 2778 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 13 00:35:28.232793 kubelet[2778]: I0313 00:35:28.232779 2778 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 13 00:35:28.232839 kubelet[2778]: I0313 00:35:28.232834 2778 policy_none.go:49] "None policy: Start" Mar 13 00:35:28.232913 kubelet[2778]: I0313 00:35:28.232865 2778 memory_manager.go:187] "Starting memorymanager" policy="None" Mar 13 00:35:28.232913 kubelet[2778]: I0313 00:35:28.232889 2778 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Mar 13 00:35:28.233041 kubelet[2778]: I0313 00:35:28.233034 2778 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Mar 13 00:35:28.233172 kubelet[2778]: I0313 00:35:28.233078 2778 policy_none.go:47] "Start" Mar 13 00:35:28.237352 kubelet[2778]: E0313 00:35:28.237327 2778 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 13 00:35:28.237500 kubelet[2778]: I0313 00:35:28.237484 
2778 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 13 00:35:28.237537 kubelet[2778]: I0313 00:35:28.237500 2778 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 13 00:35:28.237986 kubelet[2778]: I0313 00:35:28.237919 2778 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 13 00:35:28.240193 kubelet[2778]: E0313 00:35:28.239880 2778 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 13 00:35:28.292885 kubelet[2778]: I0313 00:35:28.292835 2778 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459-2-4-n-a4844b4806" Mar 13 00:35:28.293092 kubelet[2778]: I0313 00:35:28.293080 2778 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459-2-4-n-a4844b4806" Mar 13 00:35:28.293236 kubelet[2778]: I0313 00:35:28.293123 2778 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459-2-4-n-a4844b4806" Mar 13 00:35:28.345835 kubelet[2778]: I0313 00:35:28.345768 2778 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-4-n-a4844b4806" Mar 13 00:35:28.359087 kubelet[2778]: I0313 00:35:28.359000 2778 kubelet_node_status.go:124] "Node was previously registered" node="ci-4459-2-4-n-a4844b4806" Mar 13 00:35:28.359581 kubelet[2778]: I0313 00:35:28.359101 2778 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459-2-4-n-a4844b4806" Mar 13 00:35:28.381848 kubelet[2778]: I0313 00:35:28.381792 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0a08690ab476469eb4e6e03a39e0faa4-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459-2-4-n-a4844b4806\" (UID: \"0a08690ab476469eb4e6e03a39e0faa4\") " 
pod="kube-system/kube-apiserver-ci-4459-2-4-n-a4844b4806" Mar 13 00:35:28.381848 kubelet[2778]: I0313 00:35:28.381842 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8ef6b7de82c87a9317a92437f55269ef-k8s-certs\") pod \"kube-controller-manager-ci-4459-2-4-n-a4844b4806\" (UID: \"8ef6b7de82c87a9317a92437f55269ef\") " pod="kube-system/kube-controller-manager-ci-4459-2-4-n-a4844b4806" Mar 13 00:35:28.381848 kubelet[2778]: I0313 00:35:28.381869 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8ef6b7de82c87a9317a92437f55269ef-kubeconfig\") pod \"kube-controller-manager-ci-4459-2-4-n-a4844b4806\" (UID: \"8ef6b7de82c87a9317a92437f55269ef\") " pod="kube-system/kube-controller-manager-ci-4459-2-4-n-a4844b4806" Mar 13 00:35:28.381848 kubelet[2778]: I0313 00:35:28.381896 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b4861e9f9f6efbadea714238d98d9709-kubeconfig\") pod \"kube-scheduler-ci-4459-2-4-n-a4844b4806\" (UID: \"b4861e9f9f6efbadea714238d98d9709\") " pod="kube-system/kube-scheduler-ci-4459-2-4-n-a4844b4806" Mar 13 00:35:28.381848 kubelet[2778]: I0313 00:35:28.381921 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8ef6b7de82c87a9317a92437f55269ef-ca-certs\") pod \"kube-controller-manager-ci-4459-2-4-n-a4844b4806\" (UID: \"8ef6b7de82c87a9317a92437f55269ef\") " pod="kube-system/kube-controller-manager-ci-4459-2-4-n-a4844b4806" Mar 13 00:35:28.382662 kubelet[2778]: I0313 00:35:28.381970 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/8ef6b7de82c87a9317a92437f55269ef-flexvolume-dir\") pod \"kube-controller-manager-ci-4459-2-4-n-a4844b4806\" (UID: \"8ef6b7de82c87a9317a92437f55269ef\") " pod="kube-system/kube-controller-manager-ci-4459-2-4-n-a4844b4806" Mar 13 00:35:28.382662 kubelet[2778]: I0313 00:35:28.381991 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8ef6b7de82c87a9317a92437f55269ef-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459-2-4-n-a4844b4806\" (UID: \"8ef6b7de82c87a9317a92437f55269ef\") " pod="kube-system/kube-controller-manager-ci-4459-2-4-n-a4844b4806" Mar 13 00:35:28.382662 kubelet[2778]: I0313 00:35:28.382046 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0a08690ab476469eb4e6e03a39e0faa4-ca-certs\") pod \"kube-apiserver-ci-4459-2-4-n-a4844b4806\" (UID: \"0a08690ab476469eb4e6e03a39e0faa4\") " pod="kube-system/kube-apiserver-ci-4459-2-4-n-a4844b4806" Mar 13 00:35:28.382662 kubelet[2778]: I0313 00:35:28.382066 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0a08690ab476469eb4e6e03a39e0faa4-k8s-certs\") pod \"kube-apiserver-ci-4459-2-4-n-a4844b4806\" (UID: \"0a08690ab476469eb4e6e03a39e0faa4\") " pod="kube-system/kube-apiserver-ci-4459-2-4-n-a4844b4806" Mar 13 00:35:28.607900 sudo[2816]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Mar 13 00:35:28.608696 sudo[2816]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Mar 13 00:35:28.869302 sudo[2816]: pam_unix(sudo:session): session closed for user root Mar 13 00:35:29.159849 kubelet[2778]: I0313 00:35:29.159745 2778 apiserver.go:52] "Watching apiserver" Mar 13 00:35:29.181302 kubelet[2778]: 
I0313 00:35:29.181223 2778 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 13 00:35:29.216016 kubelet[2778]: I0313 00:35:29.215835 2778 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459-2-4-n-a4844b4806" Mar 13 00:35:29.221883 kubelet[2778]: E0313 00:35:29.221788 2778 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459-2-4-n-a4844b4806\" already exists" pod="kube-system/kube-controller-manager-ci-4459-2-4-n-a4844b4806" Mar 13 00:35:29.231421 kubelet[2778]: I0313 00:35:29.231277 2778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4459-2-4-n-a4844b4806" podStartSLOduration=1.231262327 podStartE2EDuration="1.231262327s" podCreationTimestamp="2026-03-13 00:35:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 00:35:29.230998917 +0000 UTC m=+1.117933571" watchObservedRunningTime="2026-03-13 00:35:29.231262327 +0000 UTC m=+1.118196971" Mar 13 00:35:29.238400 kubelet[2778]: I0313 00:35:29.238359 2778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4459-2-4-n-a4844b4806" podStartSLOduration=1.238347327 podStartE2EDuration="1.238347327s" podCreationTimestamp="2026-03-13 00:35:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 00:35:29.237705327 +0000 UTC m=+1.124639971" watchObservedRunningTime="2026-03-13 00:35:29.238347327 +0000 UTC m=+1.125281961" Mar 13 00:35:29.245373 kubelet[2778]: I0313 00:35:29.245272 2778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4459-2-4-n-a4844b4806" podStartSLOduration=1.245262317 podStartE2EDuration="1.245262317s" 
podCreationTimestamp="2026-03-13 00:35:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 00:35:29.244620857 +0000 UTC m=+1.131555511" watchObservedRunningTime="2026-03-13 00:35:29.245262317 +0000 UTC m=+1.132196951" Mar 13 00:35:30.047034 sudo[1845]: pam_unix(sudo:session): session closed for user root Mar 13 00:35:30.165360 sshd[1844]: Connection closed by 4.153.228.146 port 35472 Mar 13 00:35:30.167121 sshd-session[1841]: pam_unix(sshd:session): session closed for user core Mar 13 00:35:30.174652 systemd-logind[1603]: Session 7 logged out. Waiting for processes to exit. Mar 13 00:35:30.176170 systemd[1]: sshd@6-89.167.5.55:22-4.153.228.146:35472.service: Deactivated successfully. Mar 13 00:35:30.180371 systemd[1]: session-7.scope: Deactivated successfully. Mar 13 00:35:30.180839 systemd[1]: session-7.scope: Consumed 4.023s CPU time, 272.9M memory peak. Mar 13 00:35:30.185653 systemd-logind[1603]: Removed session 7. Mar 13 00:35:34.529312 kubelet[2778]: I0313 00:35:34.529232 2778 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 13 00:35:34.530340 containerd[1630]: time="2026-03-13T00:35:34.530288616Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 13 00:35:34.532044 kubelet[2778]: I0313 00:35:34.530659 2778 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 13 00:35:35.631328 systemd[1]: Created slice kubepods-besteffort-pod96c989ca_9a6c_4207_85df_f4c32874da45.slice - libcontainer container kubepods-besteffort-pod96c989ca_9a6c_4207_85df_f4c32874da45.slice. 
Mar 13 00:35:35.634990 kubelet[2778]: I0313 00:35:35.634950 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/96c989ca-9a6c-4207-85df-f4c32874da45-kube-proxy\") pod \"kube-proxy-qc5bv\" (UID: \"96c989ca-9a6c-4207-85df-f4c32874da45\") " pod="kube-system/kube-proxy-qc5bv" Mar 13 00:35:35.635536 kubelet[2778]: I0313 00:35:35.634992 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/96c989ca-9a6c-4207-85df-f4c32874da45-xtables-lock\") pod \"kube-proxy-qc5bv\" (UID: \"96c989ca-9a6c-4207-85df-f4c32874da45\") " pod="kube-system/kube-proxy-qc5bv" Mar 13 00:35:35.635536 kubelet[2778]: I0313 00:35:35.635012 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxb56\" (UniqueName: \"kubernetes.io/projected/96c989ca-9a6c-4207-85df-f4c32874da45-kube-api-access-pxb56\") pod \"kube-proxy-qc5bv\" (UID: \"96c989ca-9a6c-4207-85df-f4c32874da45\") " pod="kube-system/kube-proxy-qc5bv" Mar 13 00:35:35.635536 kubelet[2778]: I0313 00:35:35.635033 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/96c989ca-9a6c-4207-85df-f4c32874da45-lib-modules\") pod \"kube-proxy-qc5bv\" (UID: \"96c989ca-9a6c-4207-85df-f4c32874da45\") " pod="kube-system/kube-proxy-qc5bv" Mar 13 00:35:35.647085 systemd[1]: Created slice kubepods-burstable-pod1685953d_e272_42dc_bb87_e44d2fb34ca8.slice - libcontainer container kubepods-burstable-pod1685953d_e272_42dc_bb87_e44d2fb34ca8.slice. Mar 13 00:35:35.716928 systemd[1]: Created slice kubepods-besteffort-pod92930507_60a7_45ed_a90c_f0e48f25a207.slice - libcontainer container kubepods-besteffort-pod92930507_60a7_45ed_a90c_f0e48f25a207.slice. 
Mar 13 00:35:35.736741 kubelet[2778]: I0313 00:35:35.735267 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1685953d-e272-42dc-bb87-e44d2fb34ca8-lib-modules\") pod \"cilium-wdtsv\" (UID: \"1685953d-e272-42dc-bb87-e44d2fb34ca8\") " pod="kube-system/cilium-wdtsv" Mar 13 00:35:35.736741 kubelet[2778]: I0313 00:35:35.735300 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1685953d-e272-42dc-bb87-e44d2fb34ca8-clustermesh-secrets\") pod \"cilium-wdtsv\" (UID: \"1685953d-e272-42dc-bb87-e44d2fb34ca8\") " pod="kube-system/cilium-wdtsv" Mar 13 00:35:35.736741 kubelet[2778]: I0313 00:35:35.735314 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/92930507-60a7-45ed-a90c-f0e48f25a207-cilium-config-path\") pod \"cilium-operator-6f9c7c5859-7m7lq\" (UID: \"92930507-60a7-45ed-a90c-f0e48f25a207\") " pod="kube-system/cilium-operator-6f9c7c5859-7m7lq" Mar 13 00:35:35.736741 kubelet[2778]: I0313 00:35:35.735325 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p66bx\" (UniqueName: \"kubernetes.io/projected/92930507-60a7-45ed-a90c-f0e48f25a207-kube-api-access-p66bx\") pod \"cilium-operator-6f9c7c5859-7m7lq\" (UID: \"92930507-60a7-45ed-a90c-f0e48f25a207\") " pod="kube-system/cilium-operator-6f9c7c5859-7m7lq" Mar 13 00:35:35.736741 kubelet[2778]: I0313 00:35:35.735365 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1685953d-e272-42dc-bb87-e44d2fb34ca8-hubble-tls\") pod \"cilium-wdtsv\" (UID: \"1685953d-e272-42dc-bb87-e44d2fb34ca8\") " pod="kube-system/cilium-wdtsv" Mar 13 00:35:35.736926 
kubelet[2778]: I0313 00:35:35.735376 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1685953d-e272-42dc-bb87-e44d2fb34ca8-cilium-cgroup\") pod \"cilium-wdtsv\" (UID: \"1685953d-e272-42dc-bb87-e44d2fb34ca8\") " pod="kube-system/cilium-wdtsv" Mar 13 00:35:35.736926 kubelet[2778]: I0313 00:35:35.735386 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1685953d-e272-42dc-bb87-e44d2fb34ca8-cilium-run\") pod \"cilium-wdtsv\" (UID: \"1685953d-e272-42dc-bb87-e44d2fb34ca8\") " pod="kube-system/cilium-wdtsv" Mar 13 00:35:35.736926 kubelet[2778]: I0313 00:35:35.735409 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1685953d-e272-42dc-bb87-e44d2fb34ca8-cilium-config-path\") pod \"cilium-wdtsv\" (UID: \"1685953d-e272-42dc-bb87-e44d2fb34ca8\") " pod="kube-system/cilium-wdtsv" Mar 13 00:35:35.736926 kubelet[2778]: I0313 00:35:35.735418 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1685953d-e272-42dc-bb87-e44d2fb34ca8-host-proc-sys-net\") pod \"cilium-wdtsv\" (UID: \"1685953d-e272-42dc-bb87-e44d2fb34ca8\") " pod="kube-system/cilium-wdtsv" Mar 13 00:35:35.736926 kubelet[2778]: I0313 00:35:35.735430 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1685953d-e272-42dc-bb87-e44d2fb34ca8-host-proc-sys-kernel\") pod \"cilium-wdtsv\" (UID: \"1685953d-e272-42dc-bb87-e44d2fb34ca8\") " pod="kube-system/cilium-wdtsv" Mar 13 00:35:35.737013 kubelet[2778]: I0313 00:35:35.735441 2778 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mccxh\" (UniqueName: \"kubernetes.io/projected/1685953d-e272-42dc-bb87-e44d2fb34ca8-kube-api-access-mccxh\") pod \"cilium-wdtsv\" (UID: \"1685953d-e272-42dc-bb87-e44d2fb34ca8\") " pod="kube-system/cilium-wdtsv" Mar 13 00:35:35.737013 kubelet[2778]: I0313 00:35:35.735455 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1685953d-e272-42dc-bb87-e44d2fb34ca8-bpf-maps\") pod \"cilium-wdtsv\" (UID: \"1685953d-e272-42dc-bb87-e44d2fb34ca8\") " pod="kube-system/cilium-wdtsv" Mar 13 00:35:35.737013 kubelet[2778]: I0313 00:35:35.735465 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1685953d-e272-42dc-bb87-e44d2fb34ca8-etc-cni-netd\") pod \"cilium-wdtsv\" (UID: \"1685953d-e272-42dc-bb87-e44d2fb34ca8\") " pod="kube-system/cilium-wdtsv" Mar 13 00:35:35.737013 kubelet[2778]: I0313 00:35:35.735474 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1685953d-e272-42dc-bb87-e44d2fb34ca8-xtables-lock\") pod \"cilium-wdtsv\" (UID: \"1685953d-e272-42dc-bb87-e44d2fb34ca8\") " pod="kube-system/cilium-wdtsv" Mar 13 00:35:35.737013 kubelet[2778]: I0313 00:35:35.735491 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1685953d-e272-42dc-bb87-e44d2fb34ca8-hostproc\") pod \"cilium-wdtsv\" (UID: \"1685953d-e272-42dc-bb87-e44d2fb34ca8\") " pod="kube-system/cilium-wdtsv" Mar 13 00:35:35.737013 kubelet[2778]: I0313 00:35:35.735501 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/1685953d-e272-42dc-bb87-e44d2fb34ca8-cni-path\") pod \"cilium-wdtsv\" (UID: \"1685953d-e272-42dc-bb87-e44d2fb34ca8\") " pod="kube-system/cilium-wdtsv" Mar 13 00:35:35.943533 containerd[1630]: time="2026-03-13T00:35:35.943439986Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qc5bv,Uid:96c989ca-9a6c-4207-85df-f4c32874da45,Namespace:kube-system,Attempt:0,}" Mar 13 00:35:35.954312 containerd[1630]: time="2026-03-13T00:35:35.954117206Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wdtsv,Uid:1685953d-e272-42dc-bb87-e44d2fb34ca8,Namespace:kube-system,Attempt:0,}" Mar 13 00:35:35.958660 containerd[1630]: time="2026-03-13T00:35:35.958620796Z" level=info msg="connecting to shim ee299135a96dd57716d92572614baded4b8c3853f99ad577080bdbfbface898f" address="unix:///run/containerd/s/99934cb6484d079050023f55fb99278f39ba0ea98d230e526f2e6a2a6459f098" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:35:35.977531 containerd[1630]: time="2026-03-13T00:35:35.977499906Z" level=info msg="connecting to shim 10d8d7f80d483b860150838edec122c6a660ac3d6ae18559d4cb51fee92fd94c" address="unix:///run/containerd/s/1d64774b7cb29c8620c1121185b3a5df6e23abdea648303a8625d92d60b6b456" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:35:35.981266 systemd[1]: Started cri-containerd-ee299135a96dd57716d92572614baded4b8c3853f99ad577080bdbfbface898f.scope - libcontainer container ee299135a96dd57716d92572614baded4b8c3853f99ad577080bdbfbface898f. Mar 13 00:35:36.002268 systemd[1]: Started cri-containerd-10d8d7f80d483b860150838edec122c6a660ac3d6ae18559d4cb51fee92fd94c.scope - libcontainer container 10d8d7f80d483b860150838edec122c6a660ac3d6ae18559d4cb51fee92fd94c. 
Mar 13 00:35:36.011571 containerd[1630]: time="2026-03-13T00:35:36.011534226Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qc5bv,Uid:96c989ca-9a6c-4207-85df-f4c32874da45,Namespace:kube-system,Attempt:0,} returns sandbox id \"ee299135a96dd57716d92572614baded4b8c3853f99ad577080bdbfbface898f\"" Mar 13 00:35:36.017659 containerd[1630]: time="2026-03-13T00:35:36.017527396Z" level=info msg="CreateContainer within sandbox \"ee299135a96dd57716d92572614baded4b8c3853f99ad577080bdbfbface898f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 13 00:35:36.025975 containerd[1630]: time="2026-03-13T00:35:36.025953006Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-7m7lq,Uid:92930507-60a7-45ed-a90c-f0e48f25a207,Namespace:kube-system,Attempt:0,}" Mar 13 00:35:36.029481 containerd[1630]: time="2026-03-13T00:35:36.029466066Z" level=info msg="Container f02b01a31a9b19ab5cced4996ae76e9c71f5367d8847c10ccc704cd56769e1d0: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:35:36.031937 containerd[1630]: time="2026-03-13T00:35:36.031912416Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wdtsv,Uid:1685953d-e272-42dc-bb87-e44d2fb34ca8,Namespace:kube-system,Attempt:0,} returns sandbox id \"10d8d7f80d483b860150838edec122c6a660ac3d6ae18559d4cb51fee92fd94c\"" Mar 13 00:35:36.033874 containerd[1630]: time="2026-03-13T00:35:36.033801586Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Mar 13 00:35:36.046052 containerd[1630]: time="2026-03-13T00:35:36.046017566Z" level=info msg="CreateContainer within sandbox \"ee299135a96dd57716d92572614baded4b8c3853f99ad577080bdbfbface898f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f02b01a31a9b19ab5cced4996ae76e9c71f5367d8847c10ccc704cd56769e1d0\"" Mar 13 00:35:36.046836 containerd[1630]: time="2026-03-13T00:35:36.046767716Z" level=info 
msg="StartContainer for \"f02b01a31a9b19ab5cced4996ae76e9c71f5367d8847c10ccc704cd56769e1d0\"" Mar 13 00:35:36.048037 containerd[1630]: time="2026-03-13T00:35:36.048016466Z" level=info msg="connecting to shim f02b01a31a9b19ab5cced4996ae76e9c71f5367d8847c10ccc704cd56769e1d0" address="unix:///run/containerd/s/99934cb6484d079050023f55fb99278f39ba0ea98d230e526f2e6a2a6459f098" protocol=ttrpc version=3 Mar 13 00:35:36.052906 containerd[1630]: time="2026-03-13T00:35:36.052881956Z" level=info msg="connecting to shim 291a1bf406283c2a8b40059f9a20576a60217981a01b8e500aa9382dd0490483" address="unix:///run/containerd/s/5f52a34a4da51c17124b67d4b15212149488377015d03d26dd606e7fff079c79" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:35:36.068305 systemd[1]: Started cri-containerd-f02b01a31a9b19ab5cced4996ae76e9c71f5367d8847c10ccc704cd56769e1d0.scope - libcontainer container f02b01a31a9b19ab5cced4996ae76e9c71f5367d8847c10ccc704cd56769e1d0. Mar 13 00:35:36.072011 systemd[1]: Started cri-containerd-291a1bf406283c2a8b40059f9a20576a60217981a01b8e500aa9382dd0490483.scope - libcontainer container 291a1bf406283c2a8b40059f9a20576a60217981a01b8e500aa9382dd0490483. 
Mar 13 00:35:36.119099 containerd[1630]: time="2026-03-13T00:35:36.119067256Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-7m7lq,Uid:92930507-60a7-45ed-a90c-f0e48f25a207,Namespace:kube-system,Attempt:0,} returns sandbox id \"291a1bf406283c2a8b40059f9a20576a60217981a01b8e500aa9382dd0490483\"" Mar 13 00:35:36.130186 containerd[1630]: time="2026-03-13T00:35:36.130130486Z" level=info msg="StartContainer for \"f02b01a31a9b19ab5cced4996ae76e9c71f5367d8847c10ccc704cd56769e1d0\" returns successfully" Mar 13 00:35:36.240950 kubelet[2778]: I0313 00:35:36.240885 2778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-qc5bv" podStartSLOduration=1.240872776 podStartE2EDuration="1.240872776s" podCreationTimestamp="2026-03-13 00:35:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 00:35:36.239627146 +0000 UTC m=+8.126561790" watchObservedRunningTime="2026-03-13 00:35:36.240872776 +0000 UTC m=+8.127807420" Mar 13 00:35:42.492995 systemd-resolved[1504]: Clock change detected. Flushing caches. Mar 13 00:35:42.493316 systemd-timesyncd[1545]: Contacted time server 217.197.91.176:123 (2.flatcar.pool.ntp.org). Mar 13 00:35:42.493381 systemd-timesyncd[1545]: Initial clock synchronization to Fri 2026-03-13 00:35:42.492399 UTC. Mar 13 00:35:44.214536 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1762818563.mount: Deactivated successfully. 
Mar 13 00:35:45.622612 containerd[1630]: time="2026-03-13T00:35:45.622551021Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:35:45.623598 containerd[1630]: time="2026-03-13T00:35:45.623556871Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Mar 13 00:35:45.624259 containerd[1630]: time="2026-03-13T00:35:45.624053291Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:35:45.625050 containerd[1630]: time="2026-03-13T00:35:45.625023321Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 8.149715099s" Mar 13 00:35:45.625089 containerd[1630]: time="2026-03-13T00:35:45.625051321Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Mar 13 00:35:45.626232 containerd[1630]: time="2026-03-13T00:35:45.626206791Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Mar 13 00:35:45.629604 containerd[1630]: time="2026-03-13T00:35:45.629577231Z" level=info msg="CreateContainer within sandbox \"10d8d7f80d483b860150838edec122c6a660ac3d6ae18559d4cb51fee92fd94c\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 13 00:35:45.637963 containerd[1630]: time="2026-03-13T00:35:45.637944431Z" level=info msg="Container 0e79511dca42ada743b348d583e509659c848600b140287ed54207990b9d4902: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:35:45.640190 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1755606724.mount: Deactivated successfully. Mar 13 00:35:45.647065 containerd[1630]: time="2026-03-13T00:35:45.646967191Z" level=info msg="CreateContainer within sandbox \"10d8d7f80d483b860150838edec122c6a660ac3d6ae18559d4cb51fee92fd94c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"0e79511dca42ada743b348d583e509659c848600b140287ed54207990b9d4902\"" Mar 13 00:35:45.647638 containerd[1630]: time="2026-03-13T00:35:45.647592601Z" level=info msg="StartContainer for \"0e79511dca42ada743b348d583e509659c848600b140287ed54207990b9d4902\"" Mar 13 00:35:45.648397 containerd[1630]: time="2026-03-13T00:35:45.648370211Z" level=info msg="connecting to shim 0e79511dca42ada743b348d583e509659c848600b140287ed54207990b9d4902" address="unix:///run/containerd/s/1d64774b7cb29c8620c1121185b3a5df6e23abdea648303a8625d92d60b6b456" protocol=ttrpc version=3 Mar 13 00:35:45.666729 systemd[1]: Started cri-containerd-0e79511dca42ada743b348d583e509659c848600b140287ed54207990b9d4902.scope - libcontainer container 0e79511dca42ada743b348d583e509659c848600b140287ed54207990b9d4902. Mar 13 00:35:45.692498 containerd[1630]: time="2026-03-13T00:35:45.692445451Z" level=info msg="StartContainer for \"0e79511dca42ada743b348d583e509659c848600b140287ed54207990b9d4902\" returns successfully" Mar 13 00:35:45.712053 systemd[1]: cri-containerd-0e79511dca42ada743b348d583e509659c848600b140287ed54207990b9d4902.scope: Deactivated successfully. 
Mar 13 00:35:45.713562 containerd[1630]: time="2026-03-13T00:35:45.713512101Z" level=info msg="received container exit event container_id:\"0e79511dca42ada743b348d583e509659c848600b140287ed54207990b9d4902\" id:\"0e79511dca42ada743b348d583e509659c848600b140287ed54207990b9d4902\" pid:3200 exited_at:{seconds:1773362145 nanos:712515881}" Mar 13 00:35:45.740427 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0e79511dca42ada743b348d583e509659c848600b140287ed54207990b9d4902-rootfs.mount: Deactivated successfully. Mar 13 00:35:46.711225 containerd[1630]: time="2026-03-13T00:35:46.711158631Z" level=info msg="CreateContainer within sandbox \"10d8d7f80d483b860150838edec122c6a660ac3d6ae18559d4cb51fee92fd94c\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 13 00:35:46.731003 containerd[1630]: time="2026-03-13T00:35:46.728510651Z" level=info msg="Container a9ee74138e5a4c36abe239f2e3363ee24c227bd5ee0653c7248a0a3ba8977d93: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:35:46.738249 containerd[1630]: time="2026-03-13T00:35:46.738178451Z" level=info msg="CreateContainer within sandbox \"10d8d7f80d483b860150838edec122c6a660ac3d6ae18559d4cb51fee92fd94c\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"a9ee74138e5a4c36abe239f2e3363ee24c227bd5ee0653c7248a0a3ba8977d93\"" Mar 13 00:35:46.739741 containerd[1630]: time="2026-03-13T00:35:46.739720181Z" level=info msg="StartContainer for \"a9ee74138e5a4c36abe239f2e3363ee24c227bd5ee0653c7248a0a3ba8977d93\"" Mar 13 00:35:46.741390 containerd[1630]: time="2026-03-13T00:35:46.741168661Z" level=info msg="connecting to shim a9ee74138e5a4c36abe239f2e3363ee24c227bd5ee0653c7248a0a3ba8977d93" address="unix:///run/containerd/s/1d64774b7cb29c8620c1121185b3a5df6e23abdea648303a8625d92d60b6b456" protocol=ttrpc version=3 Mar 13 00:35:46.769925 systemd[1]: Started cri-containerd-a9ee74138e5a4c36abe239f2e3363ee24c227bd5ee0653c7248a0a3ba8977d93.scope - libcontainer 
container a9ee74138e5a4c36abe239f2e3363ee24c227bd5ee0653c7248a0a3ba8977d93. Mar 13 00:35:46.809707 containerd[1630]: time="2026-03-13T00:35:46.809600811Z" level=info msg="StartContainer for \"a9ee74138e5a4c36abe239f2e3363ee24c227bd5ee0653c7248a0a3ba8977d93\" returns successfully" Mar 13 00:35:46.823694 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 13 00:35:46.823948 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 13 00:35:46.824343 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Mar 13 00:35:46.827888 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 13 00:35:46.829725 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Mar 13 00:35:46.830060 systemd[1]: cri-containerd-a9ee74138e5a4c36abe239f2e3363ee24c227bd5ee0653c7248a0a3ba8977d93.scope: Deactivated successfully. Mar 13 00:35:46.832034 containerd[1630]: time="2026-03-13T00:35:46.831956641Z" level=info msg="received container exit event container_id:\"a9ee74138e5a4c36abe239f2e3363ee24c227bd5ee0653c7248a0a3ba8977d93\" id:\"a9ee74138e5a4c36abe239f2e3363ee24c227bd5ee0653c7248a0a3ba8977d93\" pid:3243 exited_at:{seconds:1773362146 nanos:830478671}" Mar 13 00:35:46.852921 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 13 00:35:47.719317 containerd[1630]: time="2026-03-13T00:35:47.718442891Z" level=info msg="CreateContainer within sandbox \"10d8d7f80d483b860150838edec122c6a660ac3d6ae18559d4cb51fee92fd94c\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 13 00:35:47.725599 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a9ee74138e5a4c36abe239f2e3363ee24c227bd5ee0653c7248a0a3ba8977d93-rootfs.mount: Deactivated successfully. 
Mar 13 00:35:47.735723 containerd[1630]: time="2026-03-13T00:35:47.733710961Z" level=info msg="Container 3b799423154771b3aacc0f8c47e8c508eb9a70fc6f6ed077781ac5cb9a476563: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:35:47.742878 containerd[1630]: time="2026-03-13T00:35:47.742851261Z" level=info msg="CreateContainer within sandbox \"10d8d7f80d483b860150838edec122c6a660ac3d6ae18559d4cb51fee92fd94c\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"3b799423154771b3aacc0f8c47e8c508eb9a70fc6f6ed077781ac5cb9a476563\"" Mar 13 00:35:47.744153 containerd[1630]: time="2026-03-13T00:35:47.744130851Z" level=info msg="StartContainer for \"3b799423154771b3aacc0f8c47e8c508eb9a70fc6f6ed077781ac5cb9a476563\"" Mar 13 00:35:47.747187 containerd[1630]: time="2026-03-13T00:35:47.747168591Z" level=info msg="connecting to shim 3b799423154771b3aacc0f8c47e8c508eb9a70fc6f6ed077781ac5cb9a476563" address="unix:///run/containerd/s/1d64774b7cb29c8620c1121185b3a5df6e23abdea648303a8625d92d60b6b456" protocol=ttrpc version=3 Mar 13 00:35:47.772746 systemd[1]: Started cri-containerd-3b799423154771b3aacc0f8c47e8c508eb9a70fc6f6ed077781ac5cb9a476563.scope - libcontainer container 3b799423154771b3aacc0f8c47e8c508eb9a70fc6f6ed077781ac5cb9a476563. Mar 13 00:35:47.830360 containerd[1630]: time="2026-03-13T00:35:47.830303791Z" level=info msg="StartContainer for \"3b799423154771b3aacc0f8c47e8c508eb9a70fc6f6ed077781ac5cb9a476563\" returns successfully" Mar 13 00:35:47.836732 systemd[1]: cri-containerd-3b799423154771b3aacc0f8c47e8c508eb9a70fc6f6ed077781ac5cb9a476563.scope: Deactivated successfully. 
Mar 13 00:35:47.840405 containerd[1630]: time="2026-03-13T00:35:47.840373331Z" level=info msg="received container exit event container_id:\"3b799423154771b3aacc0f8c47e8c508eb9a70fc6f6ed077781ac5cb9a476563\" id:\"3b799423154771b3aacc0f8c47e8c508eb9a70fc6f6ed077781ac5cb9a476563\" pid:3303 exited_at:{seconds:1773362147 nanos:839386391}" Mar 13 00:35:47.874425 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3b799423154771b3aacc0f8c47e8c508eb9a70fc6f6ed077781ac5cb9a476563-rootfs.mount: Deactivated successfully. Mar 13 00:35:48.061927 containerd[1630]: time="2026-03-13T00:35:48.061814121Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:35:48.062769 containerd[1630]: time="2026-03-13T00:35:48.062739591Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Mar 13 00:35:48.063675 containerd[1630]: time="2026-03-13T00:35:48.063614331Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:35:48.064354 containerd[1630]: time="2026-03-13T00:35:48.064333511Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.43804415s" Mar 13 00:35:48.064387 containerd[1630]: time="2026-03-13T00:35:48.064358411Z" level=info msg="PullImage 
\"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Mar 13 00:35:48.068306 containerd[1630]: time="2026-03-13T00:35:48.068262601Z" level=info msg="CreateContainer within sandbox \"291a1bf406283c2a8b40059f9a20576a60217981a01b8e500aa9382dd0490483\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Mar 13 00:35:48.075620 containerd[1630]: time="2026-03-13T00:35:48.075207691Z" level=info msg="Container 9d4ead92ec6ae8667403960e90808dd27bb6be765e97b288c2dd3344089ab938: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:35:48.093850 containerd[1630]: time="2026-03-13T00:35:48.093799701Z" level=info msg="CreateContainer within sandbox \"291a1bf406283c2a8b40059f9a20576a60217981a01b8e500aa9382dd0490483\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"9d4ead92ec6ae8667403960e90808dd27bb6be765e97b288c2dd3344089ab938\"" Mar 13 00:35:48.094390 containerd[1630]: time="2026-03-13T00:35:48.094345551Z" level=info msg="StartContainer for \"9d4ead92ec6ae8667403960e90808dd27bb6be765e97b288c2dd3344089ab938\"" Mar 13 00:35:48.095273 containerd[1630]: time="2026-03-13T00:35:48.095158661Z" level=info msg="connecting to shim 9d4ead92ec6ae8667403960e90808dd27bb6be765e97b288c2dd3344089ab938" address="unix:///run/containerd/s/5f52a34a4da51c17124b67d4b15212149488377015d03d26dd606e7fff079c79" protocol=ttrpc version=3 Mar 13 00:35:48.118873 systemd[1]: Started cri-containerd-9d4ead92ec6ae8667403960e90808dd27bb6be765e97b288c2dd3344089ab938.scope - libcontainer container 9d4ead92ec6ae8667403960e90808dd27bb6be765e97b288c2dd3344089ab938. 
Mar 13 00:35:48.146879 containerd[1630]: time="2026-03-13T00:35:48.146617501Z" level=info msg="StartContainer for \"9d4ead92ec6ae8667403960e90808dd27bb6be765e97b288c2dd3344089ab938\" returns successfully" Mar 13 00:35:48.149829 update_engine[1606]: I20260313 00:35:48.149664 1606 update_attempter.cc:509] Updating boot flags... Mar 13 00:35:48.720214 containerd[1630]: time="2026-03-13T00:35:48.720171951Z" level=info msg="CreateContainer within sandbox \"10d8d7f80d483b860150838edec122c6a660ac3d6ae18559d4cb51fee92fd94c\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Mar 13 00:35:48.735757 containerd[1630]: time="2026-03-13T00:35:48.732991801Z" level=info msg="Container d67827ad9076dc2c7eba3b9e94a4c2061b0e486d9ac6689a1856bd38e48dd743: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:35:48.740171 containerd[1630]: time="2026-03-13T00:35:48.740132501Z" level=info msg="CreateContainer within sandbox \"10d8d7f80d483b860150838edec122c6a660ac3d6ae18559d4cb51fee92fd94c\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"d67827ad9076dc2c7eba3b9e94a4c2061b0e486d9ac6689a1856bd38e48dd743\"" Mar 13 00:35:48.741749 containerd[1630]: time="2026-03-13T00:35:48.741729031Z" level=info msg="StartContainer for \"d67827ad9076dc2c7eba3b9e94a4c2061b0e486d9ac6689a1856bd38e48dd743\"" Mar 13 00:35:48.742321 containerd[1630]: time="2026-03-13T00:35:48.742299081Z" level=info msg="connecting to shim d67827ad9076dc2c7eba3b9e94a4c2061b0e486d9ac6689a1856bd38e48dd743" address="unix:///run/containerd/s/1d64774b7cb29c8620c1121185b3a5df6e23abdea648303a8625d92d60b6b456" protocol=ttrpc version=3 Mar 13 00:35:48.765708 systemd[1]: Started cri-containerd-d67827ad9076dc2c7eba3b9e94a4c2061b0e486d9ac6689a1856bd38e48dd743.scope - libcontainer container d67827ad9076dc2c7eba3b9e94a4c2061b0e486d9ac6689a1856bd38e48dd743. 
Mar 13 00:35:48.815928 kubelet[2778]: I0313 00:35:48.815878 2778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6f9c7c5859-7m7lq" podStartSLOduration=3.312702602 podStartE2EDuration="13.815862261s" podCreationTimestamp="2026-03-13 00:35:35 +0000 UTC" firstStartedPulling="2026-03-13 00:35:36.120473646 +0000 UTC m=+8.007408290" lastFinishedPulling="2026-03-13 00:35:48.065117041 +0000 UTC m=+18.510567949" observedRunningTime="2026-03-13 00:35:48.814058721 +0000 UTC m=+19.259509629" watchObservedRunningTime="2026-03-13 00:35:48.815862261 +0000 UTC m=+19.261313159" Mar 13 00:35:48.823794 containerd[1630]: time="2026-03-13T00:35:48.823752891Z" level=info msg="StartContainer for \"d67827ad9076dc2c7eba3b9e94a4c2061b0e486d9ac6689a1856bd38e48dd743\" returns successfully" Mar 13 00:35:48.826002 systemd[1]: cri-containerd-d67827ad9076dc2c7eba3b9e94a4c2061b0e486d9ac6689a1856bd38e48dd743.scope: Deactivated successfully. Mar 13 00:35:48.827423 containerd[1630]: time="2026-03-13T00:35:48.827240641Z" level=info msg="received container exit event container_id:\"d67827ad9076dc2c7eba3b9e94a4c2061b0e486d9ac6689a1856bd38e48dd743\" id:\"d67827ad9076dc2c7eba3b9e94a4c2061b0e486d9ac6689a1856bd38e48dd743\" pid:3399 exited_at:{seconds:1773362148 nanos:826555301}" Mar 13 00:35:48.848793 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d67827ad9076dc2c7eba3b9e94a4c2061b0e486d9ac6689a1856bd38e48dd743-rootfs.mount: Deactivated successfully. 
Mar 13 00:35:49.738611 containerd[1630]: time="2026-03-13T00:35:49.737427711Z" level=info msg="CreateContainer within sandbox \"10d8d7f80d483b860150838edec122c6a660ac3d6ae18559d4cb51fee92fd94c\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Mar 13 00:35:49.766329 containerd[1630]: time="2026-03-13T00:35:49.766042671Z" level=info msg="Container 96df9e8fa6bacf63ee7e3b9e384a652f99eef7bf72e5ff81dd934860e9eb5cf2: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:35:49.773498 containerd[1630]: time="2026-03-13T00:35:49.773475431Z" level=info msg="CreateContainer within sandbox \"10d8d7f80d483b860150838edec122c6a660ac3d6ae18559d4cb51fee92fd94c\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"96df9e8fa6bacf63ee7e3b9e384a652f99eef7bf72e5ff81dd934860e9eb5cf2\"" Mar 13 00:35:49.773955 containerd[1630]: time="2026-03-13T00:35:49.773935401Z" level=info msg="StartContainer for \"96df9e8fa6bacf63ee7e3b9e384a652f99eef7bf72e5ff81dd934860e9eb5cf2\"" Mar 13 00:35:49.774615 containerd[1630]: time="2026-03-13T00:35:49.774593951Z" level=info msg="connecting to shim 96df9e8fa6bacf63ee7e3b9e384a652f99eef7bf72e5ff81dd934860e9eb5cf2" address="unix:///run/containerd/s/1d64774b7cb29c8620c1121185b3a5df6e23abdea648303a8625d92d60b6b456" protocol=ttrpc version=3 Mar 13 00:35:49.804748 systemd[1]: Started cri-containerd-96df9e8fa6bacf63ee7e3b9e384a652f99eef7bf72e5ff81dd934860e9eb5cf2.scope - libcontainer container 96df9e8fa6bacf63ee7e3b9e384a652f99eef7bf72e5ff81dd934860e9eb5cf2. 
Mar 13 00:35:49.849062 containerd[1630]: time="2026-03-13T00:35:49.849031701Z" level=info msg="StartContainer for \"96df9e8fa6bacf63ee7e3b9e384a652f99eef7bf72e5ff81dd934860e9eb5cf2\" returns successfully" Mar 13 00:35:49.939012 kubelet[2778]: I0313 00:35:49.938934 2778 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Mar 13 00:35:49.974053 systemd[1]: Created slice kubepods-burstable-pod7444f98b_198c_4306_8d1d_96c53483afe0.slice - libcontainer container kubepods-burstable-pod7444f98b_198c_4306_8d1d_96c53483afe0.slice. Mar 13 00:35:49.982643 systemd[1]: Created slice kubepods-burstable-pod918c86de_2a41_41d4_8a08_92591a63639c.slice - libcontainer container kubepods-burstable-pod918c86de_2a41_41d4_8a08_92591a63639c.slice. Mar 13 00:35:50.080803 kubelet[2778]: I0313 00:35:50.080689 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7444f98b-198c-4306-8d1d-96c53483afe0-config-volume\") pod \"coredns-66bc5c9577-lqk8d\" (UID: \"7444f98b-198c-4306-8d1d-96c53483afe0\") " pod="kube-system/coredns-66bc5c9577-lqk8d" Mar 13 00:35:50.080803 kubelet[2778]: I0313 00:35:50.080782 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/918c86de-2a41-41d4-8a08-92591a63639c-config-volume\") pod \"coredns-66bc5c9577-kvmrk\" (UID: \"918c86de-2a41-41d4-8a08-92591a63639c\") " pod="kube-system/coredns-66bc5c9577-kvmrk" Mar 13 00:35:50.080803 kubelet[2778]: I0313 00:35:50.080795 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9t8v7\" (UniqueName: \"kubernetes.io/projected/7444f98b-198c-4306-8d1d-96c53483afe0-kube-api-access-9t8v7\") pod \"coredns-66bc5c9577-lqk8d\" (UID: \"7444f98b-198c-4306-8d1d-96c53483afe0\") " pod="kube-system/coredns-66bc5c9577-lqk8d" Mar 13 00:35:50.080995 
kubelet[2778]: I0313 00:35:50.080810 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdk27\" (UniqueName: \"kubernetes.io/projected/918c86de-2a41-41d4-8a08-92591a63639c-kube-api-access-tdk27\") pod \"coredns-66bc5c9577-kvmrk\" (UID: \"918c86de-2a41-41d4-8a08-92591a63639c\") " pod="kube-system/coredns-66bc5c9577-kvmrk" Mar 13 00:35:50.289349 containerd[1630]: time="2026-03-13T00:35:50.288952331Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-lqk8d,Uid:7444f98b-198c-4306-8d1d-96c53483afe0,Namespace:kube-system,Attempt:0,}" Mar 13 00:35:50.292314 containerd[1630]: time="2026-03-13T00:35:50.291248201Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-kvmrk,Uid:918c86de-2a41-41d4-8a08-92591a63639c,Namespace:kube-system,Attempt:0,}" Mar 13 00:35:52.060319 systemd-networkd[1503]: cilium_host: Link UP Mar 13 00:35:52.062369 systemd-networkd[1503]: cilium_net: Link UP Mar 13 00:35:52.063147 systemd-networkd[1503]: cilium_net: Gained carrier Mar 13 00:35:52.063735 systemd-networkd[1503]: cilium_host: Gained carrier Mar 13 00:35:52.166704 systemd-networkd[1503]: cilium_vxlan: Link UP Mar 13 00:35:52.166712 systemd-networkd[1503]: cilium_vxlan: Gained carrier Mar 13 00:35:52.329661 kernel: NET: Registered PF_ALG protocol family Mar 13 00:35:52.662739 systemd-networkd[1503]: cilium_net: Gained IPv6LL Mar 13 00:35:52.845217 systemd-networkd[1503]: lxc_health: Link UP Mar 13 00:35:52.845471 systemd-networkd[1503]: lxc_health: Gained carrier Mar 13 00:35:52.918411 systemd-networkd[1503]: cilium_host: Gained IPv6LL Mar 13 00:35:53.333620 systemd-networkd[1503]: lxc4fa4332ec5a3: Link UP Mar 13 00:35:53.341029 kernel: eth0: renamed from tmpd403f Mar 13 00:35:53.346127 systemd-networkd[1503]: lxc4fa4332ec5a3: Gained carrier Mar 13 00:35:53.347670 systemd-networkd[1503]: lxcc47c88ee9422: Link UP Mar 13 00:35:53.352647 kernel: eth0: renamed from tmp6a40d Mar 
13 00:35:53.357348 systemd-networkd[1503]: lxcc47c88ee9422: Gained carrier Mar 13 00:35:53.412908 kubelet[2778]: I0313 00:35:53.412858 2778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-wdtsv" podStartSLOduration=10.261661681 podStartE2EDuration="18.41284527s" podCreationTimestamp="2026-03-13 00:35:35 +0000 UTC" firstStartedPulling="2026-03-13 00:35:36.033062296 +0000 UTC m=+7.919996940" lastFinishedPulling="2026-03-13 00:35:45.625729631 +0000 UTC m=+16.071180529" observedRunningTime="2026-03-13 00:35:50.767136071 +0000 UTC m=+21.212587009" watchObservedRunningTime="2026-03-13 00:35:53.41284527 +0000 UTC m=+23.858296178" Mar 13 00:35:53.685960 systemd-networkd[1503]: cilium_vxlan: Gained IPv6LL Mar 13 00:35:54.517962 systemd-networkd[1503]: lxc_health: Gained IPv6LL Mar 13 00:35:54.582703 systemd-networkd[1503]: lxc4fa4332ec5a3: Gained IPv6LL Mar 13 00:35:54.965785 systemd-networkd[1503]: lxcc47c88ee9422: Gained IPv6LL Mar 13 00:35:55.823744 containerd[1630]: time="2026-03-13T00:35:55.823198560Z" level=info msg="connecting to shim 6a40d701c38c986ecab6606de282a21f19cfd03c89d814d308c6f47c244fdd0a" address="unix:///run/containerd/s/8e4d3d6daf4907f5bdc2b3d492c8031ade4c93d57c5ba22a908d1bceec6c7806" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:35:55.835425 containerd[1630]: time="2026-03-13T00:35:55.834980270Z" level=info msg="connecting to shim d403f3685198951699e8351da1cee35bb30d43e154ad603bcdf0079db5af6125" address="unix:///run/containerd/s/f02bcf21841c5b423079370b34e7d77fd8d95113ec81141047e6af50e2bb1ce6" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:35:55.854781 systemd[1]: Started cri-containerd-6a40d701c38c986ecab6606de282a21f19cfd03c89d814d308c6f47c244fdd0a.scope - libcontainer container 6a40d701c38c986ecab6606de282a21f19cfd03c89d814d308c6f47c244fdd0a. 
Mar 13 00:35:55.872762 systemd[1]: Started cri-containerd-d403f3685198951699e8351da1cee35bb30d43e154ad603bcdf0079db5af6125.scope - libcontainer container d403f3685198951699e8351da1cee35bb30d43e154ad603bcdf0079db5af6125. Mar 13 00:35:55.927252 containerd[1630]: time="2026-03-13T00:35:55.927175430Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-lqk8d,Uid:7444f98b-198c-4306-8d1d-96c53483afe0,Namespace:kube-system,Attempt:0,} returns sandbox id \"d403f3685198951699e8351da1cee35bb30d43e154ad603bcdf0079db5af6125\"" Mar 13 00:35:55.938415 containerd[1630]: time="2026-03-13T00:35:55.937846440Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-kvmrk,Uid:918c86de-2a41-41d4-8a08-92591a63639c,Namespace:kube-system,Attempt:0,} returns sandbox id \"6a40d701c38c986ecab6606de282a21f19cfd03c89d814d308c6f47c244fdd0a\"" Mar 13 00:35:55.938740 containerd[1630]: time="2026-03-13T00:35:55.938728920Z" level=info msg="CreateContainer within sandbox \"d403f3685198951699e8351da1cee35bb30d43e154ad603bcdf0079db5af6125\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 13 00:35:55.947007 containerd[1630]: time="2026-03-13T00:35:55.946779040Z" level=info msg="CreateContainer within sandbox \"6a40d701c38c986ecab6606de282a21f19cfd03c89d814d308c6f47c244fdd0a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 13 00:35:55.958545 containerd[1630]: time="2026-03-13T00:35:55.958519080Z" level=info msg="Container e52593d0e4ef9b2cd1a3d013aa96dcbda0a1e483ebc52a74aa7c85f20bbd2795: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:35:55.960478 containerd[1630]: time="2026-03-13T00:35:55.960454490Z" level=info msg="Container 8eec86179f574784d814a446401dce5b8155bcc92fac61bb4dfb8d3b73ce184d: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:35:55.963942 containerd[1630]: time="2026-03-13T00:35:55.963917090Z" level=info msg="CreateContainer within sandbox 
\"d403f3685198951699e8351da1cee35bb30d43e154ad603bcdf0079db5af6125\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e52593d0e4ef9b2cd1a3d013aa96dcbda0a1e483ebc52a74aa7c85f20bbd2795\"" Mar 13 00:35:55.964610 containerd[1630]: time="2026-03-13T00:35:55.964591290Z" level=info msg="StartContainer for \"e52593d0e4ef9b2cd1a3d013aa96dcbda0a1e483ebc52a74aa7c85f20bbd2795\"" Mar 13 00:35:55.965450 containerd[1630]: time="2026-03-13T00:35:55.965405220Z" level=info msg="connecting to shim e52593d0e4ef9b2cd1a3d013aa96dcbda0a1e483ebc52a74aa7c85f20bbd2795" address="unix:///run/containerd/s/f02bcf21841c5b423079370b34e7d77fd8d95113ec81141047e6af50e2bb1ce6" protocol=ttrpc version=3 Mar 13 00:35:55.967225 containerd[1630]: time="2026-03-13T00:35:55.967150660Z" level=info msg="CreateContainer within sandbox \"6a40d701c38c986ecab6606de282a21f19cfd03c89d814d308c6f47c244fdd0a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8eec86179f574784d814a446401dce5b8155bcc92fac61bb4dfb8d3b73ce184d\"" Mar 13 00:35:55.968055 containerd[1630]: time="2026-03-13T00:35:55.968033550Z" level=info msg="StartContainer for \"8eec86179f574784d814a446401dce5b8155bcc92fac61bb4dfb8d3b73ce184d\"" Mar 13 00:35:55.969496 containerd[1630]: time="2026-03-13T00:35:55.969451640Z" level=info msg="connecting to shim 8eec86179f574784d814a446401dce5b8155bcc92fac61bb4dfb8d3b73ce184d" address="unix:///run/containerd/s/8e4d3d6daf4907f5bdc2b3d492c8031ade4c93d57c5ba22a908d1bceec6c7806" protocol=ttrpc version=3 Mar 13 00:35:55.983737 systemd[1]: Started cri-containerd-e52593d0e4ef9b2cd1a3d013aa96dcbda0a1e483ebc52a74aa7c85f20bbd2795.scope - libcontainer container e52593d0e4ef9b2cd1a3d013aa96dcbda0a1e483ebc52a74aa7c85f20bbd2795. Mar 13 00:35:55.986471 systemd[1]: Started cri-containerd-8eec86179f574784d814a446401dce5b8155bcc92fac61bb4dfb8d3b73ce184d.scope - libcontainer container 8eec86179f574784d814a446401dce5b8155bcc92fac61bb4dfb8d3b73ce184d. 
Mar 13 00:35:56.023226 containerd[1630]: time="2026-03-13T00:35:56.023181030Z" level=info msg="StartContainer for \"e52593d0e4ef9b2cd1a3d013aa96dcbda0a1e483ebc52a74aa7c85f20bbd2795\" returns successfully" Mar 13 00:35:56.023616 containerd[1630]: time="2026-03-13T00:35:56.023594690Z" level=info msg="StartContainer for \"8eec86179f574784d814a446401dce5b8155bcc92fac61bb4dfb8d3b73ce184d\" returns successfully" Mar 13 00:35:56.781548 kubelet[2778]: I0313 00:35:56.781229 2778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-lqk8d" podStartSLOduration=21.78121019 podStartE2EDuration="21.78121019s" podCreationTimestamp="2026-03-13 00:35:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 00:35:56.77689771 +0000 UTC m=+27.222348648" watchObservedRunningTime="2026-03-13 00:35:56.78121019 +0000 UTC m=+27.226661138" Mar 13 00:35:56.810103 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3530566164.mount: Deactivated successfully. Mar 13 00:35:56.826169 kubelet[2778]: I0313 00:35:56.826053 2778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-kvmrk" podStartSLOduration=21.82604027 podStartE2EDuration="21.82604027s" podCreationTimestamp="2026-03-13 00:35:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 00:35:56.82507803 +0000 UTC m=+27.270528928" watchObservedRunningTime="2026-03-13 00:35:56.82604027 +0000 UTC m=+27.271491168" Mar 13 00:37:04.337251 systemd[1]: Started sshd@7-89.167.5.55:22-4.153.228.146:56142.service - OpenSSH per-connection server daemon (4.153.228.146:56142). 
Mar 13 00:37:04.990912 sshd[4108]: Accepted publickey for core from 4.153.228.146 port 56142 ssh2: RSA SHA256:ihdQa0i/HnNGvKP5m9obD9eorZ8Lhhc0yafWx7ReGkQ Mar 13 00:37:04.993032 sshd-session[4108]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:37:04.999538 systemd-logind[1603]: New session 8 of user core. Mar 13 00:37:05.011815 systemd[1]: Started session-8.scope - Session 8 of User core. Mar 13 00:37:05.445673 sshd[4111]: Connection closed by 4.153.228.146 port 56142 Mar 13 00:37:05.446785 sshd-session[4108]: pam_unix(sshd:session): session closed for user core Mar 13 00:37:05.454478 systemd[1]: sshd@7-89.167.5.55:22-4.153.228.146:56142.service: Deactivated successfully. Mar 13 00:37:05.458871 systemd[1]: session-8.scope: Deactivated successfully. Mar 13 00:37:05.461826 systemd-logind[1603]: Session 8 logged out. Waiting for processes to exit. Mar 13 00:37:05.464870 systemd-logind[1603]: Removed session 8. Mar 13 00:37:10.590418 systemd[1]: Started sshd@8-89.167.5.55:22-4.153.228.146:42306.service - OpenSSH per-connection server daemon (4.153.228.146:42306). Mar 13 00:37:11.231324 sshd[4126]: Accepted publickey for core from 4.153.228.146 port 42306 ssh2: RSA SHA256:ihdQa0i/HnNGvKP5m9obD9eorZ8Lhhc0yafWx7ReGkQ Mar 13 00:37:11.233439 sshd-session[4126]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:37:11.240760 systemd-logind[1603]: New session 9 of user core. Mar 13 00:37:11.245780 systemd[1]: Started session-9.scope - Session 9 of User core. Mar 13 00:37:11.648957 sshd[4129]: Connection closed by 4.153.228.146 port 42306 Mar 13 00:37:11.650050 sshd-session[4126]: pam_unix(sshd:session): session closed for user core Mar 13 00:37:11.657557 systemd[1]: sshd@8-89.167.5.55:22-4.153.228.146:42306.service: Deactivated successfully. Mar 13 00:37:11.661411 systemd[1]: session-9.scope: Deactivated successfully. Mar 13 00:37:11.663393 systemd-logind[1603]: Session 9 logged out. 
Waiting for processes to exit. Mar 13 00:37:11.666180 systemd-logind[1603]: Removed session 9. Mar 13 00:37:16.781008 systemd[1]: Started sshd@9-89.167.5.55:22-4.153.228.146:42308.service - OpenSSH per-connection server daemon (4.153.228.146:42308). Mar 13 00:37:17.440688 sshd[4142]: Accepted publickey for core from 4.153.228.146 port 42308 ssh2: RSA SHA256:ihdQa0i/HnNGvKP5m9obD9eorZ8Lhhc0yafWx7ReGkQ Mar 13 00:37:17.442480 sshd-session[4142]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:37:17.452031 systemd-logind[1603]: New session 10 of user core. Mar 13 00:37:17.460941 systemd[1]: Started session-10.scope - Session 10 of User core. Mar 13 00:37:17.903607 sshd[4145]: Connection closed by 4.153.228.146 port 42308 Mar 13 00:37:17.906009 sshd-session[4142]: pam_unix(sshd:session): session closed for user core Mar 13 00:37:17.912389 systemd[1]: sshd@9-89.167.5.55:22-4.153.228.146:42308.service: Deactivated successfully. Mar 13 00:37:17.916578 systemd[1]: session-10.scope: Deactivated successfully. Mar 13 00:37:17.922384 systemd-logind[1603]: Session 10 logged out. Waiting for processes to exit. Mar 13 00:37:17.924708 systemd-logind[1603]: Removed session 10. Mar 13 00:37:18.039171 systemd[1]: Started sshd@10-89.167.5.55:22-4.153.228.146:42314.service - OpenSSH per-connection server daemon (4.153.228.146:42314). Mar 13 00:37:18.703395 sshd[4158]: Accepted publickey for core from 4.153.228.146 port 42314 ssh2: RSA SHA256:ihdQa0i/HnNGvKP5m9obD9eorZ8Lhhc0yafWx7ReGkQ Mar 13 00:37:18.704831 sshd-session[4158]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:37:18.712733 systemd-logind[1603]: New session 11 of user core. Mar 13 00:37:18.721897 systemd[1]: Started session-11.scope - Session 11 of User core. 
Mar 13 00:37:19.197820 sshd[4161]: Connection closed by 4.153.228.146 port 42314 Mar 13 00:37:19.198577 sshd-session[4158]: pam_unix(sshd:session): session closed for user core Mar 13 00:37:19.206079 systemd[1]: sshd@10-89.167.5.55:22-4.153.228.146:42314.service: Deactivated successfully. Mar 13 00:37:19.210287 systemd[1]: session-11.scope: Deactivated successfully. Mar 13 00:37:19.211898 systemd-logind[1603]: Session 11 logged out. Waiting for processes to exit. Mar 13 00:37:19.215473 systemd-logind[1603]: Removed session 11. Mar 13 00:37:19.335904 systemd[1]: Started sshd@11-89.167.5.55:22-4.153.228.146:34084.service - OpenSSH per-connection server daemon (4.153.228.146:34084). Mar 13 00:37:19.983086 sshd[4171]: Accepted publickey for core from 4.153.228.146 port 34084 ssh2: RSA SHA256:ihdQa0i/HnNGvKP5m9obD9eorZ8Lhhc0yafWx7ReGkQ Mar 13 00:37:19.985672 sshd-session[4171]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:37:19.994423 systemd-logind[1603]: New session 12 of user core. Mar 13 00:37:20.000885 systemd[1]: Started session-12.scope - Session 12 of User core. Mar 13 00:37:20.432203 sshd[4174]: Connection closed by 4.153.228.146 port 34084 Mar 13 00:37:20.433849 sshd-session[4171]: pam_unix(sshd:session): session closed for user core Mar 13 00:37:20.438418 systemd-logind[1603]: Session 12 logged out. Waiting for processes to exit. Mar 13 00:37:20.439358 systemd[1]: sshd@11-89.167.5.55:22-4.153.228.146:34084.service: Deactivated successfully. Mar 13 00:37:20.441487 systemd[1]: session-12.scope: Deactivated successfully. Mar 13 00:37:20.443508 systemd-logind[1603]: Removed session 12. Mar 13 00:37:25.566841 systemd[1]: Started sshd@12-89.167.5.55:22-4.153.228.146:34090.service - OpenSSH per-connection server daemon (4.153.228.146:34090). 
Mar 13 00:37:26.225010 sshd[4187]: Accepted publickey for core from 4.153.228.146 port 34090 ssh2: RSA SHA256:ihdQa0i/HnNGvKP5m9obD9eorZ8Lhhc0yafWx7ReGkQ Mar 13 00:37:26.226994 sshd-session[4187]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:37:26.232882 systemd-logind[1603]: New session 13 of user core. Mar 13 00:37:26.239887 systemd[1]: Started session-13.scope - Session 13 of User core. Mar 13 00:37:26.652666 sshd[4190]: Connection closed by 4.153.228.146 port 34090 Mar 13 00:37:26.654035 sshd-session[4187]: pam_unix(sshd:session): session closed for user core Mar 13 00:37:26.660934 systemd-logind[1603]: Session 13 logged out. Waiting for processes to exit. Mar 13 00:37:26.662093 systemd[1]: sshd@12-89.167.5.55:22-4.153.228.146:34090.service: Deactivated successfully. Mar 13 00:37:26.666352 systemd[1]: session-13.scope: Deactivated successfully. Mar 13 00:37:26.670181 systemd-logind[1603]: Removed session 13. Mar 13 00:37:31.792522 systemd[1]: Started sshd@13-89.167.5.55:22-4.153.228.146:37894.service - OpenSSH per-connection server daemon (4.153.228.146:37894). Mar 13 00:37:32.462157 sshd[4204]: Accepted publickey for core from 4.153.228.146 port 37894 ssh2: RSA SHA256:ihdQa0i/HnNGvKP5m9obD9eorZ8Lhhc0yafWx7ReGkQ Mar 13 00:37:32.465189 sshd-session[4204]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:37:32.472734 systemd-logind[1603]: New session 14 of user core. Mar 13 00:37:32.481875 systemd[1]: Started session-14.scope - Session 14 of User core. Mar 13 00:37:32.907707 sshd[4207]: Connection closed by 4.153.228.146 port 37894 Mar 13 00:37:32.908875 sshd-session[4204]: pam_unix(sshd:session): session closed for user core Mar 13 00:37:32.913663 systemd[1]: sshd@13-89.167.5.55:22-4.153.228.146:37894.service: Deactivated successfully. Mar 13 00:37:32.916124 systemd[1]: session-14.scope: Deactivated successfully. Mar 13 00:37:32.917972 systemd-logind[1603]: Session 14 logged out. 
Waiting for processes to exit. Mar 13 00:37:32.919866 systemd-logind[1603]: Removed session 14. Mar 13 00:37:33.044171 systemd[1]: Started sshd@14-89.167.5.55:22-4.153.228.146:37900.service - OpenSSH per-connection server daemon (4.153.228.146:37900). Mar 13 00:37:33.681736 sshd[4219]: Accepted publickey for core from 4.153.228.146 port 37900 ssh2: RSA SHA256:ihdQa0i/HnNGvKP5m9obD9eorZ8Lhhc0yafWx7ReGkQ Mar 13 00:37:33.689345 sshd-session[4219]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:37:33.698258 systemd-logind[1603]: New session 15 of user core. Mar 13 00:37:33.702947 systemd[1]: Started session-15.scope - Session 15 of User core. Mar 13 00:37:34.126295 sshd[4222]: Connection closed by 4.153.228.146 port 37900 Mar 13 00:37:34.127838 sshd-session[4219]: pam_unix(sshd:session): session closed for user core Mar 13 00:37:34.131927 systemd[1]: sshd@14-89.167.5.55:22-4.153.228.146:37900.service: Deactivated successfully. Mar 13 00:37:34.134684 systemd[1]: session-15.scope: Deactivated successfully. Mar 13 00:37:34.136926 systemd-logind[1603]: Session 15 logged out. Waiting for processes to exit. Mar 13 00:37:34.138617 systemd-logind[1603]: Removed session 15. Mar 13 00:37:34.257436 systemd[1]: Started sshd@15-89.167.5.55:22-4.153.228.146:37906.service - OpenSSH per-connection server daemon (4.153.228.146:37906). Mar 13 00:37:34.898675 sshd[4231]: Accepted publickey for core from 4.153.228.146 port 37906 ssh2: RSA SHA256:ihdQa0i/HnNGvKP5m9obD9eorZ8Lhhc0yafWx7ReGkQ Mar 13 00:37:34.901182 sshd-session[4231]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:37:34.907239 systemd-logind[1603]: New session 16 of user core. Mar 13 00:37:34.912830 systemd[1]: Started session-16.scope - Session 16 of User core. 
Mar 13 00:37:35.673038 sshd[4234]: Connection closed by 4.153.228.146 port 37906 Mar 13 00:37:35.673991 sshd-session[4231]: pam_unix(sshd:session): session closed for user core Mar 13 00:37:35.680830 systemd[1]: sshd@15-89.167.5.55:22-4.153.228.146:37906.service: Deactivated successfully. Mar 13 00:37:35.687273 systemd[1]: session-16.scope: Deactivated successfully. Mar 13 00:37:35.691597 systemd-logind[1603]: Session 16 logged out. Waiting for processes to exit. Mar 13 00:37:35.697349 systemd-logind[1603]: Removed session 16. Mar 13 00:37:35.814026 systemd[1]: Started sshd@16-89.167.5.55:22-4.153.228.146:37910.service - OpenSSH per-connection server daemon (4.153.228.146:37910). Mar 13 00:37:36.484670 sshd[4249]: Accepted publickey for core from 4.153.228.146 port 37910 ssh2: RSA SHA256:ihdQa0i/HnNGvKP5m9obD9eorZ8Lhhc0yafWx7ReGkQ Mar 13 00:37:36.486709 sshd-session[4249]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:37:36.492666 systemd-logind[1603]: New session 17 of user core. Mar 13 00:37:36.494866 systemd[1]: Started session-17.scope - Session 17 of User core. Mar 13 00:37:36.982320 sshd[4252]: Connection closed by 4.153.228.146 port 37910 Mar 13 00:37:36.985033 sshd-session[4249]: pam_unix(sshd:session): session closed for user core Mar 13 00:37:36.992669 systemd-logind[1603]: Session 17 logged out. Waiting for processes to exit. Mar 13 00:37:36.993304 systemd[1]: sshd@16-89.167.5.55:22-4.153.228.146:37910.service: Deactivated successfully. Mar 13 00:37:36.998581 systemd[1]: session-17.scope: Deactivated successfully. Mar 13 00:37:37.002396 systemd-logind[1603]: Removed session 17. Mar 13 00:37:37.121308 systemd[1]: Started sshd@17-89.167.5.55:22-4.153.228.146:37912.service - OpenSSH per-connection server daemon (4.153.228.146:37912). 
Mar 13 00:37:37.778664 sshd[4264]: Accepted publickey for core from 4.153.228.146 port 37912 ssh2: RSA SHA256:ihdQa0i/HnNGvKP5m9obD9eorZ8Lhhc0yafWx7ReGkQ Mar 13 00:37:37.779565 sshd-session[4264]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:37:37.786191 systemd-logind[1603]: New session 18 of user core. Mar 13 00:37:37.794895 systemd[1]: Started session-18.scope - Session 18 of User core. Mar 13 00:37:38.238299 sshd[4268]: Connection closed by 4.153.228.146 port 37912 Mar 13 00:37:38.238980 sshd-session[4264]: pam_unix(sshd:session): session closed for user core Mar 13 00:37:38.248026 systemd[1]: sshd@17-89.167.5.55:22-4.153.228.146:37912.service: Deactivated successfully. Mar 13 00:37:38.252468 systemd[1]: session-18.scope: Deactivated successfully. Mar 13 00:37:38.255163 systemd-logind[1603]: Session 18 logged out. Waiting for processes to exit. Mar 13 00:37:38.258339 systemd-logind[1603]: Removed session 18. Mar 13 00:37:43.381698 systemd[1]: Started sshd@18-89.167.5.55:22-4.153.228.146:59642.service - OpenSSH per-connection server daemon (4.153.228.146:59642). Mar 13 00:37:44.031559 sshd[4283]: Accepted publickey for core from 4.153.228.146 port 59642 ssh2: RSA SHA256:ihdQa0i/HnNGvKP5m9obD9eorZ8Lhhc0yafWx7ReGkQ Mar 13 00:37:44.034497 sshd-session[4283]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:37:44.042718 systemd-logind[1603]: New session 19 of user core. Mar 13 00:37:44.049890 systemd[1]: Started session-19.scope - Session 19 of User core. Mar 13 00:37:44.469898 sshd[4286]: Connection closed by 4.153.228.146 port 59642 Mar 13 00:37:44.471965 sshd-session[4283]: pam_unix(sshd:session): session closed for user core Mar 13 00:37:44.478738 systemd[1]: sshd@18-89.167.5.55:22-4.153.228.146:59642.service: Deactivated successfully. Mar 13 00:37:44.482294 systemd[1]: session-19.scope: Deactivated successfully. Mar 13 00:37:44.484402 systemd-logind[1603]: Session 19 logged out. 
Waiting for processes to exit. Mar 13 00:37:44.487327 systemd-logind[1603]: Removed session 19. Mar 13 00:37:49.603857 systemd[1]: Started sshd@19-89.167.5.55:22-4.153.228.146:46832.service - OpenSSH per-connection server daemon (4.153.228.146:46832). Mar 13 00:37:50.264055 sshd[4298]: Accepted publickey for core from 4.153.228.146 port 46832 ssh2: RSA SHA256:ihdQa0i/HnNGvKP5m9obD9eorZ8Lhhc0yafWx7ReGkQ Mar 13 00:37:50.266608 sshd-session[4298]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:37:50.276722 systemd-logind[1603]: New session 20 of user core. Mar 13 00:37:50.280933 systemd[1]: Started session-20.scope - Session 20 of User core. Mar 13 00:37:50.713121 sshd[4301]: Connection closed by 4.153.228.146 port 46832 Mar 13 00:37:50.714933 sshd-session[4298]: pam_unix(sshd:session): session closed for user core Mar 13 00:37:50.722067 systemd[1]: sshd@19-89.167.5.55:22-4.153.228.146:46832.service: Deactivated successfully. Mar 13 00:37:50.726814 systemd[1]: session-20.scope: Deactivated successfully. Mar 13 00:37:50.728747 systemd-logind[1603]: Session 20 logged out. Waiting for processes to exit. Mar 13 00:37:50.731503 systemd-logind[1603]: Removed session 20. Mar 13 00:37:50.849499 systemd[1]: Started sshd@20-89.167.5.55:22-4.153.228.146:46836.service - OpenSSH per-connection server daemon (4.153.228.146:46836). Mar 13 00:37:51.523467 sshd[4313]: Accepted publickey for core from 4.153.228.146 port 46836 ssh2: RSA SHA256:ihdQa0i/HnNGvKP5m9obD9eorZ8Lhhc0yafWx7ReGkQ Mar 13 00:37:51.526074 sshd-session[4313]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:37:51.534918 systemd-logind[1603]: New session 21 of user core. Mar 13 00:37:51.541004 systemd[1]: Started session-21.scope - Session 21 of User core. 
Mar 13 00:37:53.079495 containerd[1630]: time="2026-03-13T00:37:53.079425675Z" level=info msg="StopContainer for \"9d4ead92ec6ae8667403960e90808dd27bb6be765e97b288c2dd3344089ab938\" with timeout 30 (s)" Mar 13 00:37:53.080640 containerd[1630]: time="2026-03-13T00:37:53.080518218Z" level=info msg="Stop container \"9d4ead92ec6ae8667403960e90808dd27bb6be765e97b288c2dd3344089ab938\" with signal terminated" Mar 13 00:37:53.098957 systemd[1]: cri-containerd-9d4ead92ec6ae8667403960e90808dd27bb6be765e97b288c2dd3344089ab938.scope: Deactivated successfully. Mar 13 00:37:53.099843 containerd[1630]: time="2026-03-13T00:37:53.099678282Z" level=info msg="received container exit event container_id:\"9d4ead92ec6ae8667403960e90808dd27bb6be765e97b288c2dd3344089ab938\" id:\"9d4ead92ec6ae8667403960e90808dd27bb6be765e97b288c2dd3344089ab938\" pid:3348 exited_at:{seconds:1773362273 nanos:99468552}" Mar 13 00:37:53.109374 containerd[1630]: time="2026-03-13T00:37:53.109343180Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 13 00:37:53.116754 containerd[1630]: time="2026-03-13T00:37:53.116718681Z" level=info msg="StopContainer for \"96df9e8fa6bacf63ee7e3b9e384a652f99eef7bf72e5ff81dd934860e9eb5cf2\" with timeout 2 (s)" Mar 13 00:37:53.118153 containerd[1630]: time="2026-03-13T00:37:53.117732464Z" level=info msg="Stop container \"96df9e8fa6bacf63ee7e3b9e384a652f99eef7bf72e5ff81dd934860e9eb5cf2\" with signal terminated" Mar 13 00:37:53.125297 systemd-networkd[1503]: lxc_health: Link DOWN Mar 13 00:37:53.125797 systemd-networkd[1503]: lxc_health: Lost carrier Mar 13 00:37:53.129545 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9d4ead92ec6ae8667403960e90808dd27bb6be765e97b288c2dd3344089ab938-rootfs.mount: Deactivated successfully. 
Mar 13 00:37:53.140983 systemd[1]: cri-containerd-96df9e8fa6bacf63ee7e3b9e384a652f99eef7bf72e5ff81dd934860e9eb5cf2.scope: Deactivated successfully. Mar 13 00:37:53.141230 systemd[1]: cri-containerd-96df9e8fa6bacf63ee7e3b9e384a652f99eef7bf72e5ff81dd934860e9eb5cf2.scope: Consumed 5.077s CPU time, 126.5M memory peak, 128K read from disk, 13.3M written to disk. Mar 13 00:37:53.143789 containerd[1630]: time="2026-03-13T00:37:53.143443176Z" level=info msg="received container exit event container_id:\"96df9e8fa6bacf63ee7e3b9e384a652f99eef7bf72e5ff81dd934860e9eb5cf2\" id:\"96df9e8fa6bacf63ee7e3b9e384a652f99eef7bf72e5ff81dd934860e9eb5cf2\" pid:3437 exited_at:{seconds:1773362273 nanos:143109915}" Mar 13 00:37:53.158242 containerd[1630]: time="2026-03-13T00:37:53.157248405Z" level=info msg="StopContainer for \"9d4ead92ec6ae8667403960e90808dd27bb6be765e97b288c2dd3344089ab938\" returns successfully" Mar 13 00:37:53.158613 containerd[1630]: time="2026-03-13T00:37:53.158403028Z" level=info msg="StopPodSandbox for \"291a1bf406283c2a8b40059f9a20576a60217981a01b8e500aa9382dd0490483\"" Mar 13 00:37:53.158613 containerd[1630]: time="2026-03-13T00:37:53.158442659Z" level=info msg="Container to stop \"9d4ead92ec6ae8667403960e90808dd27bb6be765e97b288c2dd3344089ab938\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 13 00:37:53.168038 systemd[1]: cri-containerd-291a1bf406283c2a8b40059f9a20576a60217981a01b8e500aa9382dd0490483.scope: Deactivated successfully. Mar 13 00:37:53.174359 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-96df9e8fa6bacf63ee7e3b9e384a652f99eef7bf72e5ff81dd934860e9eb5cf2-rootfs.mount: Deactivated successfully. 
Mar 13 00:37:53.177368 containerd[1630]: time="2026-03-13T00:37:53.177312412Z" level=info msg="received sandbox exit event container_id:\"291a1bf406283c2a8b40059f9a20576a60217981a01b8e500aa9382dd0490483\" id:\"291a1bf406283c2a8b40059f9a20576a60217981a01b8e500aa9382dd0490483\" exit_status:137 exited_at:{seconds:1773362273 nanos:176551570}" monitor_name=podsandbox Mar 13 00:37:53.186119 containerd[1630]: time="2026-03-13T00:37:53.185727266Z" level=info msg="StopContainer for \"96df9e8fa6bacf63ee7e3b9e384a652f99eef7bf72e5ff81dd934860e9eb5cf2\" returns successfully" Mar 13 00:37:53.186315 containerd[1630]: time="2026-03-13T00:37:53.186289087Z" level=info msg="StopPodSandbox for \"10d8d7f80d483b860150838edec122c6a660ac3d6ae18559d4cb51fee92fd94c\"" Mar 13 00:37:53.186348 containerd[1630]: time="2026-03-13T00:37:53.186323777Z" level=info msg="Container to stop \"0e79511dca42ada743b348d583e509659c848600b140287ed54207990b9d4902\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 13 00:37:53.186348 containerd[1630]: time="2026-03-13T00:37:53.186331007Z" level=info msg="Container to stop \"3b799423154771b3aacc0f8c47e8c508eb9a70fc6f6ed077781ac5cb9a476563\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 13 00:37:53.186348 containerd[1630]: time="2026-03-13T00:37:53.186337307Z" level=info msg="Container to stop \"d67827ad9076dc2c7eba3b9e94a4c2061b0e486d9ac6689a1856bd38e48dd743\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 13 00:37:53.186348 containerd[1630]: time="2026-03-13T00:37:53.186343057Z" level=info msg="Container to stop \"a9ee74138e5a4c36abe239f2e3363ee24c227bd5ee0653c7248a0a3ba8977d93\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 13 00:37:53.186348 containerd[1630]: time="2026-03-13T00:37:53.186348757Z" level=info msg="Container to stop \"96df9e8fa6bacf63ee7e3b9e384a652f99eef7bf72e5ff81dd934860e9eb5cf2\" must be in running or unknown state, current 
state \"CONTAINER_EXITED\"" Mar 13 00:37:53.194153 systemd[1]: cri-containerd-10d8d7f80d483b860150838edec122c6a660ac3d6ae18559d4cb51fee92fd94c.scope: Deactivated successfully. Mar 13 00:37:53.198165 containerd[1630]: time="2026-03-13T00:37:53.198119961Z" level=info msg="received sandbox exit event container_id:\"10d8d7f80d483b860150838edec122c6a660ac3d6ae18559d4cb51fee92fd94c\" id:\"10d8d7f80d483b860150838edec122c6a660ac3d6ae18559d4cb51fee92fd94c\" exit_status:137 exited_at:{seconds:1773362273 nanos:196916797}" monitor_name=podsandbox Mar 13 00:37:53.198929 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-291a1bf406283c2a8b40059f9a20576a60217981a01b8e500aa9382dd0490483-rootfs.mount: Deactivated successfully. Mar 13 00:37:53.204617 containerd[1630]: time="2026-03-13T00:37:53.204570819Z" level=info msg="shim disconnected" id=291a1bf406283c2a8b40059f9a20576a60217981a01b8e500aa9382dd0490483 namespace=k8s.io Mar 13 00:37:53.204710 containerd[1630]: time="2026-03-13T00:37:53.204700259Z" level=warning msg="cleaning up after shim disconnected" id=291a1bf406283c2a8b40059f9a20576a60217981a01b8e500aa9382dd0490483 namespace=k8s.io Mar 13 00:37:53.204922 containerd[1630]: time="2026-03-13T00:37:53.204851290Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 13 00:37:53.217069 containerd[1630]: time="2026-03-13T00:37:53.217041524Z" level=info msg="received sandbox container exit event sandbox_id:\"291a1bf406283c2a8b40059f9a20576a60217981a01b8e500aa9382dd0490483\" exit_status:137 exited_at:{seconds:1773362273 nanos:176551570}" monitor_name=criService Mar 13 00:37:53.219501 containerd[1630]: time="2026-03-13T00:37:53.218784069Z" level=info msg="TearDown network for sandbox \"291a1bf406283c2a8b40059f9a20576a60217981a01b8e500aa9382dd0490483\" successfully" Mar 13 00:37:53.219142 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-291a1bf406283c2a8b40059f9a20576a60217981a01b8e500aa9382dd0490483-shm.mount: Deactivated successfully. 
Mar 13 00:37:53.219222 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-10d8d7f80d483b860150838edec122c6a660ac3d6ae18559d4cb51fee92fd94c-rootfs.mount: Deactivated successfully. Mar 13 00:37:53.219983 containerd[1630]: time="2026-03-13T00:37:53.219613251Z" level=info msg="StopPodSandbox for \"291a1bf406283c2a8b40059f9a20576a60217981a01b8e500aa9382dd0490483\" returns successfully" Mar 13 00:37:53.226141 containerd[1630]: time="2026-03-13T00:37:53.226117920Z" level=info msg="shim disconnected" id=10d8d7f80d483b860150838edec122c6a660ac3d6ae18559d4cb51fee92fd94c namespace=k8s.io Mar 13 00:37:53.226141 containerd[1630]: time="2026-03-13T00:37:53.226137320Z" level=warning msg="cleaning up after shim disconnected" id=10d8d7f80d483b860150838edec122c6a660ac3d6ae18559d4cb51fee92fd94c namespace=k8s.io Mar 13 00:37:53.226319 containerd[1630]: time="2026-03-13T00:37:53.226143980Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 13 00:37:53.240140 containerd[1630]: time="2026-03-13T00:37:53.240100929Z" level=info msg="received sandbox container exit event sandbox_id:\"10d8d7f80d483b860150838edec122c6a660ac3d6ae18559d4cb51fee92fd94c\" exit_status:137 exited_at:{seconds:1773362273 nanos:196916797}" monitor_name=criService Mar 13 00:37:53.242229 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-10d8d7f80d483b860150838edec122c6a660ac3d6ae18559d4cb51fee92fd94c-shm.mount: Deactivated successfully. 
Mar 13 00:37:53.243458 containerd[1630]: time="2026-03-13T00:37:53.242913607Z" level=info msg="TearDown network for sandbox \"10d8d7f80d483b860150838edec122c6a660ac3d6ae18559d4cb51fee92fd94c\" successfully" Mar 13 00:37:53.243458 containerd[1630]: time="2026-03-13T00:37:53.242928377Z" level=info msg="StopPodSandbox for \"10d8d7f80d483b860150838edec122c6a660ac3d6ae18559d4cb51fee92fd94c\" returns successfully" Mar 13 00:37:53.309546 kubelet[2778]: I0313 00:37:53.309482 2778 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/92930507-60a7-45ed-a90c-f0e48f25a207-cilium-config-path\") pod \"92930507-60a7-45ed-a90c-f0e48f25a207\" (UID: \"92930507-60a7-45ed-a90c-f0e48f25a207\") " Mar 13 00:37:53.309546 kubelet[2778]: I0313 00:37:53.309547 2778 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p66bx\" (UniqueName: \"kubernetes.io/projected/92930507-60a7-45ed-a90c-f0e48f25a207-kube-api-access-p66bx\") pod \"92930507-60a7-45ed-a90c-f0e48f25a207\" (UID: \"92930507-60a7-45ed-a90c-f0e48f25a207\") " Mar 13 00:37:53.315076 kubelet[2778]: I0313 00:37:53.314971 2778 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92930507-60a7-45ed-a90c-f0e48f25a207-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "92930507-60a7-45ed-a90c-f0e48f25a207" (UID: "92930507-60a7-45ed-a90c-f0e48f25a207"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 13 00:37:53.315076 kubelet[2778]: I0313 00:37:53.315010 2778 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92930507-60a7-45ed-a90c-f0e48f25a207-kube-api-access-p66bx" (OuterVolumeSpecName: "kube-api-access-p66bx") pod "92930507-60a7-45ed-a90c-f0e48f25a207" (UID: "92930507-60a7-45ed-a90c-f0e48f25a207"). InnerVolumeSpecName "kube-api-access-p66bx". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 13 00:37:53.410818 kubelet[2778]: I0313 00:37:53.410727 2778 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1685953d-e272-42dc-bb87-e44d2fb34ca8-hubble-tls\") pod \"1685953d-e272-42dc-bb87-e44d2fb34ca8\" (UID: \"1685953d-e272-42dc-bb87-e44d2fb34ca8\") " Mar 13 00:37:53.410818 kubelet[2778]: I0313 00:37:53.410789 2778 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1685953d-e272-42dc-bb87-e44d2fb34ca8-cilium-cgroup\") pod \"1685953d-e272-42dc-bb87-e44d2fb34ca8\" (UID: \"1685953d-e272-42dc-bb87-e44d2fb34ca8\") " Mar 13 00:37:53.410818 kubelet[2778]: I0313 00:37:53.410817 2778 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1685953d-e272-42dc-bb87-e44d2fb34ca8-cilium-config-path\") pod \"1685953d-e272-42dc-bb87-e44d2fb34ca8\" (UID: \"1685953d-e272-42dc-bb87-e44d2fb34ca8\") " Mar 13 00:37:53.411070 kubelet[2778]: I0313 00:37:53.410840 2778 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1685953d-e272-42dc-bb87-e44d2fb34ca8-hostproc\") pod \"1685953d-e272-42dc-bb87-e44d2fb34ca8\" (UID: \"1685953d-e272-42dc-bb87-e44d2fb34ca8\") " Mar 13 00:37:53.411070 kubelet[2778]: I0313 00:37:53.410866 2778 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1685953d-e272-42dc-bb87-e44d2fb34ca8-cilium-run\") pod \"1685953d-e272-42dc-bb87-e44d2fb34ca8\" (UID: \"1685953d-e272-42dc-bb87-e44d2fb34ca8\") " Mar 13 00:37:53.411070 kubelet[2778]: I0313 00:37:53.410886 2778 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/1685953d-e272-42dc-bb87-e44d2fb34ca8-xtables-lock\") pod \"1685953d-e272-42dc-bb87-e44d2fb34ca8\" (UID: \"1685953d-e272-42dc-bb87-e44d2fb34ca8\") " Mar 13 00:37:53.411070 kubelet[2778]: I0313 00:37:53.410909 2778 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1685953d-e272-42dc-bb87-e44d2fb34ca8-host-proc-sys-net\") pod \"1685953d-e272-42dc-bb87-e44d2fb34ca8\" (UID: \"1685953d-e272-42dc-bb87-e44d2fb34ca8\") " Mar 13 00:37:53.411070 kubelet[2778]: I0313 00:37:53.410934 2778 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1685953d-e272-42dc-bb87-e44d2fb34ca8-bpf-maps\") pod \"1685953d-e272-42dc-bb87-e44d2fb34ca8\" (UID: \"1685953d-e272-42dc-bb87-e44d2fb34ca8\") " Mar 13 00:37:53.411070 kubelet[2778]: I0313 00:37:53.410955 2778 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1685953d-e272-42dc-bb87-e44d2fb34ca8-etc-cni-netd\") pod \"1685953d-e272-42dc-bb87-e44d2fb34ca8\" (UID: \"1685953d-e272-42dc-bb87-e44d2fb34ca8\") " Mar 13 00:37:53.411322 kubelet[2778]: I0313 00:37:53.410981 2778 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mccxh\" (UniqueName: \"kubernetes.io/projected/1685953d-e272-42dc-bb87-e44d2fb34ca8-kube-api-access-mccxh\") pod \"1685953d-e272-42dc-bb87-e44d2fb34ca8\" (UID: \"1685953d-e272-42dc-bb87-e44d2fb34ca8\") " Mar 13 00:37:53.411322 kubelet[2778]: I0313 00:37:53.411004 2778 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1685953d-e272-42dc-bb87-e44d2fb34ca8-lib-modules\") pod \"1685953d-e272-42dc-bb87-e44d2fb34ca8\" (UID: \"1685953d-e272-42dc-bb87-e44d2fb34ca8\") " Mar 13 00:37:53.411322 kubelet[2778]: I0313 00:37:53.411024 2778 
reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1685953d-e272-42dc-bb87-e44d2fb34ca8-host-proc-sys-kernel\") pod \"1685953d-e272-42dc-bb87-e44d2fb34ca8\" (UID: \"1685953d-e272-42dc-bb87-e44d2fb34ca8\") " Mar 13 00:37:53.411322 kubelet[2778]: I0313 00:37:53.411047 2778 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1685953d-e272-42dc-bb87-e44d2fb34ca8-cni-path\") pod \"1685953d-e272-42dc-bb87-e44d2fb34ca8\" (UID: \"1685953d-e272-42dc-bb87-e44d2fb34ca8\") " Mar 13 00:37:53.411322 kubelet[2778]: I0313 00:37:53.411069 2778 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1685953d-e272-42dc-bb87-e44d2fb34ca8-clustermesh-secrets\") pod \"1685953d-e272-42dc-bb87-e44d2fb34ca8\" (UID: \"1685953d-e272-42dc-bb87-e44d2fb34ca8\") " Mar 13 00:37:53.411322 kubelet[2778]: I0313 00:37:53.411116 2778 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/92930507-60a7-45ed-a90c-f0e48f25a207-cilium-config-path\") on node \"ci-4459-2-4-n-a4844b4806\" DevicePath \"\"" Mar 13 00:37:53.411551 kubelet[2778]: I0313 00:37:53.411132 2778 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-p66bx\" (UniqueName: \"kubernetes.io/projected/92930507-60a7-45ed-a90c-f0e48f25a207-kube-api-access-p66bx\") on node \"ci-4459-2-4-n-a4844b4806\" DevicePath \"\"" Mar 13 00:37:53.412666 kubelet[2778]: I0313 00:37:53.411930 2778 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1685953d-e272-42dc-bb87-e44d2fb34ca8-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "1685953d-e272-42dc-bb87-e44d2fb34ca8" (UID: "1685953d-e272-42dc-bb87-e44d2fb34ca8"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 13 00:37:53.413973 kubelet[2778]: I0313 00:37:53.413904 2778 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1685953d-e272-42dc-bb87-e44d2fb34ca8-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "1685953d-e272-42dc-bb87-e44d2fb34ca8" (UID: "1685953d-e272-42dc-bb87-e44d2fb34ca8"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 13 00:37:53.413973 kubelet[2778]: I0313 00:37:53.413958 2778 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1685953d-e272-42dc-bb87-e44d2fb34ca8-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "1685953d-e272-42dc-bb87-e44d2fb34ca8" (UID: "1685953d-e272-42dc-bb87-e44d2fb34ca8"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 13 00:37:53.414661 kubelet[2778]: I0313 00:37:53.414593 2778 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1685953d-e272-42dc-bb87-e44d2fb34ca8-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "1685953d-e272-42dc-bb87-e44d2fb34ca8" (UID: "1685953d-e272-42dc-bb87-e44d2fb34ca8"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 13 00:37:53.415829 kubelet[2778]: I0313 00:37:53.415711 2778 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1685953d-e272-42dc-bb87-e44d2fb34ca8-hostproc" (OuterVolumeSpecName: "hostproc") pod "1685953d-e272-42dc-bb87-e44d2fb34ca8" (UID: "1685953d-e272-42dc-bb87-e44d2fb34ca8"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 13 00:37:53.416056 kubelet[2778]: I0313 00:37:53.416032 2778 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1685953d-e272-42dc-bb87-e44d2fb34ca8-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "1685953d-e272-42dc-bb87-e44d2fb34ca8" (UID: "1685953d-e272-42dc-bb87-e44d2fb34ca8"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 13 00:37:53.416253 kubelet[2778]: I0313 00:37:53.416233 2778 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1685953d-e272-42dc-bb87-e44d2fb34ca8-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "1685953d-e272-42dc-bb87-e44d2fb34ca8" (UID: "1685953d-e272-42dc-bb87-e44d2fb34ca8"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 13 00:37:53.416403 kubelet[2778]: I0313 00:37:53.416382 2778 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1685953d-e272-42dc-bb87-e44d2fb34ca8-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "1685953d-e272-42dc-bb87-e44d2fb34ca8" (UID: "1685953d-e272-42dc-bb87-e44d2fb34ca8"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 13 00:37:53.416549 kubelet[2778]: I0313 00:37:53.416531 2778 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1685953d-e272-42dc-bb87-e44d2fb34ca8-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "1685953d-e272-42dc-bb87-e44d2fb34ca8" (UID: "1685953d-e272-42dc-bb87-e44d2fb34ca8"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 13 00:37:53.416902 kubelet[2778]: I0313 00:37:53.416693 2778 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1685953d-e272-42dc-bb87-e44d2fb34ca8-cni-path" (OuterVolumeSpecName: "cni-path") pod "1685953d-e272-42dc-bb87-e44d2fb34ca8" (UID: "1685953d-e272-42dc-bb87-e44d2fb34ca8"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 13 00:37:53.421783 kubelet[2778]: I0313 00:37:53.421611 2778 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1685953d-e272-42dc-bb87-e44d2fb34ca8-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "1685953d-e272-42dc-bb87-e44d2fb34ca8" (UID: "1685953d-e272-42dc-bb87-e44d2fb34ca8"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 13 00:37:53.422662 kubelet[2778]: I0313 00:37:53.422501 2778 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1685953d-e272-42dc-bb87-e44d2fb34ca8-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "1685953d-e272-42dc-bb87-e44d2fb34ca8" (UID: "1685953d-e272-42dc-bb87-e44d2fb34ca8"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 13 00:37:53.423960 kubelet[2778]: I0313 00:37:53.423920 2778 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1685953d-e272-42dc-bb87-e44d2fb34ca8-kube-api-access-mccxh" (OuterVolumeSpecName: "kube-api-access-mccxh") pod "1685953d-e272-42dc-bb87-e44d2fb34ca8" (UID: "1685953d-e272-42dc-bb87-e44d2fb34ca8"). InnerVolumeSpecName "kube-api-access-mccxh". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 13 00:37:53.425931 kubelet[2778]: I0313 00:37:53.425866 2778 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1685953d-e272-42dc-bb87-e44d2fb34ca8-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "1685953d-e272-42dc-bb87-e44d2fb34ca8" (UID: "1685953d-e272-42dc-bb87-e44d2fb34ca8"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 13 00:37:53.512445 kubelet[2778]: I0313 00:37:53.512366 2778 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1685953d-e272-42dc-bb87-e44d2fb34ca8-cni-path\") on node \"ci-4459-2-4-n-a4844b4806\" DevicePath \"\"" Mar 13 00:37:53.512445 kubelet[2778]: I0313 00:37:53.512408 2778 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1685953d-e272-42dc-bb87-e44d2fb34ca8-clustermesh-secrets\") on node \"ci-4459-2-4-n-a4844b4806\" DevicePath \"\"" Mar 13 00:37:53.512445 kubelet[2778]: I0313 00:37:53.512426 2778 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1685953d-e272-42dc-bb87-e44d2fb34ca8-hubble-tls\") on node \"ci-4459-2-4-n-a4844b4806\" DevicePath \"\"" Mar 13 00:37:53.512445 kubelet[2778]: I0313 00:37:53.512442 2778 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1685953d-e272-42dc-bb87-e44d2fb34ca8-cilium-cgroup\") on node \"ci-4459-2-4-n-a4844b4806\" DevicePath \"\"" Mar 13 00:37:53.512445 kubelet[2778]: I0313 00:37:53.512457 2778 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1685953d-e272-42dc-bb87-e44d2fb34ca8-cilium-config-path\") on node \"ci-4459-2-4-n-a4844b4806\" DevicePath \"\"" Mar 13 00:37:53.512871 kubelet[2778]: I0313 00:37:53.512472 2778 
reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1685953d-e272-42dc-bb87-e44d2fb34ca8-hostproc\") on node \"ci-4459-2-4-n-a4844b4806\" DevicePath \"\"" Mar 13 00:37:53.512871 kubelet[2778]: I0313 00:37:53.512487 2778 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1685953d-e272-42dc-bb87-e44d2fb34ca8-cilium-run\") on node \"ci-4459-2-4-n-a4844b4806\" DevicePath \"\"" Mar 13 00:37:53.512871 kubelet[2778]: I0313 00:37:53.512499 2778 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1685953d-e272-42dc-bb87-e44d2fb34ca8-xtables-lock\") on node \"ci-4459-2-4-n-a4844b4806\" DevicePath \"\"" Mar 13 00:37:53.512871 kubelet[2778]: I0313 00:37:53.512512 2778 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1685953d-e272-42dc-bb87-e44d2fb34ca8-host-proc-sys-net\") on node \"ci-4459-2-4-n-a4844b4806\" DevicePath \"\"" Mar 13 00:37:53.512871 kubelet[2778]: I0313 00:37:53.512525 2778 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1685953d-e272-42dc-bb87-e44d2fb34ca8-bpf-maps\") on node \"ci-4459-2-4-n-a4844b4806\" DevicePath \"\"" Mar 13 00:37:53.512871 kubelet[2778]: I0313 00:37:53.512539 2778 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1685953d-e272-42dc-bb87-e44d2fb34ca8-etc-cni-netd\") on node \"ci-4459-2-4-n-a4844b4806\" DevicePath \"\"" Mar 13 00:37:53.512871 kubelet[2778]: I0313 00:37:53.512553 2778 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mccxh\" (UniqueName: \"kubernetes.io/projected/1685953d-e272-42dc-bb87-e44d2fb34ca8-kube-api-access-mccxh\") on node \"ci-4459-2-4-n-a4844b4806\" DevicePath \"\"" Mar 13 00:37:53.512871 kubelet[2778]: I0313 00:37:53.512567 2778 
reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1685953d-e272-42dc-bb87-e44d2fb34ca8-lib-modules\") on node \"ci-4459-2-4-n-a4844b4806\" DevicePath \"\"" Mar 13 00:37:53.513173 kubelet[2778]: I0313 00:37:53.512582 2778 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1685953d-e272-42dc-bb87-e44d2fb34ca8-host-proc-sys-kernel\") on node \"ci-4459-2-4-n-a4844b4806\" DevicePath \"\"" Mar 13 00:37:53.640665 systemd[1]: Removed slice kubepods-besteffort-pod92930507_60a7_45ed_a90c_f0e48f25a207.slice - libcontainer container kubepods-besteffort-pod92930507_60a7_45ed_a90c_f0e48f25a207.slice. Mar 13 00:37:53.648001 systemd[1]: Removed slice kubepods-burstable-pod1685953d_e272_42dc_bb87_e44d2fb34ca8.slice - libcontainer container kubepods-burstable-pod1685953d_e272_42dc_bb87_e44d2fb34ca8.slice. Mar 13 00:37:53.648158 systemd[1]: kubepods-burstable-pod1685953d_e272_42dc_bb87_e44d2fb34ca8.slice: Consumed 5.170s CPU time, 127M memory peak, 128K read from disk, 13.3M written to disk. 
Mar 13 00:37:54.069490 kubelet[2778]: I0313 00:37:54.069411 2778 scope.go:117] "RemoveContainer" containerID="9d4ead92ec6ae8667403960e90808dd27bb6be765e97b288c2dd3344089ab938" Mar 13 00:37:54.081744 containerd[1630]: time="2026-03-13T00:37:54.079604656Z" level=info msg="RemoveContainer for \"9d4ead92ec6ae8667403960e90808dd27bb6be765e97b288c2dd3344089ab938\"" Mar 13 00:37:54.092573 containerd[1630]: time="2026-03-13T00:37:54.092361461Z" level=info msg="RemoveContainer for \"9d4ead92ec6ae8667403960e90808dd27bb6be765e97b288c2dd3344089ab938\" returns successfully" Mar 13 00:37:54.096931 kubelet[2778]: I0313 00:37:54.096879 2778 scope.go:117] "RemoveContainer" containerID="9d4ead92ec6ae8667403960e90808dd27bb6be765e97b288c2dd3344089ab938" Mar 13 00:37:54.098071 containerd[1630]: time="2026-03-13T00:37:54.097949477Z" level=error msg="ContainerStatus for \"9d4ead92ec6ae8667403960e90808dd27bb6be765e97b288c2dd3344089ab938\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9d4ead92ec6ae8667403960e90808dd27bb6be765e97b288c2dd3344089ab938\": not found" Mar 13 00:37:54.099551 kubelet[2778]: E0313 00:37:54.098611 2778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9d4ead92ec6ae8667403960e90808dd27bb6be765e97b288c2dd3344089ab938\": not found" containerID="9d4ead92ec6ae8667403960e90808dd27bb6be765e97b288c2dd3344089ab938" Mar 13 00:37:54.099551 kubelet[2778]: I0313 00:37:54.098708 2778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9d4ead92ec6ae8667403960e90808dd27bb6be765e97b288c2dd3344089ab938"} err="failed to get container status \"9d4ead92ec6ae8667403960e90808dd27bb6be765e97b288c2dd3344089ab938\": rpc error: code = NotFound desc = an error occurred when try to find container \"9d4ead92ec6ae8667403960e90808dd27bb6be765e97b288c2dd3344089ab938\": not found" Mar 13 00:37:54.099551 
kubelet[2778]: I0313 00:37:54.098825 2778 scope.go:117] "RemoveContainer" containerID="96df9e8fa6bacf63ee7e3b9e384a652f99eef7bf72e5ff81dd934860e9eb5cf2" Mar 13 00:37:54.108542 containerd[1630]: time="2026-03-13T00:37:54.108453626Z" level=info msg="RemoveContainer for \"96df9e8fa6bacf63ee7e3b9e384a652f99eef7bf72e5ff81dd934860e9eb5cf2\"" Mar 13 00:37:54.118687 containerd[1630]: time="2026-03-13T00:37:54.118618384Z" level=info msg="RemoveContainer for \"96df9e8fa6bacf63ee7e3b9e384a652f99eef7bf72e5ff81dd934860e9eb5cf2\" returns successfully" Mar 13 00:37:54.119129 kubelet[2778]: I0313 00:37:54.119107 2778 scope.go:117] "RemoveContainer" containerID="d67827ad9076dc2c7eba3b9e94a4c2061b0e486d9ac6689a1856bd38e48dd743" Mar 13 00:37:54.121454 containerd[1630]: time="2026-03-13T00:37:54.121425752Z" level=info msg="RemoveContainer for \"d67827ad9076dc2c7eba3b9e94a4c2061b0e486d9ac6689a1856bd38e48dd743\"" Mar 13 00:37:54.139926 containerd[1630]: time="2026-03-13T00:37:54.127111757Z" level=info msg="RemoveContainer for \"d67827ad9076dc2c7eba3b9e94a4c2061b0e486d9ac6689a1856bd38e48dd743\" returns successfully" Mar 13 00:37:54.140023 kubelet[2778]: I0313 00:37:54.138732 2778 scope.go:117] "RemoveContainer" containerID="3b799423154771b3aacc0f8c47e8c508eb9a70fc6f6ed077781ac5cb9a476563" Mar 13 00:37:54.147643 containerd[1630]: time="2026-03-13T00:37:54.147604704Z" level=info msg="RemoveContainer for \"3b799423154771b3aacc0f8c47e8c508eb9a70fc6f6ed077781ac5cb9a476563\"" Mar 13 00:37:54.148015 systemd[1]: var-lib-kubelet-pods-1685953d\x2de272\x2d42dc\x2dbb87\x2de44d2fb34ca8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmccxh.mount: Deactivated successfully. Mar 13 00:37:54.148103 systemd[1]: var-lib-kubelet-pods-92930507\x2d60a7\x2d45ed\x2da90c\x2df0e48f25a207-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dp66bx.mount: Deactivated successfully. 
Mar 13 00:37:54.148165 systemd[1]: var-lib-kubelet-pods-1685953d\x2de272\x2d42dc\x2dbb87\x2de44d2fb34ca8-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Mar 13 00:37:54.148222 systemd[1]: var-lib-kubelet-pods-1685953d\x2de272\x2d42dc\x2dbb87\x2de44d2fb34ca8-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Mar 13 00:37:54.153475 containerd[1630]: time="2026-03-13T00:37:54.153459530Z" level=info msg="RemoveContainer for \"3b799423154771b3aacc0f8c47e8c508eb9a70fc6f6ed077781ac5cb9a476563\" returns successfully" Mar 13 00:37:54.153752 kubelet[2778]: I0313 00:37:54.153650 2778 scope.go:117] "RemoveContainer" containerID="a9ee74138e5a4c36abe239f2e3363ee24c227bd5ee0653c7248a0a3ba8977d93" Mar 13 00:37:54.154576 containerd[1630]: time="2026-03-13T00:37:54.154557733Z" level=info msg="RemoveContainer for \"a9ee74138e5a4c36abe239f2e3363ee24c227bd5ee0653c7248a0a3ba8977d93\"" Mar 13 00:37:54.157130 containerd[1630]: time="2026-03-13T00:37:54.157113230Z" level=info msg="RemoveContainer for \"a9ee74138e5a4c36abe239f2e3363ee24c227bd5ee0653c7248a0a3ba8977d93\" returns successfully" Mar 13 00:37:54.157361 kubelet[2778]: I0313 00:37:54.157332 2778 scope.go:117] "RemoveContainer" containerID="0e79511dca42ada743b348d583e509659c848600b140287ed54207990b9d4902" Mar 13 00:37:54.158414 containerd[1630]: time="2026-03-13T00:37:54.158387194Z" level=info msg="RemoveContainer for \"0e79511dca42ada743b348d583e509659c848600b140287ed54207990b9d4902\"" Mar 13 00:37:54.161069 containerd[1630]: time="2026-03-13T00:37:54.161049151Z" level=info msg="RemoveContainer for \"0e79511dca42ada743b348d583e509659c848600b140287ed54207990b9d4902\" returns successfully" Mar 13 00:37:54.161174 kubelet[2778]: I0313 00:37:54.161157 2778 scope.go:117] "RemoveContainer" containerID="96df9e8fa6bacf63ee7e3b9e384a652f99eef7bf72e5ff81dd934860e9eb5cf2" Mar 13 00:37:54.161414 containerd[1630]: time="2026-03-13T00:37:54.161365362Z" level=error 
msg="ContainerStatus for \"96df9e8fa6bacf63ee7e3b9e384a652f99eef7bf72e5ff81dd934860e9eb5cf2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"96df9e8fa6bacf63ee7e3b9e384a652f99eef7bf72e5ff81dd934860e9eb5cf2\": not found" Mar 13 00:37:54.161584 kubelet[2778]: E0313 00:37:54.161516 2778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"96df9e8fa6bacf63ee7e3b9e384a652f99eef7bf72e5ff81dd934860e9eb5cf2\": not found" containerID="96df9e8fa6bacf63ee7e3b9e384a652f99eef7bf72e5ff81dd934860e9eb5cf2" Mar 13 00:37:54.161669 kubelet[2778]: I0313 00:37:54.161584 2778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"96df9e8fa6bacf63ee7e3b9e384a652f99eef7bf72e5ff81dd934860e9eb5cf2"} err="failed to get container status \"96df9e8fa6bacf63ee7e3b9e384a652f99eef7bf72e5ff81dd934860e9eb5cf2\": rpc error: code = NotFound desc = an error occurred when try to find container \"96df9e8fa6bacf63ee7e3b9e384a652f99eef7bf72e5ff81dd934860e9eb5cf2\": not found" Mar 13 00:37:54.161669 kubelet[2778]: I0313 00:37:54.161599 2778 scope.go:117] "RemoveContainer" containerID="d67827ad9076dc2c7eba3b9e94a4c2061b0e486d9ac6689a1856bd38e48dd743" Mar 13 00:37:54.161806 containerd[1630]: time="2026-03-13T00:37:54.161791893Z" level=error msg="ContainerStatus for \"d67827ad9076dc2c7eba3b9e94a4c2061b0e486d9ac6689a1856bd38e48dd743\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d67827ad9076dc2c7eba3b9e94a4c2061b0e486d9ac6689a1856bd38e48dd743\": not found" Mar 13 00:37:54.161971 kubelet[2778]: E0313 00:37:54.161945 2778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d67827ad9076dc2c7eba3b9e94a4c2061b0e486d9ac6689a1856bd38e48dd743\": not found" 
containerID="d67827ad9076dc2c7eba3b9e94a4c2061b0e486d9ac6689a1856bd38e48dd743" Mar 13 00:37:54.161996 kubelet[2778]: I0313 00:37:54.161977 2778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d67827ad9076dc2c7eba3b9e94a4c2061b0e486d9ac6689a1856bd38e48dd743"} err="failed to get container status \"d67827ad9076dc2c7eba3b9e94a4c2061b0e486d9ac6689a1856bd38e48dd743\": rpc error: code = NotFound desc = an error occurred when try to find container \"d67827ad9076dc2c7eba3b9e94a4c2061b0e486d9ac6689a1856bd38e48dd743\": not found" Mar 13 00:37:54.161996 kubelet[2778]: I0313 00:37:54.161989 2778 scope.go:117] "RemoveContainer" containerID="3b799423154771b3aacc0f8c47e8c508eb9a70fc6f6ed077781ac5cb9a476563" Mar 13 00:37:54.162141 containerd[1630]: time="2026-03-13T00:37:54.162103144Z" level=error msg="ContainerStatus for \"3b799423154771b3aacc0f8c47e8c508eb9a70fc6f6ed077781ac5cb9a476563\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3b799423154771b3aacc0f8c47e8c508eb9a70fc6f6ed077781ac5cb9a476563\": not found" Mar 13 00:37:54.162218 kubelet[2778]: E0313 00:37:54.162191 2778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3b799423154771b3aacc0f8c47e8c508eb9a70fc6f6ed077781ac5cb9a476563\": not found" containerID="3b799423154771b3aacc0f8c47e8c508eb9a70fc6f6ed077781ac5cb9a476563" Mar 13 00:37:54.162218 kubelet[2778]: I0313 00:37:54.162206 2778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3b799423154771b3aacc0f8c47e8c508eb9a70fc6f6ed077781ac5cb9a476563"} err="failed to get container status \"3b799423154771b3aacc0f8c47e8c508eb9a70fc6f6ed077781ac5cb9a476563\": rpc error: code = NotFound desc = an error occurred when try to find container \"3b799423154771b3aacc0f8c47e8c508eb9a70fc6f6ed077781ac5cb9a476563\": not found" Mar 13 
00:37:54.162218 kubelet[2778]: I0313 00:37:54.162214 2778 scope.go:117] "RemoveContainer" containerID="a9ee74138e5a4c36abe239f2e3363ee24c227bd5ee0653c7248a0a3ba8977d93" Mar 13 00:37:54.162406 containerd[1630]: time="2026-03-13T00:37:54.162372085Z" level=error msg="ContainerStatus for \"a9ee74138e5a4c36abe239f2e3363ee24c227bd5ee0653c7248a0a3ba8977d93\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a9ee74138e5a4c36abe239f2e3363ee24c227bd5ee0653c7248a0a3ba8977d93\": not found" Mar 13 00:37:54.162482 kubelet[2778]: E0313 00:37:54.162458 2778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a9ee74138e5a4c36abe239f2e3363ee24c227bd5ee0653c7248a0a3ba8977d93\": not found" containerID="a9ee74138e5a4c36abe239f2e3363ee24c227bd5ee0653c7248a0a3ba8977d93" Mar 13 00:37:54.162482 kubelet[2778]: I0313 00:37:54.162468 2778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a9ee74138e5a4c36abe239f2e3363ee24c227bd5ee0653c7248a0a3ba8977d93"} err="failed to get container status \"a9ee74138e5a4c36abe239f2e3363ee24c227bd5ee0653c7248a0a3ba8977d93\": rpc error: code = NotFound desc = an error occurred when try to find container \"a9ee74138e5a4c36abe239f2e3363ee24c227bd5ee0653c7248a0a3ba8977d93\": not found" Mar 13 00:37:54.162482 kubelet[2778]: I0313 00:37:54.162476 2778 scope.go:117] "RemoveContainer" containerID="0e79511dca42ada743b348d583e509659c848600b140287ed54207990b9d4902" Mar 13 00:37:54.162647 containerd[1630]: time="2026-03-13T00:37:54.162554965Z" level=error msg="ContainerStatus for \"0e79511dca42ada743b348d583e509659c848600b140287ed54207990b9d4902\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0e79511dca42ada743b348d583e509659c848600b140287ed54207990b9d4902\": not found" Mar 13 00:37:54.162710 kubelet[2778]: E0313 00:37:54.162699 2778 
log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0e79511dca42ada743b348d583e509659c848600b140287ed54207990b9d4902\": not found" containerID="0e79511dca42ada743b348d583e509659c848600b140287ed54207990b9d4902" Mar 13 00:37:54.162750 kubelet[2778]: I0313 00:37:54.162738 2778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0e79511dca42ada743b348d583e509659c848600b140287ed54207990b9d4902"} err="failed to get container status \"0e79511dca42ada743b348d583e509659c848600b140287ed54207990b9d4902\": rpc error: code = NotFound desc = an error occurred when try to find container \"0e79511dca42ada743b348d583e509659c848600b140287ed54207990b9d4902\": not found" Mar 13 00:37:54.726159 kubelet[2778]: E0313 00:37:54.726100 2778 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 13 00:37:55.125047 sshd[4316]: Connection closed by 4.153.228.146 port 46836 Mar 13 00:37:55.126869 sshd-session[4313]: pam_unix(sshd:session): session closed for user core Mar 13 00:37:55.132490 systemd-logind[1603]: Session 21 logged out. Waiting for processes to exit. Mar 13 00:37:55.133985 systemd[1]: sshd@20-89.167.5.55:22-4.153.228.146:46836.service: Deactivated successfully. Mar 13 00:37:55.137801 systemd[1]: session-21.scope: Deactivated successfully. Mar 13 00:37:55.140259 systemd-logind[1603]: Removed session 21. Mar 13 00:37:55.259448 systemd[1]: Started sshd@21-89.167.5.55:22-4.153.228.146:46838.service - OpenSSH per-connection server daemon (4.153.228.146:46838). 
Mar 13 00:37:55.637898 kubelet[2778]: I0313 00:37:55.637852 2778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1685953d-e272-42dc-bb87-e44d2fb34ca8" path="/var/lib/kubelet/pods/1685953d-e272-42dc-bb87-e44d2fb34ca8/volumes" Mar 13 00:37:55.639326 kubelet[2778]: I0313 00:37:55.639296 2778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92930507-60a7-45ed-a90c-f0e48f25a207" path="/var/lib/kubelet/pods/92930507-60a7-45ed-a90c-f0e48f25a207/volumes" Mar 13 00:37:55.922448 sshd[4459]: Accepted publickey for core from 4.153.228.146 port 46838 ssh2: RSA SHA256:ihdQa0i/HnNGvKP5m9obD9eorZ8Lhhc0yafWx7ReGkQ Mar 13 00:37:55.923894 sshd-session[4459]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:37:55.929397 systemd-logind[1603]: New session 22 of user core. Mar 13 00:37:55.937992 systemd[1]: Started session-22.scope - Session 22 of User core. Mar 13 00:37:56.612178 systemd[1]: Created slice kubepods-burstable-pod5f5ff8cb_c085_477a_aced_804e1f88a56b.slice - libcontainer container kubepods-burstable-pod5f5ff8cb_c085_477a_aced_804e1f88a56b.slice. 
Mar 13 00:37:56.731070 kubelet[2778]: I0313 00:37:56.730976 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5f5ff8cb-c085-477a-aced-804e1f88a56b-host-proc-sys-kernel\") pod \"cilium-dkm4g\" (UID: \"5f5ff8cb-c085-477a-aced-804e1f88a56b\") " pod="kube-system/cilium-dkm4g" Mar 13 00:37:56.731070 kubelet[2778]: I0313 00:37:56.731055 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5f5ff8cb-c085-477a-aced-804e1f88a56b-cilium-cgroup\") pod \"cilium-dkm4g\" (UID: \"5f5ff8cb-c085-477a-aced-804e1f88a56b\") " pod="kube-system/cilium-dkm4g" Mar 13 00:37:56.731070 kubelet[2778]: I0313 00:37:56.731081 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5f5ff8cb-c085-477a-aced-804e1f88a56b-xtables-lock\") pod \"cilium-dkm4g\" (UID: \"5f5ff8cb-c085-477a-aced-804e1f88a56b\") " pod="kube-system/cilium-dkm4g" Mar 13 00:37:56.732065 kubelet[2778]: I0313 00:37:56.731103 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5f5ff8cb-c085-477a-aced-804e1f88a56b-host-proc-sys-net\") pod \"cilium-dkm4g\" (UID: \"5f5ff8cb-c085-477a-aced-804e1f88a56b\") " pod="kube-system/cilium-dkm4g" Mar 13 00:37:56.732065 kubelet[2778]: I0313 00:37:56.731124 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5f5ff8cb-c085-477a-aced-804e1f88a56b-hubble-tls\") pod \"cilium-dkm4g\" (UID: \"5f5ff8cb-c085-477a-aced-804e1f88a56b\") " pod="kube-system/cilium-dkm4g" Mar 13 00:37:56.732065 kubelet[2778]: I0313 00:37:56.731146 2778 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5f5ff8cb-c085-477a-aced-804e1f88a56b-lib-modules\") pod \"cilium-dkm4g\" (UID: \"5f5ff8cb-c085-477a-aced-804e1f88a56b\") " pod="kube-system/cilium-dkm4g" Mar 13 00:37:56.732065 kubelet[2778]: I0313 00:37:56.731167 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5f5ff8cb-c085-477a-aced-804e1f88a56b-cni-path\") pod \"cilium-dkm4g\" (UID: \"5f5ff8cb-c085-477a-aced-804e1f88a56b\") " pod="kube-system/cilium-dkm4g" Mar 13 00:37:56.732065 kubelet[2778]: I0313 00:37:56.731211 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5f5ff8cb-c085-477a-aced-804e1f88a56b-cilium-config-path\") pod \"cilium-dkm4g\" (UID: \"5f5ff8cb-c085-477a-aced-804e1f88a56b\") " pod="kube-system/cilium-dkm4g" Mar 13 00:37:56.732065 kubelet[2778]: I0313 00:37:56.731236 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5f5ff8cb-c085-477a-aced-804e1f88a56b-cilium-run\") pod \"cilium-dkm4g\" (UID: \"5f5ff8cb-c085-477a-aced-804e1f88a56b\") " pod="kube-system/cilium-dkm4g" Mar 13 00:37:56.732311 kubelet[2778]: I0313 00:37:56.731257 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5f5ff8cb-c085-477a-aced-804e1f88a56b-hostproc\") pod \"cilium-dkm4g\" (UID: \"5f5ff8cb-c085-477a-aced-804e1f88a56b\") " pod="kube-system/cilium-dkm4g" Mar 13 00:37:56.732311 kubelet[2778]: I0313 00:37:56.731280 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/5f5ff8cb-c085-477a-aced-804e1f88a56b-clustermesh-secrets\") pod \"cilium-dkm4g\" (UID: \"5f5ff8cb-c085-477a-aced-804e1f88a56b\") " pod="kube-system/cilium-dkm4g" Mar 13 00:37:56.732311 kubelet[2778]: I0313 00:37:56.731301 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fz8bn\" (UniqueName: \"kubernetes.io/projected/5f5ff8cb-c085-477a-aced-804e1f88a56b-kube-api-access-fz8bn\") pod \"cilium-dkm4g\" (UID: \"5f5ff8cb-c085-477a-aced-804e1f88a56b\") " pod="kube-system/cilium-dkm4g" Mar 13 00:37:56.732311 kubelet[2778]: I0313 00:37:56.731322 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5f5ff8cb-c085-477a-aced-804e1f88a56b-etc-cni-netd\") pod \"cilium-dkm4g\" (UID: \"5f5ff8cb-c085-477a-aced-804e1f88a56b\") " pod="kube-system/cilium-dkm4g" Mar 13 00:37:56.732311 kubelet[2778]: I0313 00:37:56.731343 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5f5ff8cb-c085-477a-aced-804e1f88a56b-bpf-maps\") pod \"cilium-dkm4g\" (UID: \"5f5ff8cb-c085-477a-aced-804e1f88a56b\") " pod="kube-system/cilium-dkm4g" Mar 13 00:37:56.732311 kubelet[2778]: I0313 00:37:56.731365 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/5f5ff8cb-c085-477a-aced-804e1f88a56b-cilium-ipsec-secrets\") pod \"cilium-dkm4g\" (UID: \"5f5ff8cb-c085-477a-aced-804e1f88a56b\") " pod="kube-system/cilium-dkm4g" Mar 13 00:37:56.750124 sshd[4462]: Connection closed by 4.153.228.146 port 46838 Mar 13 00:37:56.751030 sshd-session[4459]: pam_unix(sshd:session): session closed for user core Mar 13 00:37:56.759420 systemd-logind[1603]: Session 22 logged out. Waiting for processes to exit. 
Mar 13 00:37:56.760597 systemd[1]: sshd@21-89.167.5.55:22-4.153.228.146:46838.service: Deactivated successfully. Mar 13 00:37:56.764978 systemd[1]: session-22.scope: Deactivated successfully. Mar 13 00:37:56.769489 systemd-logind[1603]: Removed session 22. Mar 13 00:37:56.882369 systemd[1]: Started sshd@22-89.167.5.55:22-4.153.228.146:46844.service - OpenSSH per-connection server daemon (4.153.228.146:46844). Mar 13 00:37:56.917641 containerd[1630]: time="2026-03-13T00:37:56.917385397Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dkm4g,Uid:5f5ff8cb-c085-477a-aced-804e1f88a56b,Namespace:kube-system,Attempt:0,}" Mar 13 00:37:56.935396 containerd[1630]: time="2026-03-13T00:37:56.935342964Z" level=info msg="connecting to shim fc01208c8858b0c2cade431ba21abec8d9c7fb520d6fdc53895cd73f561cf280" address="unix:///run/containerd/s/4a85bc547e528b4c0007fc0fa25ab1e3394c0a0b11e9afb4d9027cbcc4b47fef" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:37:56.962770 systemd[1]: Started cri-containerd-fc01208c8858b0c2cade431ba21abec8d9c7fb520d6fdc53895cd73f561cf280.scope - libcontainer container fc01208c8858b0c2cade431ba21abec8d9c7fb520d6fdc53895cd73f561cf280. 
Mar 13 00:37:56.983459 containerd[1630]: time="2026-03-13T00:37:56.983387731Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dkm4g,Uid:5f5ff8cb-c085-477a-aced-804e1f88a56b,Namespace:kube-system,Attempt:0,} returns sandbox id \"fc01208c8858b0c2cade431ba21abec8d9c7fb520d6fdc53895cd73f561cf280\"" Mar 13 00:37:56.987775 containerd[1630]: time="2026-03-13T00:37:56.987741413Z" level=info msg="CreateContainer within sandbox \"fc01208c8858b0c2cade431ba21abec8d9c7fb520d6fdc53895cd73f561cf280\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 13 00:37:56.993048 containerd[1630]: time="2026-03-13T00:37:56.993011266Z" level=info msg="Container 4d204411bbcf19d2707e41bcd17a437c4bd40045c0aeebad2c18b4189b643340: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:37:56.997075 containerd[1630]: time="2026-03-13T00:37:56.997019357Z" level=info msg="CreateContainer within sandbox \"fc01208c8858b0c2cade431ba21abec8d9c7fb520d6fdc53895cd73f561cf280\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"4d204411bbcf19d2707e41bcd17a437c4bd40045c0aeebad2c18b4189b643340\"" Mar 13 00:37:56.997439 containerd[1630]: time="2026-03-13T00:37:56.997419528Z" level=info msg="StartContainer for \"4d204411bbcf19d2707e41bcd17a437c4bd40045c0aeebad2c18b4189b643340\"" Mar 13 00:37:56.997998 containerd[1630]: time="2026-03-13T00:37:56.997968069Z" level=info msg="connecting to shim 4d204411bbcf19d2707e41bcd17a437c4bd40045c0aeebad2c18b4189b643340" address="unix:///run/containerd/s/4a85bc547e528b4c0007fc0fa25ab1e3394c0a0b11e9afb4d9027cbcc4b47fef" protocol=ttrpc version=3 Mar 13 00:37:57.011735 systemd[1]: Started cri-containerd-4d204411bbcf19d2707e41bcd17a437c4bd40045c0aeebad2c18b4189b643340.scope - libcontainer container 4d204411bbcf19d2707e41bcd17a437c4bd40045c0aeebad2c18b4189b643340. 
Mar 13 00:37:57.036746 containerd[1630]: time="2026-03-13T00:37:57.036711220Z" level=info msg="StartContainer for \"4d204411bbcf19d2707e41bcd17a437c4bd40045c0aeebad2c18b4189b643340\" returns successfully" Mar 13 00:37:57.044441 systemd[1]: cri-containerd-4d204411bbcf19d2707e41bcd17a437c4bd40045c0aeebad2c18b4189b643340.scope: Deactivated successfully. Mar 13 00:37:57.045583 containerd[1630]: time="2026-03-13T00:37:57.045558752Z" level=info msg="received container exit event container_id:\"4d204411bbcf19d2707e41bcd17a437c4bd40045c0aeebad2c18b4189b643340\" id:\"4d204411bbcf19d2707e41bcd17a437c4bd40045c0aeebad2c18b4189b643340\" pid:4539 exited_at:{seconds:1773362277 nanos:45285842}" Mar 13 00:37:57.104137 containerd[1630]: time="2026-03-13T00:37:57.104104533Z" level=info msg="CreateContainer within sandbox \"fc01208c8858b0c2cade431ba21abec8d9c7fb520d6fdc53895cd73f561cf280\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 13 00:37:57.110526 containerd[1630]: time="2026-03-13T00:37:57.110495740Z" level=info msg="Container 168eee6e89b26816aa1f3ae905b2d9e580d40f16b156c3f6c4dce3da69898dec: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:37:57.116700 containerd[1630]: time="2026-03-13T00:37:57.116671636Z" level=info msg="CreateContainer within sandbox \"fc01208c8858b0c2cade431ba21abec8d9c7fb520d6fdc53895cd73f561cf280\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"168eee6e89b26816aa1f3ae905b2d9e580d40f16b156c3f6c4dce3da69898dec\"" Mar 13 00:37:57.117769 containerd[1630]: time="2026-03-13T00:37:57.117737439Z" level=info msg="StartContainer for \"168eee6e89b26816aa1f3ae905b2d9e580d40f16b156c3f6c4dce3da69898dec\"" Mar 13 00:37:57.118565 containerd[1630]: time="2026-03-13T00:37:57.118539631Z" level=info msg="connecting to shim 168eee6e89b26816aa1f3ae905b2d9e580d40f16b156c3f6c4dce3da69898dec" address="unix:///run/containerd/s/4a85bc547e528b4c0007fc0fa25ab1e3394c0a0b11e9afb4d9027cbcc4b47fef" 
protocol=ttrpc version=3 Mar 13 00:37:57.139848 systemd[1]: Started cri-containerd-168eee6e89b26816aa1f3ae905b2d9e580d40f16b156c3f6c4dce3da69898dec.scope - libcontainer container 168eee6e89b26816aa1f3ae905b2d9e580d40f16b156c3f6c4dce3da69898dec. Mar 13 00:37:57.165997 containerd[1630]: time="2026-03-13T00:37:57.165958093Z" level=info msg="StartContainer for \"168eee6e89b26816aa1f3ae905b2d9e580d40f16b156c3f6c4dce3da69898dec\" returns successfully" Mar 13 00:37:57.171654 systemd[1]: cri-containerd-168eee6e89b26816aa1f3ae905b2d9e580d40f16b156c3f6c4dce3da69898dec.scope: Deactivated successfully. Mar 13 00:37:57.172428 containerd[1630]: time="2026-03-13T00:37:57.172403660Z" level=info msg="received container exit event container_id:\"168eee6e89b26816aa1f3ae905b2d9e580d40f16b156c3f6c4dce3da69898dec\" id:\"168eee6e89b26816aa1f3ae905b2d9e580d40f16b156c3f6c4dce3da69898dec\" pid:4587 exited_at:{seconds:1773362277 nanos:171801518}" Mar 13 00:37:57.518465 sshd[4477]: Accepted publickey for core from 4.153.228.146 port 46844 ssh2: RSA SHA256:ihdQa0i/HnNGvKP5m9obD9eorZ8Lhhc0yafWx7ReGkQ Mar 13 00:37:57.520291 sshd-session[4477]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:37:57.526549 systemd-logind[1603]: New session 23 of user core. Mar 13 00:37:57.533785 systemd[1]: Started session-23.scope - Session 23 of User core. Mar 13 00:37:57.881823 sshd[4617]: Connection closed by 4.153.228.146 port 46844 Mar 13 00:37:57.883161 sshd-session[4477]: pam_unix(sshd:session): session closed for user core Mar 13 00:37:57.889014 systemd[1]: sshd@22-89.167.5.55:22-4.153.228.146:46844.service: Deactivated successfully. Mar 13 00:37:57.893009 systemd[1]: session-23.scope: Deactivated successfully. Mar 13 00:37:57.897913 systemd-logind[1603]: Session 23 logged out. Waiting for processes to exit. Mar 13 00:37:57.900480 systemd-logind[1603]: Removed session 23. 
Mar 13 00:37:58.016723 systemd[1]: Started sshd@23-89.167.5.55:22-4.153.228.146:46860.service - OpenSSH per-connection server daemon (4.153.228.146:46860). Mar 13 00:37:58.115156 containerd[1630]: time="2026-03-13T00:37:58.115082965Z" level=info msg="CreateContainer within sandbox \"fc01208c8858b0c2cade431ba21abec8d9c7fb520d6fdc53895cd73f561cf280\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 13 00:37:58.136658 containerd[1630]: time="2026-03-13T00:37:58.136457359Z" level=info msg="Container f880adb98eaa63045d17df3020d439c04d0f3e55105469a7f1b71185cbac5a2e: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:37:58.145394 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3258245005.mount: Deactivated successfully. Mar 13 00:37:58.155409 containerd[1630]: time="2026-03-13T00:37:58.155335426Z" level=info msg="CreateContainer within sandbox \"fc01208c8858b0c2cade431ba21abec8d9c7fb520d6fdc53895cd73f561cf280\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"f880adb98eaa63045d17df3020d439c04d0f3e55105469a7f1b71185cbac5a2e\"" Mar 13 00:37:58.157466 containerd[1630]: time="2026-03-13T00:37:58.156171688Z" level=info msg="StartContainer for \"f880adb98eaa63045d17df3020d439c04d0f3e55105469a7f1b71185cbac5a2e\"" Mar 13 00:37:58.159178 containerd[1630]: time="2026-03-13T00:37:58.159127526Z" level=info msg="connecting to shim f880adb98eaa63045d17df3020d439c04d0f3e55105469a7f1b71185cbac5a2e" address="unix:///run/containerd/s/4a85bc547e528b4c0007fc0fa25ab1e3394c0a0b11e9afb4d9027cbcc4b47fef" protocol=ttrpc version=3 Mar 13 00:37:58.182806 systemd[1]: Started cri-containerd-f880adb98eaa63045d17df3020d439c04d0f3e55105469a7f1b71185cbac5a2e.scope - libcontainer container f880adb98eaa63045d17df3020d439c04d0f3e55105469a7f1b71185cbac5a2e. 
Mar 13 00:37:58.267345 containerd[1630]: time="2026-03-13T00:37:58.267295049Z" level=info msg="StartContainer for \"f880adb98eaa63045d17df3020d439c04d0f3e55105469a7f1b71185cbac5a2e\" returns successfully" Mar 13 00:37:58.269600 systemd[1]: cri-containerd-f880adb98eaa63045d17df3020d439c04d0f3e55105469a7f1b71185cbac5a2e.scope: Deactivated successfully. Mar 13 00:37:58.272215 containerd[1630]: time="2026-03-13T00:37:58.271958561Z" level=info msg="received container exit event container_id:\"f880adb98eaa63045d17df3020d439c04d0f3e55105469a7f1b71185cbac5a2e\" id:\"f880adb98eaa63045d17df3020d439c04d0f3e55105469a7f1b71185cbac5a2e\" pid:4639 exited_at:{seconds:1773362278 nanos:271378989}" Mar 13 00:37:58.289596 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f880adb98eaa63045d17df3020d439c04d0f3e55105469a7f1b71185cbac5a2e-rootfs.mount: Deactivated successfully. Mar 13 00:37:58.681831 sshd[4624]: Accepted publickey for core from 4.153.228.146 port 46860 ssh2: RSA SHA256:ihdQa0i/HnNGvKP5m9obD9eorZ8Lhhc0yafWx7ReGkQ Mar 13 00:37:58.687223 sshd-session[4624]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:37:58.700249 systemd-logind[1603]: New session 24 of user core. Mar 13 00:37:58.713007 systemd[1]: Started session-24.scope - Session 24 of User core. Mar 13 00:37:59.122267 containerd[1630]: time="2026-03-13T00:37:59.122096718Z" level=info msg="CreateContainer within sandbox \"fc01208c8858b0c2cade431ba21abec8d9c7fb520d6fdc53895cd73f561cf280\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Mar 13 00:37:59.144695 containerd[1630]: time="2026-03-13T00:37:59.141414175Z" level=info msg="Container 0511738ba60abab7bd4c982a8f028fa65b0b0acae46ce99ce4f63205e42324da: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:37:59.148727 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3475535974.mount: Deactivated successfully. 
Mar 13 00:37:59.157590 containerd[1630]: time="2026-03-13T00:37:59.157541545Z" level=info msg="CreateContainer within sandbox \"fc01208c8858b0c2cade431ba21abec8d9c7fb520d6fdc53895cd73f561cf280\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"0511738ba60abab7bd4c982a8f028fa65b0b0acae46ce99ce4f63205e42324da\"" Mar 13 00:37:59.158235 containerd[1630]: time="2026-03-13T00:37:59.158149466Z" level=info msg="StartContainer for \"0511738ba60abab7bd4c982a8f028fa65b0b0acae46ce99ce4f63205e42324da\"" Mar 13 00:37:59.159342 containerd[1630]: time="2026-03-13T00:37:59.159306049Z" level=info msg="connecting to shim 0511738ba60abab7bd4c982a8f028fa65b0b0acae46ce99ce4f63205e42324da" address="unix:///run/containerd/s/4a85bc547e528b4c0007fc0fa25ab1e3394c0a0b11e9afb4d9027cbcc4b47fef" protocol=ttrpc version=3 Mar 13 00:37:59.183821 systemd[1]: Started cri-containerd-0511738ba60abab7bd4c982a8f028fa65b0b0acae46ce99ce4f63205e42324da.scope - libcontainer container 0511738ba60abab7bd4c982a8f028fa65b0b0acae46ce99ce4f63205e42324da. Mar 13 00:37:59.217232 systemd[1]: cri-containerd-0511738ba60abab7bd4c982a8f028fa65b0b0acae46ce99ce4f63205e42324da.scope: Deactivated successfully. Mar 13 00:37:59.218598 containerd[1630]: time="2026-03-13T00:37:59.218533375Z" level=info msg="received container exit event container_id:\"0511738ba60abab7bd4c982a8f028fa65b0b0acae46ce99ce4f63205e42324da\" id:\"0511738ba60abab7bd4c982a8f028fa65b0b0acae46ce99ce4f63205e42324da\" pid:4688 exited_at:{seconds:1773362279 nanos:217524923}" Mar 13 00:37:59.219402 containerd[1630]: time="2026-03-13T00:37:59.219365888Z" level=info msg="StartContainer for \"0511738ba60abab7bd4c982a8f028fa65b0b0acae46ce99ce4f63205e42324da\" returns successfully" Mar 13 00:37:59.235477 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0511738ba60abab7bd4c982a8f028fa65b0b0acae46ce99ce4f63205e42324da-rootfs.mount: Deactivated successfully. 
Mar 13 00:37:59.727075 kubelet[2778]: E0313 00:37:59.727035 2778 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 13 00:38:00.131096 containerd[1630]: time="2026-03-13T00:38:00.130533959Z" level=info msg="CreateContainer within sandbox \"fc01208c8858b0c2cade431ba21abec8d9c7fb520d6fdc53895cd73f561cf280\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Mar 13 00:38:00.146852 containerd[1630]: time="2026-03-13T00:38:00.146781749Z" level=info msg="Container 18a84e211610a59ba933da8d221d74892461e0fdd6356211186b8a2022faf451: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:38:00.157571 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4204016843.mount: Deactivated successfully. Mar 13 00:38:00.158312 containerd[1630]: time="2026-03-13T00:38:00.158260166Z" level=info msg="CreateContainer within sandbox \"fc01208c8858b0c2cade431ba21abec8d9c7fb520d6fdc53895cd73f561cf280\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"18a84e211610a59ba933da8d221d74892461e0fdd6356211186b8a2022faf451\"" Mar 13 00:38:00.160166 containerd[1630]: time="2026-03-13T00:38:00.160092021Z" level=info msg="StartContainer for \"18a84e211610a59ba933da8d221d74892461e0fdd6356211186b8a2022faf451\"" Mar 13 00:38:00.161644 containerd[1630]: time="2026-03-13T00:38:00.161457584Z" level=info msg="connecting to shim 18a84e211610a59ba933da8d221d74892461e0fdd6356211186b8a2022faf451" address="unix:///run/containerd/s/4a85bc547e528b4c0007fc0fa25ab1e3394c0a0b11e9afb4d9027cbcc4b47fef" protocol=ttrpc version=3 Mar 13 00:38:00.180775 systemd[1]: Started cri-containerd-18a84e211610a59ba933da8d221d74892461e0fdd6356211186b8a2022faf451.scope - libcontainer container 18a84e211610a59ba933da8d221d74892461e0fdd6356211186b8a2022faf451. 
Mar 13 00:38:00.244806 containerd[1630]: time="2026-03-13T00:38:00.244772235Z" level=info msg="StartContainer for \"18a84e211610a59ba933da8d221d74892461e0fdd6356211186b8a2022faf451\" returns successfully" Mar 13 00:38:00.578678 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-vaes-avx10_512)) Mar 13 00:38:01.157838 kubelet[2778]: I0313 00:38:01.157395 2778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-dkm4g" podStartSLOduration=5.1573744 podStartE2EDuration="5.1573744s" podCreationTimestamp="2026-03-13 00:37:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 00:38:01.155595926 +0000 UTC m=+151.601046864" watchObservedRunningTime="2026-03-13 00:38:01.1573744 +0000 UTC m=+151.602825338" Mar 13 00:38:02.962707 kubelet[2778]: I0313 00:38:02.961789 2778 setters.go:543] "Node became not ready" node="ci-4459-2-4-n-a4844b4806" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T00:38:02Z","lastTransitionTime":"2026-03-13T00:38:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Mar 13 00:38:03.267817 systemd-networkd[1503]: lxc_health: Link UP Mar 13 00:38:03.275480 systemd-networkd[1503]: lxc_health: Gained carrier Mar 13 00:38:05.286759 kubelet[2778]: E0313 00:38:05.286715 2778 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:40764->127.0.0.1:39153: write tcp 127.0.0.1:40764->127.0.0.1:39153: write: broken pipe Mar 13 00:38:05.333852 systemd-networkd[1503]: lxc_health: Gained IPv6LL Mar 13 00:38:11.695690 kubelet[2778]: E0313 00:38:11.695617 2778 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:40784->127.0.0.1:39153: write tcp 127.0.0.1:40784->127.0.0.1:39153: write: 
broken pipe Mar 13 00:38:11.832669 sshd[4668]: Connection closed by 4.153.228.146 port 46860 Mar 13 00:38:11.834379 sshd-session[4624]: pam_unix(sshd:session): session closed for user core Mar 13 00:38:11.840210 systemd[1]: sshd@23-89.167.5.55:22-4.153.228.146:46860.service: Deactivated successfully. Mar 13 00:38:11.844657 systemd[1]: session-24.scope: Deactivated successfully. Mar 13 00:38:11.846486 systemd-logind[1603]: Session 24 logged out. Waiting for processes to exit. Mar 13 00:38:11.850672 systemd-logind[1603]: Removed session 24. Mar 13 00:38:29.362100 systemd[1]: cri-containerd-aa8f485e6fe5d6a688d7c966ebc7a3ac086521a9c4759dc1f102670e698ab8c3.scope: Deactivated successfully. Mar 13 00:38:29.363895 systemd[1]: cri-containerd-aa8f485e6fe5d6a688d7c966ebc7a3ac086521a9c4759dc1f102670e698ab8c3.scope: Consumed 3.775s CPU time, 59.5M memory peak. Mar 13 00:38:29.367388 containerd[1630]: time="2026-03-13T00:38:29.367287247Z" level=info msg="received container exit event container_id:\"aa8f485e6fe5d6a688d7c966ebc7a3ac086521a9c4759dc1f102670e698ab8c3\" id:\"aa8f485e6fe5d6a688d7c966ebc7a3ac086521a9c4759dc1f102670e698ab8c3\" pid:2627 exit_status:1 exited_at:{seconds:1773362309 nanos:364886273}" Mar 13 00:38:29.411509 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-aa8f485e6fe5d6a688d7c966ebc7a3ac086521a9c4759dc1f102670e698ab8c3-rootfs.mount: Deactivated successfully. 
Mar 13 00:38:29.632668 containerd[1630]: time="2026-03-13T00:38:29.632500382Z" level=info msg="StopPodSandbox for \"291a1bf406283c2a8b40059f9a20576a60217981a01b8e500aa9382dd0490483\"" Mar 13 00:38:29.632978 containerd[1630]: time="2026-03-13T00:38:29.632706063Z" level=info msg="TearDown network for sandbox \"291a1bf406283c2a8b40059f9a20576a60217981a01b8e500aa9382dd0490483\" successfully" Mar 13 00:38:29.632978 containerd[1630]: time="2026-03-13T00:38:29.632742693Z" level=info msg="StopPodSandbox for \"291a1bf406283c2a8b40059f9a20576a60217981a01b8e500aa9382dd0490483\" returns successfully" Mar 13 00:38:29.634312 containerd[1630]: time="2026-03-13T00:38:29.633154513Z" level=info msg="RemovePodSandbox for \"291a1bf406283c2a8b40059f9a20576a60217981a01b8e500aa9382dd0490483\"" Mar 13 00:38:29.634312 containerd[1630]: time="2026-03-13T00:38:29.633189893Z" level=info msg="Forcibly stopping sandbox \"291a1bf406283c2a8b40059f9a20576a60217981a01b8e500aa9382dd0490483\"" Mar 13 00:38:29.634312 containerd[1630]: time="2026-03-13T00:38:29.633278394Z" level=info msg="TearDown network for sandbox \"291a1bf406283c2a8b40059f9a20576a60217981a01b8e500aa9382dd0490483\" successfully" Mar 13 00:38:29.637888 containerd[1630]: time="2026-03-13T00:38:29.637820140Z" level=info msg="Ensure that sandbox 291a1bf406283c2a8b40059f9a20576a60217981a01b8e500aa9382dd0490483 in task-service has been cleanup successfully" Mar 13 00:38:29.642688 containerd[1630]: time="2026-03-13T00:38:29.642599907Z" level=info msg="RemovePodSandbox \"291a1bf406283c2a8b40059f9a20576a60217981a01b8e500aa9382dd0490483\" returns successfully" Mar 13 00:38:29.643260 containerd[1630]: time="2026-03-13T00:38:29.643213628Z" level=info msg="StopPodSandbox for \"10d8d7f80d483b860150838edec122c6a660ac3d6ae18559d4cb51fee92fd94c\"" Mar 13 00:38:29.643374 containerd[1630]: time="2026-03-13T00:38:29.643340868Z" level=info msg="TearDown network for sandbox \"10d8d7f80d483b860150838edec122c6a660ac3d6ae18559d4cb51fee92fd94c\" 
successfully" Mar 13 00:38:29.643374 containerd[1630]: time="2026-03-13T00:38:29.643368518Z" level=info msg="StopPodSandbox for \"10d8d7f80d483b860150838edec122c6a660ac3d6ae18559d4cb51fee92fd94c\" returns successfully" Mar 13 00:38:29.643736 containerd[1630]: time="2026-03-13T00:38:29.643683558Z" level=info msg="RemovePodSandbox for \"10d8d7f80d483b860150838edec122c6a660ac3d6ae18559d4cb51fee92fd94c\"" Mar 13 00:38:29.643806 containerd[1630]: time="2026-03-13T00:38:29.643734588Z" level=info msg="Forcibly stopping sandbox \"10d8d7f80d483b860150838edec122c6a660ac3d6ae18559d4cb51fee92fd94c\"" Mar 13 00:38:29.643858 containerd[1630]: time="2026-03-13T00:38:29.643814149Z" level=info msg="TearDown network for sandbox \"10d8d7f80d483b860150838edec122c6a660ac3d6ae18559d4cb51fee92fd94c\" successfully" Mar 13 00:38:29.646333 containerd[1630]: time="2026-03-13T00:38:29.646284792Z" level=info msg="Ensure that sandbox 10d8d7f80d483b860150838edec122c6a660ac3d6ae18559d4cb51fee92fd94c in task-service has been cleanup successfully" Mar 13 00:38:29.649978 containerd[1630]: time="2026-03-13T00:38:29.649914927Z" level=info msg="RemovePodSandbox \"10d8d7f80d483b860150838edec122c6a660ac3d6ae18559d4cb51fee92fd94c\" returns successfully" Mar 13 00:38:29.660276 kubelet[2778]: E0313 00:38:29.659984 2778 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:34824->10.0.0.2:2379: read: connection timed out" Mar 13 00:38:30.210063 kubelet[2778]: I0313 00:38:30.209977 2778 scope.go:117] "RemoveContainer" containerID="aa8f485e6fe5d6a688d7c966ebc7a3ac086521a9c4759dc1f102670e698ab8c3" Mar 13 00:38:30.213144 containerd[1630]: time="2026-03-13T00:38:30.213070062Z" level=info msg="CreateContainer within sandbox \"33c3ecbc668c6cc2c242b6aa7f1f4b68ab4ec5c40b17780a669fc82479f18ee5\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Mar 13 00:38:30.227749 containerd[1630]: 
time="2026-03-13T00:38:30.225789910Z" level=info msg="Container d02a2396858b4373940f45ff36671055e016db112576b677441a6e8d455b7d0a: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:38:30.238777 containerd[1630]: time="2026-03-13T00:38:30.238707468Z" level=info msg="CreateContainer within sandbox \"33c3ecbc668c6cc2c242b6aa7f1f4b68ab4ec5c40b17780a669fc82479f18ee5\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"d02a2396858b4373940f45ff36671055e016db112576b677441a6e8d455b7d0a\"" Mar 13 00:38:30.239440 containerd[1630]: time="2026-03-13T00:38:30.239409319Z" level=info msg="StartContainer for \"d02a2396858b4373940f45ff36671055e016db112576b677441a6e8d455b7d0a\"" Mar 13 00:38:30.241304 containerd[1630]: time="2026-03-13T00:38:30.241243052Z" level=info msg="connecting to shim d02a2396858b4373940f45ff36671055e016db112576b677441a6e8d455b7d0a" address="unix:///run/containerd/s/b260159bd06b7984706251d776cdb64031ccc907edcc86bc8d2e82bf8b539b2e" protocol=ttrpc version=3 Mar 13 00:38:30.279924 systemd[1]: Started cri-containerd-d02a2396858b4373940f45ff36671055e016db112576b677441a6e8d455b7d0a.scope - libcontainer container d02a2396858b4373940f45ff36671055e016db112576b677441a6e8d455b7d0a. 
Mar 13 00:38:30.344671 containerd[1630]: time="2026-03-13T00:38:30.344599766Z" level=info msg="StartContainer for \"d02a2396858b4373940f45ff36671055e016db112576b677441a6e8d455b7d0a\" returns successfully" Mar 13 00:38:35.160865 kubelet[2778]: E0313 00:38:35.160565 2778 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:34432->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4459-2-4-n-a4844b4806.189c3f9ed523ca00 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4459-2-4-n-a4844b4806,UID:0a08690ab476469eb4e6e03a39e0faa4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4459-2-4-n-a4844b4806,},FirstTimestamp:2026-03-13 00:38:24.690407936 +0000 UTC m=+175.135858884,LastTimestamp:2026-03-13 00:38:24.690407936 +0000 UTC m=+175.135858884,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459-2-4-n-a4844b4806,}" Mar 13 00:38:35.199830 systemd[1]: cri-containerd-6d620b46a1434e3c52fa927497db9f27bfaa39f148644225ee20ffcd26ebb677.scope: Deactivated successfully. Mar 13 00:38:35.200358 systemd[1]: cri-containerd-6d620b46a1434e3c52fa927497db9f27bfaa39f148644225ee20ffcd26ebb677.scope: Consumed 2.760s CPU time, 21.1M memory peak. 
Mar 13 00:38:35.206370 containerd[1630]: time="2026-03-13T00:38:35.206209020Z" level=info msg="received container exit event container_id:\"6d620b46a1434e3c52fa927497db9f27bfaa39f148644225ee20ffcd26ebb677\" id:\"6d620b46a1434e3c52fa927497db9f27bfaa39f148644225ee20ffcd26ebb677\" pid:2633 exit_status:1 exited_at:{seconds:1773362315 nanos:204928158}" Mar 13 00:38:35.249466 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6d620b46a1434e3c52fa927497db9f27bfaa39f148644225ee20ffcd26ebb677-rootfs.mount: Deactivated successfully. Mar 13 00:38:36.226869 kubelet[2778]: I0313 00:38:36.226694 2778 scope.go:117] "RemoveContainer" containerID="6d620b46a1434e3c52fa927497db9f27bfaa39f148644225ee20ffcd26ebb677" Mar 13 00:38:36.229444 containerd[1630]: time="2026-03-13T00:38:36.229017290Z" level=info msg="CreateContainer within sandbox \"7c1f6cf2e2dbf727102ee45d3697d8ef7538d933d1a9f16bf490c5d16418919b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Mar 13 00:38:36.245010 containerd[1630]: time="2026-03-13T00:38:36.244956951Z" level=info msg="Container e927b326b36a8248e1bff9a8eab683a54c5ef5467f11120e36f0b76aadb8061b: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:38:36.254497 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4281972522.mount: Deactivated successfully. 
Mar 13 00:38:36.259142 containerd[1630]: time="2026-03-13T00:38:36.259081649Z" level=info msg="CreateContainer within sandbox \"7c1f6cf2e2dbf727102ee45d3697d8ef7538d933d1a9f16bf490c5d16418919b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"e927b326b36a8248e1bff9a8eab683a54c5ef5467f11120e36f0b76aadb8061b\"" Mar 13 00:38:36.260649 containerd[1630]: time="2026-03-13T00:38:36.259728970Z" level=info msg="StartContainer for \"e927b326b36a8248e1bff9a8eab683a54c5ef5467f11120e36f0b76aadb8061b\"" Mar 13 00:38:36.260649 containerd[1630]: time="2026-03-13T00:38:36.260452711Z" level=info msg="connecting to shim e927b326b36a8248e1bff9a8eab683a54c5ef5467f11120e36f0b76aadb8061b" address="unix:///run/containerd/s/a96c2c624a18ef9936fbf8a6758508b466be5d6c34b3da3ddb6dcf77b124375f" protocol=ttrpc version=3 Mar 13 00:38:36.282742 systemd[1]: Started cri-containerd-e927b326b36a8248e1bff9a8eab683a54c5ef5467f11120e36f0b76aadb8061b.scope - libcontainer container e927b326b36a8248e1bff9a8eab683a54c5ef5467f11120e36f0b76aadb8061b. Mar 13 00:38:36.326060 containerd[1630]: time="2026-03-13T00:38:36.326016665Z" level=info msg="StartContainer for \"e927b326b36a8248e1bff9a8eab683a54c5ef5467f11120e36f0b76aadb8061b\" returns successfully" Mar 13 00:38:39.661463 kubelet[2778]: E0313 00:38:39.661366 2778 controller.go:195] "Failed to update lease" err="Put \"https://89.167.5.55:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-4-n-a4844b4806?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 00:38:49.663141 kubelet[2778]: E0313 00:38:49.663050 2778 controller.go:195] "Failed to update lease" err="Put \"https://89.167.5.55:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-4-n-a4844b4806?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"