Apr 16 23:57:08.854934 kernel: Linux version 6.12.81-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Thu Apr 16 22:00:21 -00 2026
Apr 16 23:57:08.854951 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=f73cf1d40ab12c6181d739932b2133dbe986804f7665fccb580a411e6eed38d9
Apr 16 23:57:08.854958 kernel: BIOS-provided physical RAM map:
Apr 16 23:57:08.854963 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Apr 16 23:57:08.854970 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ed3efff] usable
Apr 16 23:57:08.854975 kernel: BIOS-e820: [mem 0x000000007ed3f000-0x000000007edfffff] reserved
Apr 16 23:57:08.854980 kernel: BIOS-e820: [mem 0x000000007ee00000-0x000000007f8ecfff] usable
Apr 16 23:57:08.854985 kernel: BIOS-e820: [mem 0x000000007f8ed000-0x000000007fb6cfff] reserved
Apr 16 23:57:08.854990 kernel: BIOS-e820: [mem 0x000000007fb6d000-0x000000007fb7efff] ACPI data
Apr 16 23:57:08.854994 kernel: BIOS-e820: [mem 0x000000007fb7f000-0x000000007fbfefff] ACPI NVS
Apr 16 23:57:08.854999 kernel: BIOS-e820: [mem 0x000000007fbff000-0x000000007ff7bfff] usable
Apr 16 23:57:08.855004 kernel: BIOS-e820: [mem 0x000000007ff7c000-0x000000007fffffff] reserved
Apr 16 23:57:08.855008 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Apr 16 23:57:08.855016 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Apr 16 23:57:08.855022 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Apr 16 23:57:08.855027 kernel: BIOS-e820: [mem 0x0000000100000000-0x0000000179ffffff] usable
Apr 16 23:57:08.855031 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Apr 16 23:57:08.855036 kernel: NX (Execute Disable) protection: active
Apr 16 23:57:08.855044 kernel: APIC: Static calls initialized
Apr 16 23:57:08.855049 kernel: e820: update [mem 0x7dfab018-0x7dfb4a57] usable ==> usable
Apr 16 23:57:08.855054 kernel: e820: update [mem 0x7df6f018-0x7dfaa657] usable ==> usable
Apr 16 23:57:08.855059 kernel: e820: update [mem 0x7dc01018-0x7dc3c657] usable ==> usable
Apr 16 23:57:08.855063 kernel: extended physical RAM map:
Apr 16 23:57:08.855068 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Apr 16 23:57:08.855073 kernel: reserve setup_data: [mem 0x0000000000100000-0x000000007dc01017] usable
Apr 16 23:57:08.855078 kernel: reserve setup_data: [mem 0x000000007dc01018-0x000000007dc3c657] usable
Apr 16 23:57:08.855083 kernel: reserve setup_data: [mem 0x000000007dc3c658-0x000000007df6f017] usable
Apr 16 23:57:08.855088 kernel: reserve setup_data: [mem 0x000000007df6f018-0x000000007dfaa657] usable
Apr 16 23:57:08.855092 kernel: reserve setup_data: [mem 0x000000007dfaa658-0x000000007dfab017] usable
Apr 16 23:57:08.855100 kernel: reserve setup_data: [mem 0x000000007dfab018-0x000000007dfb4a57] usable
Apr 16 23:57:08.855105 kernel: reserve setup_data: [mem 0x000000007dfb4a58-0x000000007ed3efff] usable
Apr 16 23:57:08.855123 kernel: reserve setup_data: [mem 0x000000007ed3f000-0x000000007edfffff] reserved
Apr 16 23:57:08.855128 kernel: reserve setup_data: [mem 0x000000007ee00000-0x000000007f8ecfff] usable
Apr 16 23:57:08.855133 kernel: reserve setup_data: [mem 0x000000007f8ed000-0x000000007fb6cfff] reserved
Apr 16 23:57:08.855138 kernel: reserve setup_data: [mem 0x000000007fb6d000-0x000000007fb7efff] ACPI data
Apr 16 23:57:08.855143 kernel: reserve setup_data: [mem 0x000000007fb7f000-0x000000007fbfefff] ACPI NVS
Apr 16 23:57:08.855148 kernel: reserve setup_data: [mem 0x000000007fbff000-0x000000007ff7bfff] usable
Apr 16 23:57:08.855153 kernel: reserve setup_data: [mem 0x000000007ff7c000-0x000000007fffffff] reserved
Apr 16 23:57:08.855157 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Apr 16 23:57:08.855162 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Apr 16 23:57:08.855172 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Apr 16 23:57:08.855177 kernel: reserve setup_data: [mem 0x0000000100000000-0x0000000179ffffff] usable
Apr 16 23:57:08.855182 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Apr 16 23:57:08.855187 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II
Apr 16 23:57:08.855193 kernel: efi: SMBIOS=0x7f988000 SMBIOS 3.0=0x7f986000 ACPI=0x7fb7e000 ACPI 2.0=0x7fb7e014 MEMATTR=0x7e01b198 RNG=0x7fb73018
Apr 16 23:57:08.855200 kernel: random: crng init done
Apr 16 23:57:08.855206 kernel: efi: Remove mem137: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Apr 16 23:57:08.855211 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Apr 16 23:57:08.855216 kernel: secureboot: Secure boot disabled
Apr 16 23:57:08.855221 kernel: SMBIOS 3.0.0 present.
Apr 16 23:57:08.855226 kernel: DMI: Hetzner vServer/Standard PC (Q35 + ICH9, 2009), BIOS 20171111 11/11/2017
Apr 16 23:57:08.855231 kernel: DMI: Memory slots populated: 1/1
Apr 16 23:57:08.855236 kernel: Hypervisor detected: KVM
Apr 16 23:57:08.855241 kernel: last_pfn = 0x7ff7c max_arch_pfn = 0x10000000000
Apr 16 23:57:08.855246 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Apr 16 23:57:08.855251 kernel: kvm-clock: using sched offset of 13444165155 cycles
Apr 16 23:57:08.855258 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Apr 16 23:57:08.855264 kernel: tsc: Detected 2400.000 MHz processor
Apr 16 23:57:08.855269 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 16 23:57:08.855275 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 16 23:57:08.855280 kernel: last_pfn = 0x17a000 max_arch_pfn = 0x10000000000
Apr 16 23:57:08.855285 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Apr 16 23:57:08.855290 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 16 23:57:08.855295 kernel: last_pfn = 0x7ff7c max_arch_pfn = 0x10000000000
Apr 16 23:57:08.855301 kernel: Using GB pages for direct mapping
Apr 16 23:57:08.855308 kernel: ACPI: Early table checksum verification disabled
Apr 16 23:57:08.855313 kernel: ACPI: RSDP 0x000000007FB7E014 000024 (v02 BOCHS )
Apr 16 23:57:08.855318 kernel: ACPI: XSDT 0x000000007FB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Apr 16 23:57:08.855323 kernel: ACPI: FACP 0x000000007FB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 16 23:57:08.855329 kernel: ACPI: DSDT 0x000000007FB7A000 002443 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 16 23:57:08.855334 kernel: ACPI: FACS 0x000000007FBDD000 000040
Apr 16 23:57:08.855339 kernel: ACPI: APIC 0x000000007FB78000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 16 23:57:08.855344 kernel: ACPI: HPET 0x000000007FB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 16 23:57:08.855349 kernel: ACPI: MCFG 0x000000007FB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 16 23:57:08.855357 kernel: ACPI: WAET 0x000000007FB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 16 23:57:08.855362 kernel: ACPI: BGRT 0x000000007FB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Apr 16 23:57:08.855367 kernel: ACPI: Reserving FACP table memory at [mem 0x7fb79000-0x7fb790f3]
Apr 16 23:57:08.855372 kernel: ACPI: Reserving DSDT table memory at [mem 0x7fb7a000-0x7fb7c442]
Apr 16 23:57:08.855377 kernel: ACPI: Reserving FACS table memory at [mem 0x7fbdd000-0x7fbdd03f]
Apr 16 23:57:08.855382 kernel: ACPI: Reserving APIC table memory at [mem 0x7fb78000-0x7fb7807f]
Apr 16 23:57:08.855387 kernel: ACPI: Reserving HPET table memory at [mem 0x7fb77000-0x7fb77037]
Apr 16 23:57:08.855392 kernel: ACPI: Reserving MCFG table memory at [mem 0x7fb76000-0x7fb7603b]
Apr 16 23:57:08.855398 kernel: ACPI: Reserving WAET table memory at [mem 0x7fb75000-0x7fb75027]
Apr 16 23:57:08.855405 kernel: ACPI: Reserving BGRT table memory at [mem 0x7fb74000-0x7fb74037]
Apr 16 23:57:08.855410 kernel: No NUMA configuration found
Apr 16 23:57:08.855415 kernel: Faking a node at [mem 0x0000000000000000-0x0000000179ffffff]
Apr 16 23:57:08.855421 kernel: NODE_DATA(0) allocated [mem 0x179ff8dc0-0x179ffffff]
Apr 16 23:57:08.855426 kernel: Zone ranges:
Apr 16 23:57:08.855431 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 16 23:57:08.855436 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Apr 16 23:57:08.855442 kernel: Normal [mem 0x0000000100000000-0x0000000179ffffff]
Apr 16 23:57:08.855447 kernel: Device empty
Apr 16 23:57:08.855454 kernel: Movable zone start for each node
Apr 16 23:57:08.855459 kernel: Early memory node ranges
Apr 16 23:57:08.855464 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Apr 16 23:57:08.855469 kernel: node 0: [mem 0x0000000000100000-0x000000007ed3efff]
Apr 16 23:57:08.855475 kernel: node 0: [mem 0x000000007ee00000-0x000000007f8ecfff]
Apr 16 23:57:08.855480 kernel: node 0: [mem 0x000000007fbff000-0x000000007ff7bfff]
Apr 16 23:57:08.855485 kernel: node 0: [mem 0x0000000100000000-0x0000000179ffffff]
Apr 16 23:57:08.855490 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x0000000179ffffff]
Apr 16 23:57:08.855495 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 16 23:57:08.855500 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Apr 16 23:57:08.855508 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Apr 16 23:57:08.855513 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Apr 16 23:57:08.855518 kernel: On node 0, zone Normal: 132 pages in unavailable ranges
Apr 16 23:57:08.855523 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
Apr 16 23:57:08.855528 kernel: ACPI: PM-Timer IO Port: 0x608
Apr 16 23:57:08.855533 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Apr 16 23:57:08.855539 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Apr 16 23:57:08.855544 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Apr 16 23:57:08.855549 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Apr 16 23:57:08.855556 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 16 23:57:08.855561 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Apr 16 23:57:08.855567 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Apr 16 23:57:08.855572 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 16 23:57:08.855577 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Apr 16 23:57:08.855582 kernel: CPU topo: Max. logical packages: 1
Apr 16 23:57:08.855587 kernel: CPU topo: Max. logical dies: 1
Apr 16 23:57:08.855600 kernel: CPU topo: Max. dies per package: 1
Apr 16 23:57:08.855606 kernel: CPU topo: Max. threads per core: 1
Apr 16 23:57:08.855611 kernel: CPU topo: Num. cores per package: 2
Apr 16 23:57:08.855616 kernel: CPU topo: Num. threads per package: 2
Apr 16 23:57:08.855622 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Apr 16 23:57:08.855629 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Apr 16 23:57:08.855634 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices
Apr 16 23:57:08.855640 kernel: Booting paravirtualized kernel on KVM
Apr 16 23:57:08.855645 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 16 23:57:08.855651 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Apr 16 23:57:08.855658 kernel: percpu: Embedded 60 pages/cpu s207448 r8192 d30120 u1048576
Apr 16 23:57:08.855663 kernel: pcpu-alloc: s207448 r8192 d30120 u1048576 alloc=1*2097152
Apr 16 23:57:08.855669 kernel: pcpu-alloc: [0] 0 1
Apr 16 23:57:08.855674 kernel: kvm-guest: PV spinlocks disabled, no host support
Apr 16 23:57:08.855680 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=f73cf1d40ab12c6181d739932b2133dbe986804f7665fccb580a411e6eed38d9
Apr 16 23:57:08.855685 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 16 23:57:08.855691 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 16 23:57:08.855696 kernel: Fallback order for Node 0: 0
Apr 16 23:57:08.855704 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1022792
Apr 16 23:57:08.855709 kernel: Policy zone: Normal
Apr 16 23:57:08.855714 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 16 23:57:08.855720 kernel: software IO TLB: area num 2.
Apr 16 23:57:08.855725 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Apr 16 23:57:08.855731 kernel: ftrace: allocating 40126 entries in 157 pages
Apr 16 23:57:08.855736 kernel: ftrace: allocated 157 pages with 5 groups
Apr 16 23:57:08.855741 kernel: Dynamic Preempt: voluntary
Apr 16 23:57:08.855746 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 16 23:57:08.855759 kernel: rcu: RCU event tracing is enabled.
Apr 16 23:57:08.855764 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Apr 16 23:57:08.855770 kernel: Trampoline variant of Tasks RCU enabled.
Apr 16 23:57:08.855775 kernel: Rude variant of Tasks RCU enabled.
Apr 16 23:57:08.855781 kernel: Tracing variant of Tasks RCU enabled.
Apr 16 23:57:08.855786 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 16 23:57:08.855791 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Apr 16 23:57:08.855797 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 16 23:57:08.855802 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 16 23:57:08.855808 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 16 23:57:08.855815 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Apr 16 23:57:08.855821 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 16 23:57:08.855826 kernel: Console: colour dummy device 80x25
Apr 16 23:57:08.855832 kernel: printk: legacy console [tty0] enabled
Apr 16 23:57:08.855837 kernel: printk: legacy console [ttyS0] enabled
Apr 16 23:57:08.855842 kernel: ACPI: Core revision 20240827
Apr 16 23:57:08.855848 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Apr 16 23:57:08.855853 kernel: APIC: Switch to symmetric I/O mode setup
Apr 16 23:57:08.855858 kernel: x2apic enabled
Apr 16 23:57:08.855866 kernel: APIC: Switched APIC routing to: physical x2apic
Apr 16 23:57:08.855879 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Apr 16 23:57:08.855885 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x22983777dd9, max_idle_ns: 440795300422 ns
Apr 16 23:57:08.855890 kernel: Calibrating delay loop (skipped) preset value.. 4800.00 BogoMIPS (lpj=2400000)
Apr 16 23:57:08.855895 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Apr 16 23:57:08.855901 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Apr 16 23:57:08.855906 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Apr 16 23:57:08.855912 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 16 23:57:08.855919 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS
Apr 16 23:57:08.855925 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Apr 16 23:57:08.855930 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Apr 16 23:57:08.855936 kernel: active return thunk: srso_alias_return_thunk
Apr 16 23:57:08.855941 kernel: Speculative Return Stack Overflow: Mitigation: Safe RET
Apr 16 23:57:08.855947 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Apr 16 23:57:08.855952 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 16 23:57:08.855957 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 16 23:57:08.855963 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 16 23:57:08.855970 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 16 23:57:08.855976 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Apr 16 23:57:08.855981 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Apr 16 23:57:08.855987 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Apr 16 23:57:08.855993 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Apr 16 23:57:08.855998 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 16 23:57:08.856003 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Apr 16 23:57:08.856009 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Apr 16 23:57:08.856014 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Apr 16 23:57:08.856021 kernel: x86/fpu: xstate_offset[9]: 2432, xstate_sizes[9]: 8
Apr 16 23:57:08.856027 kernel: x86/fpu: Enabled xstate features 0x2e7, context size is 2440 bytes, using 'compacted' format.
Apr 16 23:57:08.856032 kernel: Freeing SMP alternatives memory: 32K
Apr 16 23:57:08.856038 kernel: pid_max: default: 32768 minimum: 301
Apr 16 23:57:08.856043 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Apr 16 23:57:08.856048 kernel: landlock: Up and running.
Apr 16 23:57:08.856054 kernel: SELinux: Initializing.
Apr 16 23:57:08.856059 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 16 23:57:08.856064 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 16 23:57:08.856072 kernel: smpboot: CPU0: AMD EPYC-Genoa Processor (family: 0x19, model: 0x11, stepping: 0x0)
Apr 16 23:57:08.856077 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Apr 16 23:57:08.856083 kernel: ... version: 0
Apr 16 23:57:08.856088 kernel: ... bit width: 48
Apr 16 23:57:08.856093 kernel: ... generic registers: 6
Apr 16 23:57:08.856099 kernel: ... value mask: 0000ffffffffffff
Apr 16 23:57:08.856104 kernel: ... max period: 00007fffffffffff
Apr 16 23:57:08.856122 kernel: ... fixed-purpose events: 0
Apr 16 23:57:08.856128 kernel: ... event mask: 000000000000003f
Apr 16 23:57:08.856135 kernel: signal: max sigframe size: 3376
Apr 16 23:57:08.856140 kernel: rcu: Hierarchical SRCU implementation.
Apr 16 23:57:08.856146 kernel: rcu: Max phase no-delay instances is 400.
Apr 16 23:57:08.856151 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Apr 16 23:57:08.856157 kernel: smp: Bringing up secondary CPUs ...
Apr 16 23:57:08.856162 kernel: smpboot: x86: Booting SMP configuration:
Apr 16 23:57:08.856167 kernel: .... node #0, CPUs: #1
Apr 16 23:57:08.856173 kernel: smp: Brought up 1 node, 2 CPUs
Apr 16 23:57:08.856178 kernel: smpboot: Total of 2 processors activated (9600.00 BogoMIPS)
Apr 16 23:57:08.856186 kernel: Memory: 3813624K/4091168K available (14336K kernel code, 2453K rwdata, 26076K rodata, 46216K init, 2532K bss, 271900K reserved, 0K cma-reserved)
Apr 16 23:57:08.856191 kernel: devtmpfs: initialized
Apr 16 23:57:08.856197 kernel: x86/mm: Memory block size: 128MB
Apr 16 23:57:08.856202 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7fb7f000-0x7fbfefff] (524288 bytes)
Apr 16 23:57:08.856208 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 16 23:57:08.856213 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Apr 16 23:57:08.856218 kernel: pinctrl core: initialized pinctrl subsystem
Apr 16 23:57:08.856224 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 16 23:57:08.856229 kernel: audit: initializing netlink subsys (disabled)
Apr 16 23:57:08.856237 kernel: audit: type=2000 audit(1776383825.563:1): state=initialized audit_enabled=0 res=1
Apr 16 23:57:08.856242 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 16 23:57:08.856247 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 16 23:57:08.856253 kernel: cpuidle: using governor menu
Apr 16 23:57:08.856258 kernel: efi: Freeing EFI boot services memory: 34884K
Apr 16 23:57:08.856263 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 16 23:57:08.856269 kernel: dca service started, version 1.12.1
Apr 16 23:57:08.856274 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
Apr 16 23:57:08.856280 kernel: PCI: Using configuration type 1 for base access
Apr 16 23:57:08.856287 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 16 23:57:08.856293 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 16 23:57:08.856298 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 16 23:57:08.856304 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 16 23:57:08.856309 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 16 23:57:08.856314 kernel: ACPI: Added _OSI(Module Device)
Apr 16 23:57:08.856320 kernel: ACPI: Added _OSI(Processor Device)
Apr 16 23:57:08.856325 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 16 23:57:08.856330 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 16 23:57:08.856337 kernel: ACPI: Interpreter enabled
Apr 16 23:57:08.856343 kernel: ACPI: PM: (supports S0 S5)
Apr 16 23:57:08.856348 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 16 23:57:08.856354 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 16 23:57:08.856359 kernel: PCI: Using E820 reservations for host bridge windows
Apr 16 23:57:08.856364 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Apr 16 23:57:08.856370 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 16 23:57:08.856522 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 16 23:57:08.856628 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Apr 16 23:57:08.856726 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Apr 16 23:57:08.856733 kernel: PCI host bridge to bus 0000:00
Apr 16 23:57:08.856832 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Apr 16 23:57:08.856930 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Apr 16 23:57:08.857018 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 16 23:57:08.857105 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xdfffffff window]
Apr 16 23:57:08.857234 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Apr 16 23:57:08.857329 kernel: pci_bus 0000:00: root bus resource [mem 0xc000000000-0xc7ffffffff window]
Apr 16 23:57:08.857417 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 16 23:57:08.857530 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Apr 16 23:57:08.857655 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Apr 16 23:57:08.857757 kernel: pci 0000:00:01.0: BAR 0 [mem 0x80000000-0x807fffff pref]
Apr 16 23:57:08.857857 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc060500000-0xc060503fff 64bit pref]
Apr 16 23:57:08.857965 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8138a000-0x8138afff]
Apr 16 23:57:08.858061 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
Apr 16 23:57:08.858186 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Apr 16 23:57:08.858291 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Apr 16 23:57:08.858387 kernel: pci 0000:00:02.0: BAR 0 [mem 0x81389000-0x81389fff]
Apr 16 23:57:08.858483 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Apr 16 23:57:08.858581 kernel: pci 0000:00:02.0: bridge window [mem 0x81200000-0x812fffff]
Apr 16 23:57:08.858676 kernel: pci 0000:00:02.0: bridge window [mem 0xc060000000-0xc0600fffff 64bit pref]
Apr 16 23:57:08.858777 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Apr 16 23:57:08.858881 kernel: pci 0000:00:02.1: BAR 0 [mem 0x81388000-0x81388fff]
Apr 16 23:57:08.858977 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Apr 16 23:57:08.859072 kernel: pci 0000:00:02.1: bridge window [mem 0x81100000-0x811fffff]
Apr 16 23:57:08.861197 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Apr 16 23:57:08.861311 kernel: pci 0000:00:02.2: BAR 0 [mem 0x81387000-0x81387fff]
Apr 16 23:57:08.861412 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Apr 16 23:57:08.861508 kernel: pci 0000:00:02.2: bridge window [mem 0x81000000-0x810fffff]
Apr 16 23:57:08.861604 kernel: pci 0000:00:02.2: bridge window [mem 0xc060100000-0xc0601fffff 64bit pref]
Apr 16 23:57:08.861709 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Apr 16 23:57:08.861807 kernel: pci 0000:00:02.3: BAR 0 [mem 0x81386000-0x81386fff]
Apr 16 23:57:08.861911 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Apr 16 23:57:08.862010 kernel: pci 0000:00:02.3: bridge window [mem 0xc060200000-0xc0602fffff 64bit pref]
Apr 16 23:57:08.862137 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Apr 16 23:57:08.862238 kernel: pci 0000:00:02.4: BAR 0 [mem 0x81385000-0x81385fff]
Apr 16 23:57:08.862333 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Apr 16 23:57:08.862430 kernel: pci 0000:00:02.4: bridge window [mem 0x80f00000-0x80ffffff]
Apr 16 23:57:08.862525 kernel: pci 0000:00:02.4: bridge window [mem 0xc060300000-0xc0603fffff 64bit pref]
Apr 16 23:57:08.862625 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Apr 16 23:57:08.862725 kernel: pci 0000:00:02.5: BAR 0 [mem 0x81384000-0x81384fff]
Apr 16 23:57:08.862820 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Apr 16 23:57:08.862922 kernel: pci 0000:00:02.5: bridge window [mem 0x80e00000-0x80efffff]
Apr 16 23:57:08.863017 kernel: pci 0000:00:02.5: bridge window [mem 0xc060400000-0xc0604fffff 64bit pref]
Apr 16 23:57:08.863133 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Apr 16 23:57:08.863231 kernel: pci 0000:00:02.6: BAR 0 [mem 0x81383000-0x81383fff]
Apr 16 23:57:08.863329 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Apr 16 23:57:08.863427 kernel: pci 0000:00:02.6: bridge window [mem 0x80c00000-0x80dfffff]
Apr 16 23:57:08.863522 kernel: pci 0000:00:02.6: bridge window [mem 0xc000000000-0xc01fffffff 64bit pref]
Apr 16 23:57:08.863644 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Apr 16 23:57:08.863751 kernel: pci 0000:00:02.7: BAR 0 [mem 0x81382000-0x81382fff]
Apr 16 23:57:08.863847 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Apr 16 23:57:08.863950 kernel: pci 0000:00:02.7: bridge window [mem 0x80a00000-0x80bfffff]
Apr 16 23:57:08.864048 kernel: pci 0000:00:02.7: bridge window [mem 0xc020000000-0xc03fffffff 64bit pref]
Apr 16 23:57:08.864199 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Apr 16 23:57:08.864297 kernel: pci 0000:00:03.0: BAR 0 [mem 0x81381000-0x81381fff]
Apr 16 23:57:08.864392 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Apr 16 23:57:08.864486 kernel: pci 0000:00:03.0: bridge window [mem 0x80800000-0x809fffff]
Apr 16 23:57:08.864582 kernel: pci 0000:00:03.0: bridge window [mem 0xc040000000-0xc05fffffff 64bit pref]
Apr 16 23:57:08.864683 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Apr 16 23:57:08.864781 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Apr 16 23:57:08.864892 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Apr 16 23:57:08.864989 kernel: pci 0000:00:1f.2: BAR 4 [io 0x6040-0x605f]
Apr 16 23:57:08.865084 kernel: pci 0000:00:1f.2: BAR 5 [mem 0x81380000-0x81380fff]
Apr 16 23:57:08.865211 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Apr 16 23:57:08.865307 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6000-0x603f]
Apr 16 23:57:08.865416 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000 PCIe Endpoint
Apr 16 23:57:08.865521 kernel: pci 0000:01:00.0: BAR 1 [mem 0x81200000-0x81200fff]
Apr 16 23:57:08.865622 kernel: pci 0000:01:00.0: BAR 4 [mem 0xc060000000-0xc060003fff 64bit pref]
Apr 16 23:57:08.865722 kernel: pci 0000:01:00.0: ROM [mem 0xfff80000-0xffffffff pref]
Apr 16 23:57:08.865818 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Apr 16 23:57:08.865933 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330 PCIe Endpoint
Apr 16 23:57:08.866033 kernel: pci 0000:02:00.0: BAR 0 [mem 0x81100000-0x81103fff 64bit]
Apr 16 23:57:08.866153 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Apr 16 23:57:08.866265 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000 PCIe Endpoint
Apr 16 23:57:08.866365 kernel: pci 0000:03:00.0: BAR 1 [mem 0x81000000-0x81000fff]
Apr 16 23:57:08.866464 kernel: pci 0000:03:00.0: BAR 4 [mem 0xc060100000-0xc060103fff 64bit pref]
Apr 16 23:57:08.866560 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Apr 16 23:57:08.866666 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00 PCIe Endpoint
Apr 16 23:57:08.866767 kernel: pci 0000:04:00.0: BAR 4 [mem 0xc060200000-0xc060203fff 64bit pref]
Apr 16 23:57:08.866865 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Apr 16 23:57:08.866982 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00 PCIe Endpoint
Apr 16 23:57:08.867082 kernel: pci 0000:05:00.0: BAR 1 [mem 0x80f00000-0x80f00fff]
Apr 16 23:57:08.867211 kernel: pci 0000:05:00.0: BAR 4 [mem 0xc060300000-0xc060303fff 64bit pref]
Apr 16 23:57:08.867308 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Apr 16 23:57:08.867414 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000 PCIe Endpoint
Apr 16 23:57:08.867515 kernel: pci 0000:06:00.0: BAR 1 [mem 0x80e00000-0x80e00fff]
Apr 16 23:57:08.867618 kernel: pci 0000:06:00.0: BAR 4 [mem 0xc060400000-0xc060403fff 64bit pref]
Apr 16 23:57:08.867714 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Apr 16 23:57:08.867721 kernel: acpiphp: Slot [0] registered
Apr 16 23:57:08.867826 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000 PCIe Endpoint
Apr 16 23:57:08.867935 kernel: pci 0000:07:00.0: BAR 1 [mem 0x80c00000-0x80c00fff]
Apr 16 23:57:08.868035 kernel: pci 0000:07:00.0: BAR 4 [mem 0xc000000000-0xc000003fff 64bit pref]
Apr 16 23:57:08.868361 kernel: pci 0000:07:00.0: ROM [mem 0xfff80000-0xffffffff pref]
Apr 16 23:57:08.868469 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Apr 16 23:57:08.868476 kernel: acpiphp: Slot [0-2] registered
Apr 16 23:57:08.868572 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Apr 16 23:57:08.868579 kernel: acpiphp: Slot [0-3] registered
Apr 16 23:57:08.868675 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Apr 16 23:57:08.868699 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Apr 16 23:57:08.868704 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Apr 16 23:57:08.868710 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Apr 16 23:57:08.868718 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Apr 16 23:57:08.868723 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Apr 16 23:57:08.868731 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Apr 16 23:57:08.868736 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Apr 16 23:57:08.868742 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Apr 16 23:57:08.868748 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Apr 16 23:57:08.868753 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Apr 16 23:57:08.868759 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Apr 16 23:57:08.868765 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Apr 16 23:57:08.868773 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Apr 16 23:57:08.868778 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Apr 16 23:57:08.868787 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Apr 16 23:57:08.868792 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Apr 16 23:57:08.868798 kernel: iommu: Default domain type: Translated
Apr 16 23:57:08.868804 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 16 23:57:08.868812 kernel: efivars: Registered efivars operations
Apr 16 23:57:08.868817 kernel: PCI: Using ACPI for IRQ routing
Apr 16 23:57:08.868823 kernel: PCI: pci_cache_line_size set to 64 bytes
Apr 16 23:57:08.868829 kernel: e820: reserve RAM buffer [mem 0x7dc01018-0x7fffffff]
Apr 16 23:57:08.868834 kernel: e820: reserve RAM buffer [mem 0x7df6f018-0x7fffffff]
Apr 16 23:57:08.868840 kernel: e820: reserve RAM buffer [mem 0x7dfab018-0x7fffffff]
Apr 16 23:57:08.868845 kernel: e820: reserve RAM buffer [mem 0x7ed3f000-0x7fffffff]
Apr 16 23:57:08.868851 kernel: e820: reserve RAM buffer [mem 0x7f8ed000-0x7fffffff]
Apr 16 23:57:08.868857 kernel: e820: reserve RAM buffer [mem 0x7ff7c000-0x7fffffff]
Apr 16 23:57:08.868865 kernel: e820: reserve RAM buffer [mem 0x17a000000-0x17bffffff]
Apr 16 23:57:08.868970 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Apr 16 23:57:08.869066 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Apr 16 23:57:08.869178 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Apr 16 23:57:08.869185 kernel: vgaarb: loaded
Apr 16 23:57:08.869191 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Apr 16 23:57:08.869197 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Apr 16 23:57:08.869203 kernel: clocksource: Switched to clocksource kvm-clock
Apr 16 23:57:08.869211 kernel: VFS: Disk quotas dquot_6.6.0
Apr 16 23:57:08.869217 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 16 23:57:08.869223 kernel: pnp: PnP ACPI init
Apr 16 23:57:08.869329 kernel: system 00:04: [mem 0xe0000000-0xefffffff window] has been reserved
Apr 16 23:57:08.869337 kernel: pnp: PnP ACPI: found 5 devices
Apr 16 23:57:08.869343 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 16 23:57:08.869349 kernel: NET: Registered PF_INET protocol family
Apr 16 23:57:08.869355 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 16 23:57:08.869360 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 16 23:57:08.869369 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 16 23:57:08.869375 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 16 23:57:08.869380 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Apr 16 23:57:08.869386 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Apr 16 23:57:08.869392 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 16 23:57:08.869397 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 16 23:57:08.869403 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 16 23:57:08.869409 kernel: NET: Registered PF_XDP protocol family
Apr 16 23:57:08.869515 kernel: pci 0000:01:00.0: ROM [mem 0xfff80000-0xffffffff pref]: can't claim; no compatible bridge window
Apr 16 23:57:08.869618 kernel: pci 0000:07:00.0: ROM [mem 0xfff80000-0xffffffff pref]: can't claim; no compatible bridge window
Apr 16 23:57:08.869743 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Apr 16 23:57:08.869851 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Apr 16 23:57:08.869960 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Apr 16 23:57:08.870443 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x1fff]: assigned
Apr 16 23:57:08.870548 kernel: pci 0000:00:02.7: bridge window [io 0x2000-0x2fff]: assigned
Apr 16 23:57:08.870645 kernel: pci 0000:00:03.0: bridge window [io 0x3000-0x3fff]: assigned
Apr 16 23:57:08.870752 kernel: pci 0000:01:00.0: ROM [mem 0x81280000-0x812fffff pref]: assigned
Apr 16 23:57:08.870848 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Apr 16 23:57:08.870955 kernel: pci 0000:00:02.0: bridge window [mem 0x81200000-0x812fffff]
Apr 16 23:57:08.871051 kernel: pci 0000:00:02.0: bridge window [mem 0xc060000000-0xc0600fffff 64bit pref]
Apr 16 23:57:08.871218 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Apr 16 23:57:08.871317 kernel: pci 0000:00:02.1: bridge window [mem 0x81100000-0x811fffff]
Apr 16 23:57:08.872194 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Apr 16 23:57:08.872300 kernel: pci 0000:00:02.2: bridge window [mem 0x81000000-0x810fffff]
Apr 16 23:57:08.872396 kernel: pci 0000:00:02.2: bridge window [mem 0xc060100000-0xc0601fffff 64bit pref]
Apr 16 23:57:08.872496 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Apr 16 23:57:08.872594 kernel: pci 0000:00:02.3: bridge window [mem 0xc060200000-0xc0602fffff 64bit pref]
Apr 16 23:57:08.872692 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Apr 16 23:57:08.872787 kernel: pci 0000:00:02.4: bridge window [mem 0x80f00000-0x80ffffff]
Apr 16 23:57:08.872891 kernel: pci 0000:00:02.4: bridge window [mem 0xc060300000-0xc0603fffff 64bit pref]
Apr 16 23:57:08.872988 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Apr 16 23:57:08.873083 kernel: pci 0000:00:02.5: bridge window [mem 0x80e00000-0x80efffff]
Apr 16 23:57:08.873211 kernel: pci 0000:00:02.5: bridge window [mem 0xc060400000-0xc0604fffff 64bit pref]
Apr 16 23:57:08.873314 kernel: pci 0000:07:00.0: ROM [mem 0x80c80000-0x80cfffff pref]: assigned
Apr 16 23:57:08.873413 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Apr 16 23:57:08.873508 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x1fff]
Apr 16 23:57:08.873605 kernel: pci 0000:00:02.6: bridge window [mem 0x80c00000-0x80dfffff]
Apr 16 23:57:08.873701 kernel: pci 0000:00:02.6: bridge window [mem 0xc000000000-0xc01fffffff 64bit pref]
Apr 16 23:57:08.873796 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Apr 16 23:57:08.873903 kernel: pci 0000:00:02.7: bridge window [io 0x2000-0x2fff]
Apr 16 23:57:08.873998 kernel: pci 0000:00:02.7: bridge window [mem 0x80a00000-0x80bfffff]
Apr 16 23:57:08.874093 kernel: pci 0000:00:02.7: bridge window [mem 0xc020000000-0xc03fffffff 64bit pref]
Apr 16 23:57:08.874843 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Apr 16 23:57:08.874956 kernel: pci 0000:00:03.0: bridge window [io 0x3000-0x3fff]
Apr 16 23:57:08.875052 kernel: pci 0000:00:03.0: bridge window [mem 0x80800000-0x809fffff]
Apr 16 23:57:08.876181 kernel: pci 0000:00:03.0: bridge window [mem 0xc040000000-0xc05fffffff 64bit pref]
Apr 16 23:57:08.876281 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Apr 16 23:57:08.876372 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Apr 16 23:57:08.876477 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Apr 16 23:57:08.876567 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xdfffffff window]
Apr 16 23:57:08.876656 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Apr 16 23:57:08.876744 kernel: pci_bus 0000:00: resource 9 [mem 0xc000000000-0xc7ffffffff window]
Apr 16 23:57:08.876846 kernel: pci_bus 0000:01: resource 1 [mem 0x81200000-0x812fffff]
Apr 16 23:57:08.876949 kernel: pci_bus 0000:01: resource 2 [mem 0xc060000000-0xc0600fffff 64bit pref]
Apr 16 23:57:08.877052 kernel: pci_bus 0000:02: resource 1 [mem 0x81100000-0x811fffff]
Apr 16 23:57:08.877181 kernel: pci_bus 0000:03: resource 1 [mem 0x81000000-0x810fffff]
Apr 16 23:57:08.877276 kernel: pci_bus 0000:03: resource 2 [mem 0xc060100000-0xc0601fffff 64bit pref]
Apr 16 23:57:08.877377 kernel: pci_bus 0000:04: resource 2 [mem 0xc060200000-0xc0602fffff 64bit pref]
Apr 16 23:57:08.877476 kernel: pci_bus 0000:05: resource 1 [mem 0x80f00000-0x80ffffff]
Apr 16 23:57:08.877570 kernel: pci_bus 0000:05: resource 2 [mem 0xc060300000-0xc0603fffff 64bit pref]
Apr 16 23:57:08.877670 kernel: pci_bus 0000:06: resource 1 [mem 0x80e00000-0x80efffff]
Apr 16 23:57:08.877766 kernel: pci_bus 0000:06: resource 2 [mem 0xc060400000-0xc0604fffff 64bit pref]
Apr 16 23:57:08.877865 kernel: pci_bus 0000:07: resource 0 [io 0x1000-0x1fff]
Apr 16 23:57:08.877966 kernel: pci_bus 0000:07: resource 1 [mem 0x80c00000-0x80dfffff]
Apr 16 23:57:08.878059 kernel: pci_bus 0000:07: resource 2 [mem 0xc000000000-0xc01fffffff 64bit pref]
Apr 16 23:57:08.878803 kernel: pci_bus 0000:08: resource 0 [io 0x2000-0x2fff]
Apr 16 23:57:08.878918 kernel: pci_bus 0000:08: resource 1 [mem 0x80a00000-0x80bfffff]
Apr 16 23:57:08.879014 kernel: pci_bus 0000:08: resource 2 [mem 0xc020000000-0xc03fffffff 64bit pref]
Apr 16 23:57:08.879147 kernel: pci_bus 0000:09: resource 0 [io 0x3000-0x3fff]
Apr 16 23:57:08.879247 kernel: pci_bus 0000:09: resource 1 [mem 0x80800000-0x809fffff]
Apr 16 23:57:08.879404 kernel: pci_bus 0000:09: resource 2 [mem 0xc040000000-0xc05fffffff 64bit pref]
Apr 16 23:57:08.879414 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Apr 16 23:57:08.879420 kernel: PCI: CLS 0 bytes, default 64
Apr 16 23:57:08.879426 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Apr 16 23:57:08.879432 kernel: software IO TLB: mapped [mem 0x0000000077ffd000-0x000000007bffd000] (64MB)
Apr 16 23:57:08.879441 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x22983777dd9, max_idle_ns: 440795300422 ns
Apr 16 23:57:08.879447 kernel: Initialise system trusted keyrings
Apr 16 23:57:08.879453 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Apr 16 23:57:08.879459 kernel: Key type asymmetric registered
Apr 16 23:57:08.879464 kernel: Asymmetric key parser 'x509' registered
Apr 16 23:57:08.879470 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Apr 16 23:57:08.879476 kernel: io scheduler mq-deadline registered
Apr 16 23:57:08.879482 kernel: io scheduler kyber registered
Apr 16 23:57:08.879488 kernel: io scheduler bfq registered
Apr 16 23:57:08.879589 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24
Apr 16 23:57:08.879688 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24
Apr 16 23:57:08.879785 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25
Apr 16 23:57:08.879894 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25
Apr 16 23:57:08.879990 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26
Apr 16 23:57:08.880087 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26
Apr 16 23:57:08.881228 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27
Apr 16 23:57:08.881336 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27
Apr 16 23:57:08.881434 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28
Apr 16 23:57:08.881534 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28
Apr 16 23:57:08.881632 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29
Apr 16 23:57:08.881728 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29
Apr 16 23:57:08.881824 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30
Apr 16 23:57:08.881929 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30
Apr 16 23:57:08.882026 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31
Apr 16 23:57:08.882499 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31
Apr 16 23:57:08.882513 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Apr 16 23:57:08.882615 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 32
Apr 16 23:57:08.882713 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 32
Apr 16 23:57:08.882720 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Apr 16 23:57:08.882726 kernel: ACPI: \_SB_.GSIF: Enabled at IRQ 21
Apr 16 23:57:08.882732 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 16 23:57:08.882738 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Apr 16 23:57:08.882747 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Apr 16 23:57:08.882752 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Apr 16 23:57:08.882758 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Apr 16 23:57:08.882862 kernel: rtc_cmos 00:03: RTC can wake from S4
Apr 16 23:57:08.882879 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Apr 16 23:57:08.882971 kernel: rtc_cmos 00:03: registered as rtc0
Apr 16 23:57:08.883063 kernel: rtc_cmos 00:03: setting system clock to 2026-04-16T23:57:08 UTC (1776383828)
Apr 16 23:57:08.883171 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Apr 16 23:57:08.883179 kernel: amd_pstate: The CPPC feature is supported but currently disabled by the BIOS. Please enable it if your BIOS has the CPPC option.
Apr 16 23:57:08.883186 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Apr 16 23:57:08.883191 kernel: efifb: probing for efifb
Apr 16 23:57:08.883197 kernel: efifb: framebuffer at 0x80000000, using 4000k, total 4000k
Apr 16 23:57:08.883203 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Apr 16 23:57:08.883209 kernel: efifb: scrolling: redraw
Apr 16 23:57:08.883214 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Apr 16 23:57:08.883220 kernel: Console: switching to colour frame buffer device 160x50
Apr 16 23:57:08.883229 kernel: fb0: EFI VGA frame buffer device
Apr 16 23:57:08.883234 kernel: pstore: Using crash dump compression: deflate
Apr 16 23:57:08.883240 kernel: pstore: Registered efi_pstore as persistent store backend
Apr 16 23:57:08.883246 kernel: NET: Registered PF_INET6 protocol family
Apr 16 23:57:08.883251 kernel: Segment Routing with IPv6
Apr 16 23:57:08.883257 kernel: In-situ OAM (IOAM) with IPv6
Apr 16 23:57:08.883263 kernel: NET: Registered PF_PACKET protocol family
Apr 16 23:57:08.883268 kernel: Key type dns_resolver registered
Apr 16 23:57:08.883274 kernel: IPI shorthand broadcast: enabled
Apr 16 23:57:08.883282 kernel: sched_clock: Marking stable (2823010850, 267902000)->(3143919370, -53006520)
Apr 16 23:57:08.883288 kernel: registered taskstats version 1
Apr 16 23:57:08.884144 kernel: Loading compiled-in X.509 certificates
Apr 16 23:57:08.884154 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.81-flatcar: 92f69eed5a22c94634d5240e5e65306547d4ba83'
Apr 16 23:57:08.884161 kernel: Demotion targets for Node 0: null
Apr 16 23:57:08.884167 kernel: Key type .fscrypt registered
Apr 16 23:57:08.884173 kernel: Key type fscrypt-provisioning registered
Apr 16 23:57:08.884178 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 16 23:57:08.884184 kernel: ima: Allocated hash algorithm: sha1
Apr 16 23:57:08.884194 kernel: ima: No architecture policies found
Apr 16 23:57:08.884200 kernel: clk: Disabling unused clocks
Apr 16 23:57:08.884206 kernel: Warning: unable to open an initial console.
Apr 16 23:57:08.884212 kernel: Freeing unused kernel image (initmem) memory: 46216K
Apr 16 23:57:08.884218 kernel: Write protecting the kernel read-only data: 40960k
Apr 16 23:57:08.884224 kernel: Freeing unused kernel image (rodata/data gap) memory: 548K
Apr 16 23:57:08.884230 kernel: Run /init as init process
Apr 16 23:57:08.884235 kernel: with arguments:
Apr 16 23:57:08.884241 kernel: /init
Apr 16 23:57:08.884249 kernel: with environment:
Apr 16 23:57:08.884255 kernel: HOME=/
Apr 16 23:57:08.884261 kernel: TERM=linux
Apr 16 23:57:08.884268 systemd[1]: Successfully made /usr/ read-only.
Apr 16 23:57:08.884276 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Apr 16 23:57:08.884283 systemd[1]: Detected virtualization kvm.
Apr 16 23:57:08.884289 systemd[1]: Detected architecture x86-64.
Apr 16 23:57:08.884297 systemd[1]: Running in initrd.
Apr 16 23:57:08.884303 systemd[1]: No hostname configured, using default hostname.
Apr 16 23:57:08.884310 systemd[1]: Hostname set to .
Apr 16 23:57:08.884316 systemd[1]: Initializing machine ID from VM UUID.
Apr 16 23:57:08.884322 systemd[1]: Queued start job for default target initrd.target.
Apr 16 23:57:08.884328 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 16 23:57:08.884334 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 16 23:57:08.884341 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 16 23:57:08.884349 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 16 23:57:08.884355 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 16 23:57:08.884362 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 16 23:57:08.884369 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 16 23:57:08.884375 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 16 23:57:08.884381 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 16 23:57:08.884387 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 16 23:57:08.884395 systemd[1]: Reached target paths.target - Path Units.
Apr 16 23:57:08.884401 systemd[1]: Reached target slices.target - Slice Units.
Apr 16 23:57:08.884407 systemd[1]: Reached target swap.target - Swaps.
Apr 16 23:57:08.884413 systemd[1]: Reached target timers.target - Timer Units.
Apr 16 23:57:08.884419 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 16 23:57:08.884425 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 16 23:57:08.884431 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 16 23:57:08.884437 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Apr 16 23:57:08.884443 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 16 23:57:08.884451 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 16 23:57:08.884457 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 16 23:57:08.884463 systemd[1]: Reached target sockets.target - Socket Units.
Apr 16 23:57:08.884469 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 16 23:57:08.884475 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 16 23:57:08.884481 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 16 23:57:08.884488 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Apr 16 23:57:08.884494 systemd[1]: Starting systemd-fsck-usr.service...
Apr 16 23:57:08.884502 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 16 23:57:08.884508 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 16 23:57:08.884535 systemd-journald[200]: Collecting audit messages is disabled.
Apr 16 23:57:08.884550 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 16 23:57:08.884559 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 16 23:57:08.884565 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 16 23:57:08.884571 systemd[1]: Finished systemd-fsck-usr.service.
Apr 16 23:57:08.884578 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 16 23:57:08.884585 systemd-journald[200]: Journal started
Apr 16 23:57:08.884601 systemd-journald[200]: Runtime Journal (/run/log/journal/5b31e3e9938848f597aa3aed9d6ee35c) is 8M, max 76.1M, 68.1M free.
Apr 16 23:57:08.884149 systemd-modules-load[202]: Inserted module 'overlay'
Apr 16 23:57:08.896137 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 16 23:57:08.896165 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 16 23:57:08.902579 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 16 23:57:08.910125 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 16 23:57:08.906201 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 16 23:57:08.917188 kernel: Bridge firewalling registered
Apr 16 23:57:08.913200 systemd-modules-load[202]: Inserted module 'br_netfilter'
Apr 16 23:57:08.916132 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 16 23:57:08.922306 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 16 23:57:08.925223 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 16 23:57:08.926628 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 16 23:57:08.930053 systemd-tmpfiles[217]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Apr 16 23:57:08.938940 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 16 23:57:08.941396 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 16 23:57:08.944232 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 16 23:57:08.955468 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 16 23:57:08.956642 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 16 23:57:08.960234 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 16 23:57:08.966165 dracut-cmdline[235]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=f73cf1d40ab12c6181d739932b2133dbe986804f7665fccb580a411e6eed38d9
Apr 16 23:57:08.992255 systemd-resolved[242]: Positive Trust Anchors:
Apr 16 23:57:08.992814 systemd-resolved[242]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 16 23:57:08.993307 systemd-resolved[242]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 16 23:57:08.996978 systemd-resolved[242]: Defaulting to hostname 'linux'.
Apr 16 23:57:08.998269 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 16 23:57:08.999136 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 16 23:57:09.040151 kernel: SCSI subsystem initialized
Apr 16 23:57:09.048131 kernel: Loading iSCSI transport class v2.0-870.
Apr 16 23:57:09.056145 kernel: iscsi: registered transport (tcp)
Apr 16 23:57:09.073513 kernel: iscsi: registered transport (qla4xxx)
Apr 16 23:57:09.073552 kernel: QLogic iSCSI HBA Driver
Apr 16 23:57:09.088814 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 16 23:57:09.101219 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 16 23:57:09.103140 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 16 23:57:09.139699 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 16 23:57:09.141207 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 16 23:57:09.186139 kernel: raid6: avx512x4 gen() 44478 MB/s
Apr 16 23:57:09.204136 kernel: raid6: avx512x2 gen() 46119 MB/s
Apr 16 23:57:09.222141 kernel: raid6: avx512x1 gen() 42732 MB/s
Apr 16 23:57:09.240130 kernel: raid6: avx2x4 gen() 44715 MB/s
Apr 16 23:57:09.258131 kernel: raid6: avx2x2 gen() 48643 MB/s
Apr 16 23:57:09.276326 kernel: raid6: avx2x1 gen() 21127 MB/s
Apr 16 23:57:09.276386 kernel: raid6: using algorithm avx2x2 gen() 48643 MB/s
Apr 16 23:57:09.297264 kernel: raid6: .... xor() 29464 MB/s, rmw enabled
Apr 16 23:57:09.297309 kernel: raid6: using avx512x2 recovery algorithm
Apr 16 23:57:09.313171 kernel: xor: automatically using best checksumming function avx
Apr 16 23:57:09.447166 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 16 23:57:09.457175 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 16 23:57:09.459102 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 16 23:57:09.501829 systemd-udevd[448]: Using default interface naming scheme 'v255'.
Apr 16 23:57:09.506944 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 16 23:57:09.512661 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 16 23:57:09.548997 dracut-pre-trigger[458]: rd.md=0: removing MD RAID activation
Apr 16 23:57:09.580276 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 16 23:57:09.581978 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 16 23:57:09.662159 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 16 23:57:09.668347 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Apr 16 23:57:09.747156 kernel: virtio_scsi virtio5: 2/0/0 default/read/poll queues
Apr 16 23:57:09.751231 kernel: cryptd: max_cpu_qlen set to 1000
Apr 16 23:57:09.759132 kernel: scsi host0: Virtio SCSI HBA
Apr 16 23:57:09.759319 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Apr 16 23:57:09.769176 kernel: libata version 3.00 loaded.
Apr 16 23:57:09.797435 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Apr 16 23:57:09.818615 kernel: AES CTR mode by8 optimization enabled
Apr 16 23:57:09.816626 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 16 23:57:09.816724 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 16 23:57:09.819434 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 16 23:57:09.820417 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 16 23:57:09.830779 kernel: ACPI: bus type USB registered
Apr 16 23:57:09.830825 kernel: usbcore: registered new interface driver usbfs
Apr 16 23:57:09.831001 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 16 23:57:09.884240 kernel: usbcore: registered new interface driver hub
Apr 16 23:57:09.884311 kernel: usbcore: registered new device driver usb
Apr 16 23:57:09.884324 kernel: sd 0:0:0:0: Power-on or device reset occurred
Apr 16 23:57:09.884763 kernel: sd 0:0:0:0: [sda] 160006144 512-byte logical blocks: (81.9 GB/76.3 GiB)
Apr 16 23:57:09.886514 kernel: sd 0:0:0:0: [sda] Write Protect is off
Apr 16 23:57:09.886663 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08
Apr 16 23:57:09.886800 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Apr 16 23:57:09.886944 kernel: ahci 0000:00:1f.2: version 3.0
Apr 16 23:57:09.887071 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Apr 16 23:57:09.887081 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Apr 16 23:57:09.888832 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Apr 16 23:57:09.888960 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Apr 16 23:57:09.889072 kernel: scsi host1: ahci
Apr 16 23:57:09.896691 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Apr 16 23:57:09.896705 kernel: GPT:17805311 != 160006143
Apr 16 23:57:09.896714 kernel: GPT:Alternate GPT header not at the end of the disk.
Apr 16 23:57:09.896727 kernel: GPT:17805311 != 160006143
Apr 16 23:57:09.896735 kernel: GPT: Use GNU Parted to correct GPT errors.
Apr 16 23:57:09.896743 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 16 23:57:09.896751 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Apr 16 23:57:09.896910 kernel: scsi host2: ahci
Apr 16 23:57:09.897508 kernel: scsi host3: ahci
Apr 16 23:57:09.897633 kernel: scsi host4: ahci
Apr 16 23:57:09.897751 kernel: scsi host5: ahci
Apr 16 23:57:09.897864 kernel: scsi host6: ahci
Apr 16 23:57:09.897988 kernel: ata1: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380100 irq 48 lpm-pol 1
Apr 16 23:57:09.897996 kernel: ata2: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380180 irq 48 lpm-pol 1
Apr 16 23:57:09.898004 kernel: ata3: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380200 irq 48 lpm-pol 1
Apr 16 23:57:09.898012 kernel: ata4: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380280 irq 48 lpm-pol 1
Apr 16 23:57:09.898020 kernel: ata5: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380300 irq 48 lpm-pol 1
Apr 16 23:57:09.898031 kernel: ata6: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380380 irq 48 lpm-pol 1
Apr 16 23:57:09.834940 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 16 23:57:09.877420 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Apr 16 23:57:09.895032 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 16 23:57:09.937313 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Apr 16 23:57:09.938453 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 16 23:57:09.956573 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Apr 16 23:57:09.966134 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
Apr 16 23:57:09.966463 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
Apr 16 23:57:09.975438 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Apr 16 23:57:09.977090 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Apr 16 23:57:10.003134 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 16 23:57:10.003431 disk-uuid[639]: Primary Header is updated.
Apr 16 23:57:10.003431 disk-uuid[639]: Secondary Entries is updated.
Apr 16 23:57:10.003431 disk-uuid[639]: Secondary Header is updated.
Apr 16 23:57:10.213432 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Apr 16 23:57:10.213526 kernel: ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Apr 16 23:57:10.214148 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Apr 16 23:57:10.224176 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Apr 16 23:57:10.224243 kernel: ata1.00: LPM support broken, forcing max_power
Apr 16 23:57:10.229924 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Apr 16 23:57:10.235043 kernel: ata1.00: applying bridge limits
Apr 16 23:57:10.243324 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Apr 16 23:57:10.243370 kernel: ata1.00: LPM support broken, forcing max_power
Apr 16 23:57:10.247546 kernel: ata1.00: configured for UDMA/100
Apr 16 23:57:10.254749 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Apr 16 23:57:10.262280 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Apr 16 23:57:10.301537 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Apr 16 23:57:10.301963 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1
Apr 16 23:57:10.306216 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
Apr 16 23:57:10.310440 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Apr 16 23:57:10.310625 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2
Apr 16 23:57:10.311468 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed
Apr 16 23:57:10.313209 kernel: hub 1-0:1.0: USB hub found
Apr 16 23:57:10.315598 kernel: hub 1-0:1.0: 4 ports detected
Apr 16 23:57:10.318259 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Apr 16 23:57:10.319701 kernel: hub 2-0:1.0: USB hub found
Apr 16 23:57:10.320032 kernel: hub 2-0:1.0: 4 ports detected
Apr 16 23:57:10.340729 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Apr 16 23:57:10.340951 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Apr 16 23:57:10.356409 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0
Apr 16 23:57:10.555229 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd
Apr 16 23:57:10.639374 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 16 23:57:10.641471 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 16 23:57:10.642874 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 16 23:57:10.643577 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 16 23:57:10.646445 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 16 23:57:10.682664 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Apr 16 23:57:10.701154 kernel: hid: raw HID events driver (C) Jiri Kosina
Apr 16 23:57:10.710602 kernel: usbcore: registered new interface driver usbhid
Apr 16 23:57:10.710625 kernel: usbhid: USB HID core driver
Apr 16 23:57:10.722132 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input4
Apr 16 23:57:10.729157 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0
Apr 16 23:57:11.028027 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 16 23:57:11.029472 disk-uuid[640]: The operation has completed successfully.
Apr 16 23:57:11.109518 systemd[1]: disk-uuid.service: Deactivated successfully.
Apr 16 23:57:11.109613 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Apr 16 23:57:11.128733 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Apr 16 23:57:11.144746 sh[685]: Success Apr 16 23:57:11.161738 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Apr 16 23:57:11.161804 kernel: device-mapper: uevent: version 1.0.3 Apr 16 23:57:11.162504 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Apr 16 23:57:11.174136 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 16 23:57:11.223905 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Apr 16 23:57:11.225643 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Apr 16 23:57:11.240265 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Apr 16 23:57:11.250133 kernel: BTRFS: device fsid d1542dca-1171-4bcf-9aae-d85dd05fe503 devid 1 transid 32 /dev/mapper/usr (254:0) scanned by mount (697) Apr 16 23:57:11.250164 kernel: BTRFS info (device dm-0): first mount of filesystem d1542dca-1171-4bcf-9aae-d85dd05fe503 Apr 16 23:57:11.255740 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Apr 16 23:57:11.269764 kernel: BTRFS info (device dm-0 state E): enabling ssd optimizations Apr 16 23:57:11.269812 kernel: BTRFS info (device dm-0 state E): disabling log replay at mount time Apr 16 23:57:11.269840 kernel: BTRFS info (device dm-0 state E): enabling free space tree Apr 16 23:57:11.273713 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Apr 16 23:57:11.275374 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Apr 16 23:57:11.276562 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. 
Apr 16 23:57:11.277779 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Apr 16 23:57:11.281067 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Apr 16 23:57:11.306508 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (726) Apr 16 23:57:11.306560 kernel: BTRFS info (device sda6): first mount of filesystem aa52e89c-0ed3-4175-9a87-dc7b421a671a Apr 16 23:57:11.308891 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Apr 16 23:57:11.317142 kernel: BTRFS info (device sda6): enabling ssd optimizations Apr 16 23:57:11.317170 kernel: BTRFS info (device sda6): turning on async discard Apr 16 23:57:11.317181 kernel: BTRFS info (device sda6): enabling free space tree Apr 16 23:57:11.324164 kernel: BTRFS info (device sda6): last unmount of filesystem aa52e89c-0ed3-4175-9a87-dc7b421a671a Apr 16 23:57:11.324820 systemd[1]: Finished ignition-setup.service - Ignition (setup). Apr 16 23:57:11.326537 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Apr 16 23:57:11.398062 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 16 23:57:11.401173 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 16 23:57:11.412842 ignition[788]: Ignition 2.22.0 Apr 16 23:57:11.412853 ignition[788]: Stage: fetch-offline Apr 16 23:57:11.412883 ignition[788]: no configs at "/usr/lib/ignition/base.d" Apr 16 23:57:11.412890 ignition[788]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Apr 16 23:57:11.415043 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Apr 16 23:57:11.412968 ignition[788]: parsed url from cmdline: "" Apr 16 23:57:11.412972 ignition[788]: no config URL provided Apr 16 23:57:11.412976 ignition[788]: reading system config file "/usr/lib/ignition/user.ign" Apr 16 23:57:11.412983 ignition[788]: no config at "/usr/lib/ignition/user.ign" Apr 16 23:57:11.412987 ignition[788]: failed to fetch config: resource requires networking Apr 16 23:57:11.413104 ignition[788]: Ignition finished successfully Apr 16 23:57:11.436203 systemd-networkd[872]: lo: Link UP Apr 16 23:57:11.436211 systemd-networkd[872]: lo: Gained carrier Apr 16 23:57:11.438589 systemd-networkd[872]: Enumeration completed Apr 16 23:57:11.439027 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 16 23:57:11.439338 systemd-networkd[872]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 16 23:57:11.439342 systemd-networkd[872]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 16 23:57:11.439413 systemd[1]: Reached target network.target - Network. Apr 16 23:57:11.440261 systemd-networkd[872]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 16 23:57:11.440265 systemd-networkd[872]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 16 23:57:11.440619 systemd-networkd[872]: eth0: Link UP Apr 16 23:57:11.440755 systemd-networkd[872]: eth1: Link UP Apr 16 23:57:11.440909 systemd-networkd[872]: eth0: Gained carrier Apr 16 23:57:11.440917 systemd-networkd[872]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 16 23:57:11.442333 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Apr 16 23:57:11.445583 systemd-networkd[872]: eth1: Gained carrier Apr 16 23:57:11.445593 systemd-networkd[872]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 16 23:57:11.462361 ignition[876]: Ignition 2.22.0 Apr 16 23:57:11.462371 ignition[876]: Stage: fetch Apr 16 23:57:11.462470 ignition[876]: no configs at "/usr/lib/ignition/base.d" Apr 16 23:57:11.462478 ignition[876]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Apr 16 23:57:11.462537 ignition[876]: parsed url from cmdline: "" Apr 16 23:57:11.462541 ignition[876]: no config URL provided Apr 16 23:57:11.462545 ignition[876]: reading system config file "/usr/lib/ignition/user.ign" Apr 16 23:57:11.462552 ignition[876]: no config at "/usr/lib/ignition/user.ign" Apr 16 23:57:11.462651 ignition[876]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1 Apr 16 23:57:11.462773 ignition[876]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable Apr 16 23:57:11.482152 systemd-networkd[872]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1 Apr 16 23:57:11.498144 systemd-networkd[872]: eth0: DHCPv4 address 77.42.22.14/32, gateway 172.31.1.1 acquired from 172.31.1.1 Apr 16 23:57:11.663826 ignition[876]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2 Apr 16 23:57:11.674343 ignition[876]: GET result: OK Apr 16 23:57:11.675277 ignition[876]: parsing config with SHA512: b66d37c603b0899fcd13d214353440ac946991d7998a571e1c40d4a83bb26f52be53f96ff4d70276b9975eb83aa12fb8e475d97dc2fa3437b8728639b0eb8a7a Apr 16 23:57:11.686158 unknown[876]: fetched base config from "system" Apr 16 23:57:11.686729 ignition[876]: fetch: fetch complete Apr 16 23:57:11.686187 unknown[876]: fetched base config from "system" Apr 16 23:57:11.686745 ignition[876]: fetch: fetch passed Apr 16 23:57:11.686204 unknown[876]: fetched user config from "hetzner"
Apr 16 23:57:11.686847 ignition[876]: Ignition finished successfully Apr 16 23:57:11.693677 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Apr 16 23:57:11.697484 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Apr 16 23:57:11.753012 ignition[884]: Ignition 2.22.0 Apr 16 23:57:11.753036 ignition[884]: Stage: kargs Apr 16 23:57:11.753264 ignition[884]: no configs at "/usr/lib/ignition/base.d" Apr 16 23:57:11.753283 ignition[884]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Apr 16 23:57:11.754427 ignition[884]: kargs: kargs passed Apr 16 23:57:11.754496 ignition[884]: Ignition finished successfully Apr 16 23:57:11.758035 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Apr 16 23:57:11.760219 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Apr 16 23:57:11.806010 ignition[890]: Ignition 2.22.0 Apr 16 23:57:11.806030 ignition[890]: Stage: disks Apr 16 23:57:11.806235 ignition[890]: no configs at "/usr/lib/ignition/base.d" Apr 16 23:57:11.806250 ignition[890]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Apr 16 23:57:11.807293 ignition[890]: disks: disks passed Apr 16 23:57:11.807357 ignition[890]: Ignition finished successfully Apr 16 23:57:11.810627 systemd[1]: Finished ignition-disks.service - Ignition (disks). Apr 16 23:57:11.812517 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Apr 16 23:57:11.813816 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Apr 16 23:57:11.814519 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 16 23:57:11.815641 systemd[1]: Reached target sysinit.target - System Initialization. Apr 16 23:57:11.816747 systemd[1]: Reached target basic.target - Basic System. Apr 16 23:57:11.819372 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 16 23:57:11.853973 systemd-fsck[898]: ROOT: clean, 15/1628000 files, 120826/1617920 blocks Apr 16 23:57:11.858418 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Apr 16 23:57:11.862258 systemd[1]: Mounting sysroot.mount - /sysroot... Apr 16 23:57:11.979132 kernel: EXT4-fs (sda9): mounted filesystem ee420a69-62b9-42f4-84c7-ea3f2d87c569 r/w with ordered data mode. Quota mode: none. Apr 16 23:57:11.980317 systemd[1]: Mounted sysroot.mount - /sysroot. Apr 16 23:57:11.982189 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Apr 16 23:57:11.985435 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 16 23:57:11.988558 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Apr 16 23:57:11.995851 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Apr 16 23:57:11.996568 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Apr 16 23:57:11.997229 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Apr 16 23:57:11.998886 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Apr 16 23:57:12.002215 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Apr 16 23:57:12.008132 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (906) Apr 16 23:57:12.013687 kernel: BTRFS info (device sda6): first mount of filesystem aa52e89c-0ed3-4175-9a87-dc7b421a671a Apr 16 23:57:12.013708 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Apr 16 23:57:12.020517 kernel: BTRFS info (device sda6): enabling ssd optimizations Apr 16 23:57:12.020548 kernel: BTRFS info (device sda6): turning on async discard Apr 16 23:57:12.024367 kernel: BTRFS info (device sda6): enabling free space tree Apr 16 23:57:12.027136 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Apr 16 23:57:12.051163 coreos-metadata[908]: Apr 16 23:57:12.051 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1 Apr 16 23:57:12.051163 coreos-metadata[908]: Apr 16 23:57:12.051 INFO Fetch successful Apr 16 23:57:12.052756 coreos-metadata[908]: Apr 16 23:57:12.052 INFO wrote hostname ci-4459-2-4-n-3f94367fd3 to /sysroot/etc/hostname Apr 16 23:57:12.054349 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Apr 16 23:57:12.056030 initrd-setup-root[933]: cut: /sysroot/etc/passwd: No such file or directory Apr 16 23:57:12.059976 initrd-setup-root[941]: cut: /sysroot/etc/group: No such file or directory Apr 16 23:57:12.064096 initrd-setup-root[948]: cut: /sysroot/etc/shadow: No such file or directory Apr 16 23:57:12.067128 initrd-setup-root[955]: cut: /sysroot/etc/gshadow: No such file or directory Apr 16 23:57:12.143291 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Apr 16 23:57:12.144926 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Apr 16 23:57:12.146159 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... 
Apr 16 23:57:12.159138 kernel: BTRFS info (device sda6): last unmount of filesystem aa52e89c-0ed3-4175-9a87-dc7b421a671a Apr 16 23:57:12.171329 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Apr 16 23:57:12.183487 ignition[1024]: INFO : Ignition 2.22.0 Apr 16 23:57:12.183487 ignition[1024]: INFO : Stage: mount Apr 16 23:57:12.184360 ignition[1024]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 16 23:57:12.184360 ignition[1024]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Apr 16 23:57:12.184360 ignition[1024]: INFO : mount: mount passed Apr 16 23:57:12.184360 ignition[1024]: INFO : Ignition finished successfully Apr 16 23:57:12.186482 systemd[1]: Finished ignition-mount.service - Ignition (mount). Apr 16 23:57:12.188014 systemd[1]: Starting ignition-files.service - Ignition (files)... Apr 16 23:57:12.248602 systemd[1]: sysroot-oem.mount: Deactivated successfully. Apr 16 23:57:12.250982 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 16 23:57:12.280165 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (1035) Apr 16 23:57:12.286351 kernel: BTRFS info (device sda6): first mount of filesystem aa52e89c-0ed3-4175-9a87-dc7b421a671a Apr 16 23:57:12.286395 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Apr 16 23:57:12.303592 kernel: BTRFS info (device sda6): enabling ssd optimizations Apr 16 23:57:12.303650 kernel: BTRFS info (device sda6): turning on async discard Apr 16 23:57:12.303672 kernel: BTRFS info (device sda6): enabling free space tree Apr 16 23:57:12.311817 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Apr 16 23:57:12.355300 ignition[1052]: INFO : Ignition 2.22.0 Apr 16 23:57:12.355300 ignition[1052]: INFO : Stage: files Apr 16 23:57:12.358249 ignition[1052]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 16 23:57:12.358249 ignition[1052]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Apr 16 23:57:12.358249 ignition[1052]: DEBUG : files: compiled without relabeling support, skipping Apr 16 23:57:12.361011 ignition[1052]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Apr 16 23:57:12.361011 ignition[1052]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Apr 16 23:57:12.363281 ignition[1052]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Apr 16 23:57:12.363281 ignition[1052]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Apr 16 23:57:12.365044 ignition[1052]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Apr 16 23:57:12.363357 unknown[1052]: wrote ssh authorized keys file for user: core Apr 16 23:57:12.367291 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Apr 16 23:57:12.367291 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Apr 16 23:57:12.631489 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Apr 16 23:57:12.949402 systemd-networkd[872]: eth1: Gained IPv6LL Apr 16 23:57:13.003278 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Apr 16 23:57:13.005536 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Apr 16 23:57:13.005536 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Apr 16 23:57:13.283848 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Apr 16 23:57:13.333354 systemd-networkd[872]: eth0: Gained IPv6LL Apr 16 23:57:13.393803 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Apr 16 23:57:13.395400 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Apr 16 23:57:13.395400 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Apr 16 23:57:13.395400 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Apr 16 23:57:13.395400 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Apr 16 23:57:13.395400 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 16 23:57:13.395400 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 16 23:57:13.395400 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 16 23:57:13.395400 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 16 23:57:13.395400 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Apr 16 23:57:13.395400 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 16 23:57:13.395400 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Apr 16 23:57:13.395400 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Apr 16 23:57:13.405291 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Apr 16 23:57:13.405291 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-x86-64.raw: attempt #1 Apr 16 23:57:13.712554 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Apr 16 23:57:13.975464 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Apr 16 23:57:13.975464 ignition[1052]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Apr 16 23:57:13.978174 ignition[1052]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 16 23:57:13.978174 ignition[1052]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 16 23:57:13.978174 ignition[1052]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Apr 16 23:57:13.978174 ignition[1052]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Apr 16 23:57:13.978174 ignition[1052]: INFO : files: op(e): op(f): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Apr 16 23:57:13.978174 ignition[1052]: INFO : files: op(e): op(f): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Apr 16 23:57:13.978174 ignition[1052]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Apr 16 23:57:13.978174 ignition[1052]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Apr 16 23:57:13.978174 ignition[1052]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Apr 16 23:57:13.978174 ignition[1052]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Apr 16 23:57:13.978174 ignition[1052]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Apr 16 23:57:13.978174 ignition[1052]: INFO : files: files passed Apr 16 23:57:13.978174 ignition[1052]: INFO : Ignition finished successfully Apr 16 23:57:13.982192 systemd[1]: Finished ignition-files.service - Ignition (files). Apr 16 23:57:13.986242 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Apr 16 23:57:13.988198 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Apr 16 23:57:13.998488 systemd[1]: ignition-quench.service: Deactivated successfully. Apr 16 23:57:13.999184 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 16 23:57:14.009983 initrd-setup-root-after-ignition[1082]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 16 23:57:14.010893 initrd-setup-root-after-ignition[1082]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Apr 16 23:57:14.012065 initrd-setup-root-after-ignition[1086]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 16 23:57:14.013910 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 16 23:57:14.014584 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Apr 16 23:57:14.017208 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Apr 16 23:57:14.060025 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Apr 16 23:57:14.060626 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Apr 16 23:57:14.061654 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Apr 16 23:57:14.062108 systemd[1]: Reached target initrd.target - Initrd Default Target. Apr 16 23:57:14.062959 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Apr 16 23:57:14.064224 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Apr 16 23:57:14.078318 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 16 23:57:14.080150 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Apr 16 23:57:14.091905 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Apr 16 23:57:14.092372 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 16 23:57:14.092786 systemd[1]: Stopped target timers.target - Timer Units. Apr 16 23:57:14.093287 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. 
Apr 16 23:57:14.093361 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 16 23:57:14.094482 systemd[1]: Stopped target initrd.target - Initrd Default Target. Apr 16 23:57:14.095195 systemd[1]: Stopped target basic.target - Basic System. Apr 16 23:57:14.095936 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Apr 16 23:57:14.096635 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Apr 16 23:57:14.097352 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Apr 16 23:57:14.098062 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Apr 16 23:57:14.098705 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Apr 16 23:57:14.099451 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Apr 16 23:57:14.100148 systemd[1]: Stopped target sysinit.target - System Initialization. Apr 16 23:57:14.100808 systemd[1]: Stopped target local-fs.target - Local File Systems. Apr 16 23:57:14.101545 systemd[1]: Stopped target swap.target - Swaps. Apr 16 23:57:14.102196 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Apr 16 23:57:14.102270 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Apr 16 23:57:14.103288 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Apr 16 23:57:14.104030 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 16 23:57:14.104647 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Apr 16 23:57:14.104727 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 16 23:57:14.105382 systemd[1]: dracut-initqueue.service: Deactivated successfully. Apr 16 23:57:14.105477 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Apr 16 23:57:14.106492 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. 
Apr 16 23:57:14.106600 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 16 23:57:14.107245 systemd[1]: ignition-files.service: Deactivated successfully. Apr 16 23:57:14.107342 systemd[1]: Stopped ignition-files.service - Ignition (files). Apr 16 23:57:14.107903 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Apr 16 23:57:14.108010 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Apr 16 23:57:14.109206 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Apr 16 23:57:14.111929 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Apr 16 23:57:14.113032 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Apr 16 23:57:14.113139 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Apr 16 23:57:14.114628 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Apr 16 23:57:14.114725 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Apr 16 23:57:14.120541 systemd[1]: initrd-cleanup.service: Deactivated successfully. Apr 16 23:57:14.120627 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Apr 16 23:57:14.133673 ignition[1106]: INFO : Ignition 2.22.0 Apr 16 23:57:14.133673 ignition[1106]: INFO : Stage: umount Apr 16 23:57:14.134583 ignition[1106]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 16 23:57:14.134583 ignition[1106]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Apr 16 23:57:14.134583 ignition[1106]: INFO : umount: umount passed Apr 16 23:57:14.134583 ignition[1106]: INFO : Ignition finished successfully Apr 16 23:57:14.136866 systemd[1]: sysroot-boot.mount: Deactivated successfully. Apr 16 23:57:14.139360 systemd[1]: ignition-mount.service: Deactivated successfully. Apr 16 23:57:14.139467 systemd[1]: Stopped ignition-mount.service - Ignition (mount). 
Apr 16 23:57:14.141015 systemd[1]: ignition-disks.service: Deactivated successfully. Apr 16 23:57:14.141083 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Apr 16 23:57:14.141514 systemd[1]: ignition-kargs.service: Deactivated successfully. Apr 16 23:57:14.141551 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Apr 16 23:57:14.142208 systemd[1]: ignition-fetch.service: Deactivated successfully. Apr 16 23:57:14.142247 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Apr 16 23:57:14.142832 systemd[1]: Stopped target network.target - Network. Apr 16 23:57:14.143891 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Apr 16 23:57:14.143945 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Apr 16 23:57:14.144702 systemd[1]: Stopped target paths.target - Path Units. Apr 16 23:57:14.145322 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Apr 16 23:57:14.150160 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 16 23:57:14.150529 systemd[1]: Stopped target slices.target - Slice Units. Apr 16 23:57:14.151965 systemd[1]: Stopped target sockets.target - Socket Units. Apr 16 23:57:14.152645 systemd[1]: iscsid.socket: Deactivated successfully. Apr 16 23:57:14.152682 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Apr 16 23:57:14.153842 systemd[1]: iscsiuio.socket: Deactivated successfully. Apr 16 23:57:14.153873 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 16 23:57:14.154443 systemd[1]: ignition-setup.service: Deactivated successfully. Apr 16 23:57:14.154494 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Apr 16 23:57:14.155611 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Apr 16 23:57:14.155650 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. 
Apr 16 23:57:14.156081 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Apr 16 23:57:14.156440 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Apr 16 23:57:14.161294 systemd[1]: systemd-resolved.service: Deactivated successfully. Apr 16 23:57:14.161380 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Apr 16 23:57:14.164861 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Apr 16 23:57:14.166317 systemd[1]: systemd-networkd.service: Deactivated successfully. Apr 16 23:57:14.166442 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Apr 16 23:57:14.168219 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Apr 16 23:57:14.168645 systemd[1]: Stopped target network-pre.target - Preparation for Network. Apr 16 23:57:14.169395 systemd[1]: systemd-networkd.socket: Deactivated successfully. Apr 16 23:57:14.169436 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Apr 16 23:57:14.171195 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Apr 16 23:57:14.171536 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Apr 16 23:57:14.171578 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 16 23:57:14.173225 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 16 23:57:14.173264 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 16 23:57:14.176251 systemd[1]: systemd-modules-load.service: Deactivated successfully. Apr 16 23:57:14.176291 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Apr 16 23:57:14.176736 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Apr 16 23:57:14.176772 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. 
Apr 16 23:57:14.177438 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 16 23:57:14.181288 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Apr 16 23:57:14.181366 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Apr 16 23:57:14.189023 systemd[1]: sysroot-boot.service: Deactivated successfully. Apr 16 23:57:14.194407 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Apr 16 23:57:14.195030 systemd[1]: initrd-setup-root.service: Deactivated successfully. Apr 16 23:57:14.195067 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Apr 16 23:57:14.196277 systemd[1]: network-cleanup.service: Deactivated successfully. Apr 16 23:57:14.196373 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Apr 16 23:57:14.198584 systemd[1]: systemd-udevd.service: Deactivated successfully. Apr 16 23:57:14.198736 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 16 23:57:14.199557 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Apr 16 23:57:14.199595 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Apr 16 23:57:14.199980 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Apr 16 23:57:14.200007 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Apr 16 23:57:14.200650 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Apr 16 23:57:14.200697 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Apr 16 23:57:14.201653 systemd[1]: dracut-cmdline.service: Deactivated successfully. Apr 16 23:57:14.201690 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Apr 16 23:57:14.202712 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 16 23:57:14.202751 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Apr 16 23:57:14.205198 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Apr 16 23:57:14.205550 systemd[1]: systemd-network-generator.service: Deactivated successfully. Apr 16 23:57:14.205591 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Apr 16 23:57:14.207214 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Apr 16 23:57:14.207254 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 16 23:57:14.208403 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Apr 16 23:57:14.208774 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 16 23:57:14.209657 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Apr 16 23:57:14.209696 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Apr 16 23:57:14.210776 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 16 23:57:14.210816 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 16 23:57:14.213005 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Apr 16 23:57:14.213192 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully. Apr 16 23:57:14.213228 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Apr 16 23:57:14.213263 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Apr 16 23:57:14.219369 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Apr 16 23:57:14.219475 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Apr 16 23:57:14.220400 systemd[1]: Reached target initrd-switch-root.target - Switch Root. 
Apr 16 23:57:14.221366 systemd[1]: Starting initrd-switch-root.service - Switch Root... Apr 16 23:57:14.238259 systemd[1]: Switching root. Apr 16 23:57:14.277891 systemd-journald[200]: Journal stopped Apr 16 23:57:15.361463 systemd-journald[200]: Received SIGTERM from PID 1 (systemd). Apr 16 23:57:15.361532 kernel: SELinux: policy capability network_peer_controls=1 Apr 16 23:57:15.361548 kernel: SELinux: policy capability open_perms=1 Apr 16 23:57:15.361557 kernel: SELinux: policy capability extended_socket_class=1 Apr 16 23:57:15.361569 kernel: SELinux: policy capability always_check_network=0 Apr 16 23:57:15.361582 kernel: SELinux: policy capability cgroup_seclabel=1 Apr 16 23:57:15.361591 kernel: SELinux: policy capability nnp_nosuid_transition=1 Apr 16 23:57:15.361600 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Apr 16 23:57:15.361608 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Apr 16 23:57:15.361618 kernel: SELinux: policy capability userspace_initial_context=0 Apr 16 23:57:15.361627 kernel: audit: type=1403 audit(1776383834.403:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Apr 16 23:57:15.361636 systemd[1]: Successfully loaded SELinux policy in 52.676ms. Apr 16 23:57:15.361652 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 15.844ms. Apr 16 23:57:15.361662 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Apr 16 23:57:15.361673 systemd[1]: Detected virtualization kvm. Apr 16 23:57:15.361682 systemd[1]: Detected architecture x86-64. Apr 16 23:57:15.361691 systemd[1]: Detected first boot. Apr 16 23:57:15.361704 systemd[1]: Hostname set to . Apr 16 23:57:15.361712 systemd[1]: Initializing machine ID from VM UUID. 
Apr 16 23:57:15.361724 zram_generator::config[1149]: No configuration found. Apr 16 23:57:15.361734 kernel: Guest personality initialized and is inactive Apr 16 23:57:15.361747 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Apr 16 23:57:15.361756 kernel: Initialized host personality Apr 16 23:57:15.361764 kernel: NET: Registered PF_VSOCK protocol family Apr 16 23:57:15.361772 systemd[1]: Populated /etc with preset unit settings. Apr 16 23:57:15.361784 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Apr 16 23:57:15.361797 systemd[1]: initrd-switch-root.service: Deactivated successfully. Apr 16 23:57:15.361806 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Apr 16 23:57:15.361815 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Apr 16 23:57:15.361828 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Apr 16 23:57:15.361836 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Apr 16 23:57:15.361850 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Apr 16 23:57:15.361859 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Apr 16 23:57:15.361870 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Apr 16 23:57:15.361879 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Apr 16 23:57:15.361888 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Apr 16 23:57:15.361897 systemd[1]: Created slice user.slice - User and Session Slice. Apr 16 23:57:15.361906 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 16 23:57:15.361916 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Apr 16 23:57:15.361924 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Apr 16 23:57:15.361941 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Apr 16 23:57:15.361952 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Apr 16 23:57:15.361961 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 16 23:57:15.362350 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Apr 16 23:57:15.362365 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 16 23:57:15.362375 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 16 23:57:15.362384 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Apr 16 23:57:15.362393 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Apr 16 23:57:15.362405 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Apr 16 23:57:15.362414 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Apr 16 23:57:15.362423 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 16 23:57:15.362431 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 16 23:57:15.362440 systemd[1]: Reached target slices.target - Slice Units. Apr 16 23:57:15.362449 systemd[1]: Reached target swap.target - Swaps. Apr 16 23:57:15.362458 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Apr 16 23:57:15.362467 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Apr 16 23:57:15.362477 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Apr 16 23:57:15.362488 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Apr 16 23:57:15.362498 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 16 23:57:15.362507 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 16 23:57:15.362516 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Apr 16 23:57:15.362525 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Apr 16 23:57:15.362533 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Apr 16 23:57:15.362542 systemd[1]: Mounting media.mount - External Media Directory... Apr 16 23:57:15.362551 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 16 23:57:15.362560 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Apr 16 23:57:15.362571 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Apr 16 23:57:15.362580 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Apr 16 23:57:15.362590 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Apr 16 23:57:15.362598 systemd[1]: Reached target machines.target - Containers. Apr 16 23:57:15.362607 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Apr 16 23:57:15.362616 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 16 23:57:15.362626 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 16 23:57:15.362634 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Apr 16 23:57:15.362643 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 16 23:57:15.362654 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
Apr 16 23:57:15.362664 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 16 23:57:15.362848 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Apr 16 23:57:15.362858 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 16 23:57:15.362867 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Apr 16 23:57:15.362877 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Apr 16 23:57:15.362886 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Apr 16 23:57:15.362895 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Apr 16 23:57:15.362907 systemd[1]: Stopped systemd-fsck-usr.service. Apr 16 23:57:15.362918 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Apr 16 23:57:15.362942 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 16 23:57:15.362951 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 16 23:57:15.362961 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Apr 16 23:57:15.362970 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Apr 16 23:57:15.362981 kernel: loop: module loaded Apr 16 23:57:15.362990 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Apr 16 23:57:15.362999 kernel: fuse: init (API version 7.41) Apr 16 23:57:15.363008 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 16 23:57:15.363017 systemd[1]: verity-setup.service: Deactivated successfully. Apr 16 23:57:15.363028 systemd[1]: Stopped verity-setup.service. 
Apr 16 23:57:15.363037 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 16 23:57:15.363046 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Apr 16 23:57:15.363055 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Apr 16 23:57:15.363064 systemd[1]: Mounted media.mount - External Media Directory. Apr 16 23:57:15.363072 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Apr 16 23:57:15.363102 systemd-journald[1237]: Collecting audit messages is disabled. Apr 16 23:57:15.363147 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Apr 16 23:57:15.363156 kernel: ACPI: bus type drm_connector registered Apr 16 23:57:15.363166 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Apr 16 23:57:15.363175 systemd-journald[1237]: Journal started Apr 16 23:57:15.363192 systemd-journald[1237]: Runtime Journal (/run/log/journal/5b31e3e9938848f597aa3aed9d6ee35c) is 8M, max 76.1M, 68.1M free. Apr 16 23:57:15.034665 systemd[1]: Queued start job for default target multi-user.target. Apr 16 23:57:15.047727 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Apr 16 23:57:15.048296 systemd[1]: systemd-journald.service: Deactivated successfully. Apr 16 23:57:15.365181 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Apr 16 23:57:15.369162 systemd[1]: Started systemd-journald.service - Journal Service. Apr 16 23:57:15.371162 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 16 23:57:15.371800 systemd[1]: modprobe@configfs.service: Deactivated successfully. Apr 16 23:57:15.372091 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Apr 16 23:57:15.372765 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Apr 16 23:57:15.372994 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 16 23:57:15.373673 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 16 23:57:15.373894 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 16 23:57:15.374559 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 16 23:57:15.374777 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 16 23:57:15.375679 systemd[1]: modprobe@fuse.service: Deactivated successfully. Apr 16 23:57:15.375887 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Apr 16 23:57:15.376555 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 16 23:57:15.376771 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 16 23:57:15.377581 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 16 23:57:15.378260 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Apr 16 23:57:15.378919 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Apr 16 23:57:15.379678 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Apr 16 23:57:15.391103 systemd[1]: Reached target network-pre.target - Preparation for Network. Apr 16 23:57:15.395188 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Apr 16 23:57:15.397244 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Apr 16 23:57:15.397653 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Apr 16 23:57:15.397710 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 16 23:57:15.398806 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. 
Apr 16 23:57:15.406182 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Apr 16 23:57:15.406637 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 16 23:57:15.409214 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Apr 16 23:57:15.411423 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Apr 16 23:57:15.411780 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 16 23:57:15.413262 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Apr 16 23:57:15.413631 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 16 23:57:15.416963 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 16 23:57:15.420335 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Apr 16 23:57:15.431971 systemd-journald[1237]: Time spent on flushing to /var/log/journal/5b31e3e9938848f597aa3aed9d6ee35c is 29.299ms for 1249 entries. Apr 16 23:57:15.431971 systemd-journald[1237]: System Journal (/var/log/journal/5b31e3e9938848f597aa3aed9d6ee35c) is 8M, max 584.8M, 576.8M free. Apr 16 23:57:15.493448 systemd-journald[1237]: Received client request to flush runtime journal. Apr 16 23:57:15.493492 kernel: loop0: detected capacity change from 0 to 228704 Apr 16 23:57:15.427229 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 16 23:57:15.429911 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Apr 16 23:57:15.430347 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. 
Apr 16 23:57:15.467158 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Apr 16 23:57:15.467726 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Apr 16 23:57:15.469482 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Apr 16 23:57:15.480266 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 16 23:57:15.499444 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Apr 16 23:57:15.518138 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Apr 16 23:57:15.526479 systemd-tmpfiles[1275]: ACLs are not supported, ignoring. Apr 16 23:57:15.526492 systemd-tmpfiles[1275]: ACLs are not supported, ignoring. Apr 16 23:57:15.532964 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Apr 16 23:57:15.542622 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 16 23:57:15.546313 systemd[1]: Starting systemd-sysusers.service - Create System Users... Apr 16 23:57:15.549833 kernel: loop1: detected capacity change from 0 to 128560 Apr 16 23:57:15.583721 kernel: loop2: detected capacity change from 0 to 8 Apr 16 23:57:15.599022 systemd[1]: Finished systemd-sysusers.service - Create System Users. Apr 16 23:57:15.602217 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 16 23:57:15.613519 kernel: loop3: detected capacity change from 0 to 110984 Apr 16 23:57:15.622982 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 16 23:57:15.627679 systemd-tmpfiles[1298]: ACLs are not supported, ignoring. Apr 16 23:57:15.628034 systemd-tmpfiles[1298]: ACLs are not supported, ignoring. Apr 16 23:57:15.635996 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Apr 16 23:57:15.649144 kernel: loop4: detected capacity change from 0 to 228704 Apr 16 23:57:15.667155 kernel: loop5: detected capacity change from 0 to 128560 Apr 16 23:57:15.681137 kernel: loop6: detected capacity change from 0 to 8 Apr 16 23:57:15.685255 kernel: loop7: detected capacity change from 0 to 110984 Apr 16 23:57:15.708825 (sd-merge)[1303]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'. Apr 16 23:57:15.709697 (sd-merge)[1303]: Merged extensions into '/usr'. Apr 16 23:57:15.713257 systemd[1]: Reload requested from client PID 1274 ('systemd-sysext') (unit systemd-sysext.service)... Apr 16 23:57:15.713338 systemd[1]: Reloading... Apr 16 23:57:15.790832 zram_generator::config[1329]: No configuration found. Apr 16 23:57:15.890319 ldconfig[1269]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Apr 16 23:57:15.953772 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Apr 16 23:57:15.954269 systemd[1]: Reloading finished in 240 ms. Apr 16 23:57:15.968640 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Apr 16 23:57:15.969558 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Apr 16 23:57:15.970206 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Apr 16 23:57:15.982339 systemd[1]: Starting ensure-sysext.service... Apr 16 23:57:15.985201 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 16 23:57:15.986575 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 16 23:57:15.995784 systemd[1]: Reload requested from client PID 1373 ('systemctl') (unit ensure-sysext.service)... Apr 16 23:57:15.995796 systemd[1]: Reloading... Apr 16 23:57:16.012434 systemd-tmpfiles[1374]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. 
Apr 16 23:57:16.012460 systemd-tmpfiles[1374]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Apr 16 23:57:16.012686 systemd-tmpfiles[1374]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Apr 16 23:57:16.012890 systemd-tmpfiles[1374]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Apr 16 23:57:16.013637 systemd-tmpfiles[1374]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Apr 16 23:57:16.013827 systemd-tmpfiles[1374]: ACLs are not supported, ignoring. Apr 16 23:57:16.013880 systemd-tmpfiles[1374]: ACLs are not supported, ignoring. Apr 16 23:57:16.020129 systemd-tmpfiles[1374]: Detected autofs mount point /boot during canonicalization of boot. Apr 16 23:57:16.020204 systemd-tmpfiles[1374]: Skipping /boot Apr 16 23:57:16.032979 systemd-tmpfiles[1374]: Detected autofs mount point /boot during canonicalization of boot. Apr 16 23:57:16.034211 systemd-tmpfiles[1374]: Skipping /boot Apr 16 23:57:16.036066 systemd-udevd[1375]: Using default interface naming scheme 'v255'. Apr 16 23:57:16.056138 zram_generator::config[1396]: No configuration found. Apr 16 23:57:16.269326 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Apr 16 23:57:16.269552 systemd[1]: Reloading finished in 273 ms. Apr 16 23:57:16.278962 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 16 23:57:16.279628 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 16 23:57:16.286308 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input5 Apr 16 23:57:16.298343 kernel: mousedev: PS/2 mouse device common for all mice Apr 16 23:57:16.301264 systemd[1]: Starting audit-rules.service - Load Audit Rules... 
Apr 16 23:57:16.306032 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Apr 16 23:57:16.311364 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Apr 16 23:57:16.314212 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 16 23:57:16.317523 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 16 23:57:16.318840 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Apr 16 23:57:16.323679 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 16 23:57:16.323836 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 16 23:57:16.326189 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 16 23:57:16.334073 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 16 23:57:16.337641 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 16 23:57:16.338083 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 16 23:57:16.338191 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Apr 16 23:57:16.338250 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 16 23:57:16.341340 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Apr 16 23:57:16.341478 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 16 23:57:16.341599 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 16 23:57:16.341659 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Apr 16 23:57:16.344458 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Apr 16 23:57:16.344801 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 16 23:57:16.348877 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 16 23:57:16.349054 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 16 23:57:16.355127 kernel: ACPI: button: Power Button [PWRF] Apr 16 23:57:16.355366 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 16 23:57:16.356255 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 16 23:57:16.356339 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Apr 16 23:57:16.356429 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 16 23:57:16.359779 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Apr 16 23:57:16.359965 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 16 23:57:16.369470 systemd[1]: Finished ensure-sysext.service. Apr 16 23:57:16.377517 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Apr 16 23:57:16.378993 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Apr 16 23:57:16.389315 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Apr 16 23:57:16.391422 systemd[1]: Starting systemd-update-done.service - Update is Completed... Apr 16 23:57:16.405996 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 16 23:57:16.406812 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 16 23:57:16.407932 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 16 23:57:16.411791 systemd[1]: Finished systemd-update-done.service - Update is Completed. Apr 16 23:57:16.420854 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 16 23:57:16.422828 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 16 23:57:16.424775 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 16 23:57:16.429148 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 16 23:57:16.429325 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 16 23:57:16.462615 augenrules[1529]: No rules Apr 16 23:57:16.463737 systemd[1]: audit-rules.service: Deactivated successfully. Apr 16 23:57:16.465173 systemd[1]: Finished audit-rules.service - Load Audit Rules. Apr 16 23:57:16.474607 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Apr 16 23:57:16.475281 systemd[1]: Started systemd-userdbd.service - User Database Manager. 
Apr 16 23:57:16.478803 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 16 23:57:16.501834 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Apr 16 23:57:16.502201 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Apr 16 23:57:16.502366 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Apr 16 23:57:16.521528 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. Apr 16 23:57:16.521578 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 16 23:57:16.521691 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 16 23:57:16.524323 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 16 23:57:16.531631 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 16 23:57:16.533880 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 16 23:57:16.534274 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 16 23:57:16.534304 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Apr 16 23:57:16.534323 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
Apr 16 23:57:16.534333 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 16 23:57:16.547763 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 16 23:57:16.552400 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 16 23:57:16.553243 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 16 23:57:16.553422 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 16 23:57:16.553956 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 16 23:57:16.554131 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 16 23:57:16.559393 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 16 23:57:16.559445 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 16 23:57:16.572758 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Apr 16 23:57:16.578381 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Apr 16 23:57:16.592995 kernel: EDAC MC: Ver: 3.0.0
Apr 16 23:57:16.614376 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 16 23:57:16.623254 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Apr 16 23:57:16.646736 kernel: [drm] pci: virtio-vga detected at 0000:00:01.0
Apr 16 23:57:16.646805 kernel: Console: switching to colour dummy device 80x25
Apr 16 23:57:16.648129 kernel: virtio-pci 0000:00:01.0: vgaarb: deactivate vga console
Apr 16 23:57:16.650511 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Apr 16 23:57:16.650542 kernel: [drm] features: -context_init
Apr 16 23:57:16.653129 kernel: [drm] number of scanouts: 1
Apr 16 23:57:16.656264 kernel: [drm] number of cap sets: 0
Apr 16 23:57:16.656290 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:01.0 on minor 0
Apr 16 23:57:16.660055 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Apr 16 23:57:16.660095 kernel: Console: switching to colour frame buffer device 160x50
Apr 16 23:57:16.662288 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 16 23:57:16.662474 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 16 23:57:16.664134 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Apr 16 23:57:16.673379 systemd-networkd[1489]: lo: Link UP
Apr 16 23:57:16.673392 systemd-networkd[1489]: lo: Gained carrier
Apr 16 23:57:16.676486 systemd-networkd[1489]: Enumeration completed
Apr 16 23:57:16.676955 systemd-networkd[1489]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 16 23:57:16.676965 systemd-networkd[1489]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 16 23:57:16.678200 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Apr 16 23:57:16.678331 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 16 23:57:16.680267 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Apr 16 23:57:16.682541 systemd-networkd[1489]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 16 23:57:16.682552 systemd-networkd[1489]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 16 23:57:16.682935 systemd-networkd[1489]: eth0: Link UP
Apr 16 23:57:16.683094 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Apr 16 23:57:16.684513 systemd-networkd[1489]: eth0: Gained carrier
Apr 16 23:57:16.684525 systemd-networkd[1489]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 16 23:57:16.684908 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 16 23:57:16.690344 systemd-networkd[1489]: eth1: Link UP
Apr 16 23:57:16.693924 systemd-networkd[1489]: eth1: Gained carrier
Apr 16 23:57:16.693951 systemd-networkd[1489]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 16 23:57:16.695990 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 16 23:57:16.697204 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 16 23:57:16.710285 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 16 23:57:16.718602 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Apr 16 23:57:16.729167 systemd-networkd[1489]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1
Apr 16 23:57:16.744160 systemd-networkd[1489]: eth0: DHCPv4 address 77.42.22.14/32, gateway 172.31.1.1 acquired from 172.31.1.1
Apr 16 23:57:16.759652 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Apr 16 23:57:16.761574 systemd[1]: Reached target time-set.target - System Time Set.
Apr 16 23:57:16.782327 systemd-resolved[1491]: Positive Trust Anchors:
Apr 16 23:57:16.782342 systemd-resolved[1491]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 16 23:57:16.782363 systemd-resolved[1491]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 16 23:57:16.786352 systemd-resolved[1491]: Using system hostname 'ci-4459-2-4-n-3f94367fd3'.
Apr 16 23:57:16.788136 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 16 23:57:16.788262 systemd[1]: Reached target network.target - Network.
Apr 16 23:57:16.788313 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 16 23:57:16.812556 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 16 23:57:16.812809 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 16 23:57:16.812955 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Apr 16 23:57:16.813036 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Apr 16 23:57:16.813102 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Apr 16 23:57:16.814103 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Apr 16 23:57:16.814275 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Apr 16 23:57:16.814335 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Apr 16 23:57:16.814387 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Apr 16 23:57:16.814410 systemd[1]: Reached target paths.target - Path Units.
Apr 16 23:57:16.814453 systemd[1]: Reached target timers.target - Timer Units.
Apr 16 23:57:16.816252 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Apr 16 23:57:16.819085 systemd[1]: Starting docker.socket - Docker Socket for the API...
Apr 16 23:57:16.821907 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Apr 16 23:57:16.824436 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Apr 16 23:57:16.827434 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Apr 16 23:57:16.836765 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Apr 16 23:57:16.837918 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Apr 16 23:57:16.841716 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Apr 16 23:57:16.844099 systemd[1]: Reached target sockets.target - Socket Units.
Apr 16 23:57:16.844823 systemd[1]: Reached target basic.target - Basic System.
Apr 16 23:57:16.847762 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Apr 16 23:57:16.847791 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Apr 16 23:57:16.848659 systemd[1]: Starting containerd.service - containerd container runtime...
Apr 16 23:57:16.852774 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Apr 16 23:57:16.859244 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Apr 16 23:57:16.862019 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 16 23:57:16.864182 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Apr 16 23:57:16.867150 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Apr 16 23:57:16.869726 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Apr 16 23:57:16.874938 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Apr 16 23:57:16.879859 jq[1593]: false
Apr 16 23:57:16.881602 systemd-timesyncd[1508]: Contacted time server 37.221.199.157:123 (0.flatcar.pool.ntp.org).
Apr 16 23:57:16.881832 systemd-timesyncd[1508]: Initial clock synchronization to Thu 2026-04-16 23:57:16.798444 UTC.
Apr 16 23:57:16.882240 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Apr 16 23:57:16.886737 coreos-metadata[1588]: Apr 16 23:57:16.886 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1
Apr 16 23:57:16.886188 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Apr 16 23:57:16.887089 coreos-metadata[1588]: Apr 16 23:57:16.887 INFO Fetch successful
Apr 16 23:57:16.887324 coreos-metadata[1588]: Apr 16 23:57:16.887 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1
Apr 16 23:57:16.888886 coreos-metadata[1588]: Apr 16 23:57:16.887 INFO Fetch successful
Apr 16 23:57:16.892277 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent.
Apr 16 23:57:16.895932 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Apr 16 23:57:16.897951 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Apr 16 23:57:16.905122 systemd[1]: Starting systemd-logind.service - User Login Management...
Apr 16 23:57:16.909187 google_oslogin_nss_cache[1595]: oslogin_cache_refresh[1595]: Refreshing passwd entry cache
Apr 16 23:57:16.908894 oslogin_cache_refresh[1595]: Refreshing passwd entry cache
Apr 16 23:57:16.910312 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Apr 16 23:57:16.910682 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Apr 16 23:57:16.913225 systemd[1]: Starting update-engine.service - Update Engine...
Apr 16 23:57:16.915311 google_oslogin_nss_cache[1595]: oslogin_cache_refresh[1595]: Failure getting users, quitting
Apr 16 23:57:16.917552 oslogin_cache_refresh[1595]: Failure getting users, quitting
Apr 16 23:57:16.921367 google_oslogin_nss_cache[1595]: oslogin_cache_refresh[1595]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Apr 16 23:57:16.921367 google_oslogin_nss_cache[1595]: oslogin_cache_refresh[1595]: Refreshing group entry cache
Apr 16 23:57:16.917987 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Apr 16 23:57:16.917573 oslogin_cache_refresh[1595]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Apr 16 23:57:16.917607 oslogin_cache_refresh[1595]: Refreshing group entry cache
Apr 16 23:57:16.923126 google_oslogin_nss_cache[1595]: oslogin_cache_refresh[1595]: Failure getting groups, quitting
Apr 16 23:57:16.923126 google_oslogin_nss_cache[1595]: oslogin_cache_refresh[1595]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Apr 16 23:57:16.921929 oslogin_cache_refresh[1595]: Failure getting groups, quitting
Apr 16 23:57:16.921938 oslogin_cache_refresh[1595]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Apr 16 23:57:16.928154 extend-filesystems[1594]: Found /dev/sda6
Apr 16 23:57:16.933804 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 16 23:57:16.934473 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Apr 16 23:57:16.934661 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Apr 16 23:57:16.934916 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Apr 16 23:57:16.935095 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Apr 16 23:57:16.942454 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Apr 16 23:57:16.942649 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Apr 16 23:57:16.954959 extend-filesystems[1594]: Found /dev/sda9
Apr 16 23:57:16.958777 systemd[1]: motdgen.service: Deactivated successfully.
Apr 16 23:57:16.959464 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Apr 16 23:57:16.968438 extend-filesystems[1594]: Checking size of /dev/sda9
Apr 16 23:57:16.970627 jq[1609]: true
Apr 16 23:57:16.997424 (ntainerd)[1631]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Apr 16 23:57:16.999015 tar[1620]: linux-amd64/LICENSE
Apr 16 23:57:17.005148 update_engine[1607]: I20260416 23:57:17.002719 1607 main.cc:92] Flatcar Update Engine starting
Apr 16 23:57:17.008166 tar[1620]: linux-amd64/helm
Apr 16 23:57:17.008208 extend-filesystems[1594]: Resized partition /dev/sda9
Apr 16 23:57:17.010759 dbus-daemon[1589]: [system] SELinux support is enabled
Apr 16 23:57:17.010888 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Apr 16 23:57:17.019573 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Apr 16 23:57:17.020764 extend-filesystems[1644]: resize2fs 1.47.3 (8-Jul-2025)
Apr 16 23:57:17.019599 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Apr 16 23:57:17.023638 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Apr 16 23:57:17.023653 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Apr 16 23:57:17.039212 update_engine[1607]: I20260416 23:57:17.035187 1607 update_check_scheduler.cc:74] Next update check in 4m23s
Apr 16 23:57:17.036146 systemd[1]: Started update-engine.service - Update Engine.
Apr 16 23:57:17.048136 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 19393531 blocks
Apr 16 23:57:17.048204 jq[1633]: true
Apr 16 23:57:17.053027 systemd-logind[1606]: New seat seat0.
Apr 16 23:57:17.055576 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Apr 16 23:57:17.065452 systemd-logind[1606]: Watching system buttons on /dev/input/event3 (Power Button)
Apr 16 23:57:17.065472 systemd-logind[1606]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Apr 16 23:57:17.065620 systemd[1]: Started systemd-logind.service - User Login Management.
Apr 16 23:57:17.097795 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Apr 16 23:57:17.105031 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Apr 16 23:57:17.184859 bash[1670]: Updated "/home/core/.ssh/authorized_keys"
Apr 16 23:57:17.185135 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Apr 16 23:57:17.193184 systemd[1]: Starting sshkeys.service...
Apr 16 23:57:17.226475 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Apr 16 23:57:17.231021 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Apr 16 23:57:17.319462 containerd[1631]: time="2026-04-16T23:57:17Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Apr 16 23:57:17.322907 containerd[1631]: time="2026-04-16T23:57:17.322819659Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7
Apr 16 23:57:17.340144 containerd[1631]: time="2026-04-16T23:57:17.339910590Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="6.146µs"
Apr 16 23:57:17.340144 containerd[1631]: time="2026-04-16T23:57:17.340026850Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Apr 16 23:57:17.340144 containerd[1631]: time="2026-04-16T23:57:17.340043951Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Apr 16 23:57:17.340969 containerd[1631]: time="2026-04-16T23:57:17.340954316Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Apr 16 23:57:17.341201 containerd[1631]: time="2026-04-16T23:57:17.341189478Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Apr 16 23:57:17.341260 containerd[1631]: time="2026-04-16T23:57:17.341242383Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Apr 16 23:57:17.341392 containerd[1631]: time="2026-04-16T23:57:17.341379672Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Apr 16 23:57:17.341721 containerd[1631]: time="2026-04-16T23:57:17.341675606Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Apr 16 23:57:17.342228 containerd[1631]: time="2026-04-16T23:57:17.342212660Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Apr 16 23:57:17.343909 containerd[1631]: time="2026-04-16T23:57:17.342341330Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Apr 16 23:57:17.343909 containerd[1631]: time="2026-04-16T23:57:17.342367634Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Apr 16 23:57:17.343909 containerd[1631]: time="2026-04-16T23:57:17.342374729Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Apr 16 23:57:17.343909 containerd[1631]: time="2026-04-16T23:57:17.343179513Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Apr 16 23:57:17.343909 containerd[1631]: time="2026-04-16T23:57:17.343882169Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Apr 16 23:57:17.344167 containerd[1631]: time="2026-04-16T23:57:17.344021754Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Apr 16 23:57:17.344167 containerd[1631]: time="2026-04-16T23:57:17.344032976Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Apr 16 23:57:17.344167 containerd[1631]: time="2026-04-16T23:57:17.344052768Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Apr 16 23:57:17.344715 containerd[1631]: time="2026-04-16T23:57:17.344576492Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Apr 16 23:57:17.345031 containerd[1631]: time="2026-04-16T23:57:17.344801758Z" level=info msg="metadata content store policy set" policy=shared
Apr 16 23:57:17.348508 sshd_keygen[1617]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Apr 16 23:57:17.352045 coreos-metadata[1675]: Apr 16 23:57:17.351 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1
Apr 16 23:57:17.355289 coreos-metadata[1675]: Apr 16 23:57:17.352 INFO Fetch successful
Apr 16 23:57:17.356170 locksmithd[1646]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Apr 16 23:57:17.363515 unknown[1675]: wrote ssh authorized keys file for user: core
Apr 16 23:57:17.370454 containerd[1631]: time="2026-04-16T23:57:17.370432503Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Apr 16 23:57:17.372040 containerd[1631]: time="2026-04-16T23:57:17.370870328Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Apr 16 23:57:17.372040 containerd[1631]: time="2026-04-16T23:57:17.370890021Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Apr 16 23:57:17.372040 containerd[1631]: time="2026-04-16T23:57:17.370898601Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Apr 16 23:57:17.372040 containerd[1631]: time="2026-04-16T23:57:17.370907458Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Apr 16 23:57:17.372040 containerd[1631]: time="2026-04-16T23:57:17.370969240Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Apr 16 23:57:17.372040 containerd[1631]: time="2026-04-16T23:57:17.370976781Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Apr 16 23:57:17.372040 containerd[1631]: time="2026-04-16T23:57:17.370985519Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Apr 16 23:57:17.372040 containerd[1631]: time="2026-04-16T23:57:17.371004104Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Apr 16 23:57:17.372040 containerd[1631]: time="2026-04-16T23:57:17.371011190Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Apr 16 23:57:17.372040 containerd[1631]: time="2026-04-16T23:57:17.371017790Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Apr 16 23:57:17.372040 containerd[1631]: time="2026-04-16T23:57:17.371028439Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Apr 16 23:57:17.372040 containerd[1631]: time="2026-04-16T23:57:17.371150508Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Apr 16 23:57:17.372040 containerd[1631]: time="2026-04-16T23:57:17.371168360Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Apr 16 23:57:17.372040 containerd[1631]: time="2026-04-16T23:57:17.371178098Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Apr 16 23:57:17.372342 containerd[1631]: time="2026-04-16T23:57:17.371185105Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Apr 16 23:57:17.372342 containerd[1631]: time="2026-04-16T23:57:17.371192497Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Apr 16 23:57:17.372342 containerd[1631]: time="2026-04-16T23:57:17.371199840Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Apr 16 23:57:17.372342 containerd[1631]: time="2026-04-16T23:57:17.371213506Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Apr 16 23:57:17.372342 containerd[1631]: time="2026-04-16T23:57:17.371223927Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Apr 16 23:57:17.372342 containerd[1631]: time="2026-04-16T23:57:17.371232210Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Apr 16 23:57:17.372342 containerd[1631]: time="2026-04-16T23:57:17.371243333Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Apr 16 23:57:17.372342 containerd[1631]: time="2026-04-16T23:57:17.371249895Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Apr 16 23:57:17.372342 containerd[1631]: time="2026-04-16T23:57:17.371288638Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Apr 16 23:57:17.372342 containerd[1631]: time="2026-04-16T23:57:17.371297693Z" level=info msg="Start snapshots syncer"
Apr 16 23:57:17.372342 containerd[1631]: time="2026-04-16T23:57:17.371309321Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Apr 16 23:57:17.372495 containerd[1631]: time="2026-04-16T23:57:17.371480841Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Apr 16 23:57:17.372495 containerd[1631]: time="2026-04-16T23:57:17.371514597Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Apr 16 23:57:17.372587 containerd[1631]: time="2026-04-16T23:57:17.371544404Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Apr 16 23:57:17.372587 containerd[1631]: time="2026-04-16T23:57:17.371622138Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Apr 16 23:57:17.372587 containerd[1631]: time="2026-04-16T23:57:17.371635359Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Apr 16 23:57:17.372587 containerd[1631]: time="2026-04-16T23:57:17.371642831Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Apr 16 23:57:17.372587 containerd[1631]: time="2026-04-16T23:57:17.371649946Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Apr 16 23:57:17.372587 containerd[1631]: time="2026-04-16T23:57:17.371657893Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Apr 16 23:57:17.372587 containerd[1631]: time="2026-04-16T23:57:17.371665503Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Apr 16 23:57:17.372587 containerd[1631]: time="2026-04-16T23:57:17.371673192Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Apr 16 23:57:17.372587 containerd[1631]: time="2026-04-16T23:57:17.371695983Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Apr 16 23:57:17.372587 containerd[1631]: time="2026-04-16T23:57:17.371703751Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Apr 16 23:57:17.372587 containerd[1631]: time="2026-04-16T23:57:17.371710689Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Apr 16 23:57:17.372587 containerd[1631]: time="2026-04-16T23:57:17.371742999Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Apr 16 23:57:17.372587 containerd[1631]: time="2026-04-16T23:57:17.371752302Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Apr 16 23:57:17.372587 containerd[1631]: time="2026-04-16T23:57:17.371758279Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Apr 16 23:57:17.372766 containerd[1631]: time="2026-04-16T23:57:17.371764504Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Apr 16 23:57:17.372766 containerd[1631]: time="2026-04-16T23:57:17.371770184Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Apr 16 23:57:17.372766 containerd[1631]: time="2026-04-16T23:57:17.371776498Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Apr 16 23:57:17.372766 containerd[1631]: time="2026-04-16T23:57:17.371788344Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Apr 16 23:57:17.372766 containerd[1631]: time="2026-04-16T23:57:17.371800734Z" level=info msg="runtime interface created"
Apr 16 23:57:17.372766 containerd[1631]: time="2026-04-16T23:57:17.371804702Z" level=info msg="created NRI interface"
Apr 16 23:57:17.372766 containerd[1631]: time="2026-04-16T23:57:17.371810897Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Apr 16 23:57:17.372766 containerd[1631]: time="2026-04-16T23:57:17.371818467Z" level=info msg="Connect containerd service"
Apr 16 23:57:17.372766 containerd[1631]: time="2026-04-16T23:57:17.371830897Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Apr 16 23:57:17.375149 containerd[1631]: time="2026-04-16T23:57:17.374396036Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 16 23:57:17.376847 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Apr 16 23:57:17.391564 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Apr 16 23:57:17.395316 systemd[1]: Starting issuegen.service - Generate /run/issue...
Apr 16 23:57:17.400230 systemd[1]: Started sshd@0-77.42.22.14:22-4.175.71.9:60768.service - OpenSSH per-connection server daemon (4.175.71.9:60768).
Apr 16 23:57:17.417141 kernel: EXT4-fs (sda9): resized filesystem to 19393531
Apr 16 23:57:17.426962 systemd[1]: issuegen.service: Deactivated successfully.
Apr 16 23:57:17.427320 systemd[1]: Finished issuegen.service - Generate /run/issue.
Apr 16 23:57:17.436798 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Apr 16 23:57:17.457053 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Apr 16 23:57:17.461549 systemd[1]: Started getty@tty1.service - Getty on tty1.
Apr 16 23:57:17.466193 extend-filesystems[1644]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Apr 16 23:57:17.466193 extend-filesystems[1644]: old_desc_blocks = 1, new_desc_blocks = 10
Apr 16 23:57:17.466193 extend-filesystems[1644]: The filesystem on /dev/sda9 is now 19393531 (4k) blocks long.
Apr 16 23:57:17.485650 extend-filesystems[1594]: Resized filesystem in /dev/sda9
Apr 16 23:57:17.474139 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Apr 16 23:57:17.489403 update-ssh-keys[1697]: Updated "/home/core/.ssh/authorized_keys"
Apr 16 23:57:17.479703 systemd[1]: Reached target getty.target - Login Prompts.
Apr 16 23:57:17.484536 systemd[1]: extend-filesystems.service: Deactivated successfully. Apr 16 23:57:17.484742 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Apr 16 23:57:17.489571 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Apr 16 23:57:17.493975 systemd[1]: Finished sshkeys.service. Apr 16 23:57:17.500593 containerd[1631]: time="2026-04-16T23:57:17.500561273Z" level=info msg="Start subscribing containerd event" Apr 16 23:57:17.500641 containerd[1631]: time="2026-04-16T23:57:17.500602282Z" level=info msg="Start recovering state" Apr 16 23:57:17.500712 containerd[1631]: time="2026-04-16T23:57:17.500684668Z" level=info msg="Start event monitor" Apr 16 23:57:17.500728 containerd[1631]: time="2026-04-16T23:57:17.500721135Z" level=info msg="Start cni network conf syncer for default" Apr 16 23:57:17.500744 containerd[1631]: time="2026-04-16T23:57:17.500729200Z" level=info msg="Start streaming server" Apr 16 23:57:17.500744 containerd[1631]: time="2026-04-16T23:57:17.500737592Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Apr 16 23:57:17.500744 containerd[1631]: time="2026-04-16T23:57:17.500743035Z" level=info msg="runtime interface starting up..." Apr 16 23:57:17.500790 containerd[1631]: time="2026-04-16T23:57:17.500747795Z" level=info msg="starting plugins..." Apr 16 23:57:17.500790 containerd[1631]: time="2026-04-16T23:57:17.500759255Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Apr 16 23:57:17.501222 containerd[1631]: time="2026-04-16T23:57:17.501103868Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 16 23:57:17.501285 containerd[1631]: time="2026-04-16T23:57:17.501270559Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Apr 16 23:57:17.501332 containerd[1631]: time="2026-04-16T23:57:17.501312875Z" level=info msg="containerd successfully booted in 0.183098s" Apr 16 23:57:17.501386 systemd[1]: Started containerd.service - containerd container runtime. Apr 16 23:57:17.569083 tar[1620]: linux-amd64/README.md Apr 16 23:57:17.587087 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Apr 16 23:57:17.616062 sshd[1701]: Accepted publickey for core from 4.175.71.9 port 60768 ssh2: RSA SHA256:s5+cDtbQjwWFdMS63Oi2OpDWd90LKgkj0MOmWTIERLg Apr 16 23:57:17.617724 sshd-session[1701]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 23:57:17.623218 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 16 23:57:17.626593 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 16 23:57:17.632944 systemd-logind[1606]: New session 1 of user core. Apr 16 23:57:17.642060 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 16 23:57:17.647320 systemd[1]: Starting user@500.service - User Manager for UID 500... Apr 16 23:57:17.662419 (systemd)[1730]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 16 23:57:17.664899 systemd-logind[1606]: New session c1 of user core. Apr 16 23:57:17.771670 systemd[1730]: Queued start job for default target default.target. Apr 16 23:57:17.782207 systemd[1730]: Created slice app.slice - User Application Slice. Apr 16 23:57:17.782230 systemd[1730]: Reached target paths.target - Paths. Apr 16 23:57:17.782266 systemd[1730]: Reached target timers.target - Timers. Apr 16 23:57:17.783607 systemd[1730]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 16 23:57:17.807206 systemd[1730]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 16 23:57:17.807370 systemd[1730]: Reached target sockets.target - Sockets. 
Apr 16 23:57:17.807441 systemd[1730]: Reached target basic.target - Basic System. Apr 16 23:57:17.807616 systemd[1730]: Reached target default.target - Main User Target. Apr 16 23:57:17.807697 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 16 23:57:17.807770 systemd[1730]: Startup finished in 137ms. Apr 16 23:57:17.826347 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 16 23:57:17.940762 systemd[1]: Started sshd@1-77.42.22.14:22-4.175.71.9:39748.service - OpenSSH per-connection server daemon (4.175.71.9:39748). Apr 16 23:57:18.133388 systemd-networkd[1489]: eth0: Gained IPv6LL Apr 16 23:57:18.138169 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Apr 16 23:57:18.141493 systemd[1]: Reached target network-online.target - Network is Online. Apr 16 23:57:18.148687 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 16 23:57:18.154474 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Apr 16 23:57:18.160239 sshd[1741]: Accepted publickey for core from 4.175.71.9 port 39748 ssh2: RSA SHA256:s5+cDtbQjwWFdMS63Oi2OpDWd90LKgkj0MOmWTIERLg Apr 16 23:57:18.164074 sshd-session[1741]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 23:57:18.177489 systemd-logind[1606]: New session 2 of user core. Apr 16 23:57:18.184264 systemd[1]: Started session-2.scope - Session 2 of User core. Apr 16 23:57:18.213662 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Apr 16 23:57:18.261211 sshd[1751]: Connection closed by 4.175.71.9 port 39748 Apr 16 23:57:18.262437 sshd-session[1741]: pam_unix(sshd:session): session closed for user core Apr 16 23:57:18.267428 systemd[1]: sshd@1-77.42.22.14:22-4.175.71.9:39748.service: Deactivated successfully. Apr 16 23:57:18.269364 systemd[1]: session-2.scope: Deactivated successfully. Apr 16 23:57:18.271227 systemd-logind[1606]: Session 2 logged out. Waiting for processes to exit. 
Apr 16 23:57:18.272939 systemd-logind[1606]: Removed session 2. Apr 16 23:57:18.300022 systemd[1]: Started sshd@2-77.42.22.14:22-4.175.71.9:39760.service - OpenSSH per-connection server daemon (4.175.71.9:39760). Apr 16 23:57:18.453308 systemd-networkd[1489]: eth1: Gained IPv6LL Apr 16 23:57:18.483989 sshd[1762]: Accepted publickey for core from 4.175.71.9 port 39760 ssh2: RSA SHA256:s5+cDtbQjwWFdMS63Oi2OpDWd90LKgkj0MOmWTIERLg Apr 16 23:57:18.484840 sshd-session[1762]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 23:57:18.494154 systemd-logind[1606]: New session 3 of user core. Apr 16 23:57:18.502234 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 16 23:57:18.582695 sshd[1765]: Connection closed by 4.175.71.9 port 39760 Apr 16 23:57:18.583766 sshd-session[1762]: pam_unix(sshd:session): session closed for user core Apr 16 23:57:18.588603 systemd[1]: sshd@2-77.42.22.14:22-4.175.71.9:39760.service: Deactivated successfully. Apr 16 23:57:18.590649 systemd[1]: session-3.scope: Deactivated successfully. Apr 16 23:57:18.592724 systemd-logind[1606]: Session 3 logged out. Waiting for processes to exit. Apr 16 23:57:18.595450 systemd-logind[1606]: Removed session 3. Apr 16 23:57:18.972226 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 16 23:57:18.973240 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 16 23:57:18.975898 systemd[1]: Startup finished in 2.885s (kernel) + 5.752s (initrd) + 4.624s (userspace) = 13.261s. 
Apr 16 23:57:18.986267 (kubelet)[1775]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 16 23:57:19.461869 kubelet[1775]: E0416 23:57:19.461751 1775 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 16 23:57:19.464598 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 16 23:57:19.464758 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 16 23:57:19.465098 systemd[1]: kubelet.service: Consumed 877ms CPU time, 267.6M memory peak. Apr 16 23:57:28.578758 systemd[1]: Started sshd@3-77.42.22.14:22-4.175.71.9:57884.service - OpenSSH per-connection server daemon (4.175.71.9:57884). Apr 16 23:57:28.791187 sshd[1787]: Accepted publickey for core from 4.175.71.9 port 57884 ssh2: RSA SHA256:s5+cDtbQjwWFdMS63Oi2OpDWd90LKgkj0MOmWTIERLg Apr 16 23:57:28.793100 sshd-session[1787]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 23:57:28.802191 systemd-logind[1606]: New session 4 of user core. Apr 16 23:57:28.809328 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 16 23:57:28.892290 sshd[1790]: Connection closed by 4.175.71.9 port 57884 Apr 16 23:57:28.893428 sshd-session[1787]: pam_unix(sshd:session): session closed for user core Apr 16 23:57:28.899208 systemd[1]: sshd@3-77.42.22.14:22-4.175.71.9:57884.service: Deactivated successfully. Apr 16 23:57:28.903094 systemd[1]: session-4.scope: Deactivated successfully. Apr 16 23:57:28.905454 systemd-logind[1606]: Session 4 logged out. Waiting for processes to exit. Apr 16 23:57:28.908469 systemd-logind[1606]: Removed session 4. 
Apr 16 23:57:28.936031 systemd[1]: Started sshd@4-77.42.22.14:22-4.175.71.9:57900.service - OpenSSH per-connection server daemon (4.175.71.9:57900). Apr 16 23:57:29.142945 sshd[1796]: Accepted publickey for core from 4.175.71.9 port 57900 ssh2: RSA SHA256:s5+cDtbQjwWFdMS63Oi2OpDWd90LKgkj0MOmWTIERLg Apr 16 23:57:29.145411 sshd-session[1796]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 23:57:29.150881 systemd-logind[1606]: New session 5 of user core. Apr 16 23:57:29.167387 systemd[1]: Started session-5.scope - Session 5 of User core. Apr 16 23:57:29.236287 sshd[1799]: Connection closed by 4.175.71.9 port 57900 Apr 16 23:57:29.236842 sshd-session[1796]: pam_unix(sshd:session): session closed for user core Apr 16 23:57:29.241688 systemd[1]: sshd@4-77.42.22.14:22-4.175.71.9:57900.service: Deactivated successfully. Apr 16 23:57:29.243653 systemd[1]: session-5.scope: Deactivated successfully. Apr 16 23:57:29.245070 systemd-logind[1606]: Session 5 logged out. Waiting for processes to exit. Apr 16 23:57:29.247292 systemd-logind[1606]: Removed session 5. Apr 16 23:57:29.281289 systemd[1]: Started sshd@5-77.42.22.14:22-4.175.71.9:57912.service - OpenSSH per-connection server daemon (4.175.71.9:57912). Apr 16 23:57:29.483261 sshd[1805]: Accepted publickey for core from 4.175.71.9 port 57912 ssh2: RSA SHA256:s5+cDtbQjwWFdMS63Oi2OpDWd90LKgkj0MOmWTIERLg Apr 16 23:57:29.485308 sshd-session[1805]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 23:57:29.486838 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 16 23:57:29.489861 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 16 23:57:29.498462 systemd-logind[1606]: New session 6 of user core. Apr 16 23:57:29.503828 systemd[1]: Started session-6.scope - Session 6 of User core. 
Apr 16 23:57:29.583135 sshd[1811]: Connection closed by 4.175.71.9 port 57912 Apr 16 23:57:29.584649 sshd-session[1805]: pam_unix(sshd:session): session closed for user core Apr 16 23:57:29.588624 systemd[1]: sshd@5-77.42.22.14:22-4.175.71.9:57912.service: Deactivated successfully. Apr 16 23:57:29.591485 systemd[1]: session-6.scope: Deactivated successfully. Apr 16 23:57:29.593761 systemd-logind[1606]: Session 6 logged out. Waiting for processes to exit. Apr 16 23:57:29.595406 systemd-logind[1606]: Removed session 6. Apr 16 23:57:29.629288 systemd[1]: Started sshd@6-77.42.22.14:22-4.175.71.9:57922.service - OpenSSH per-connection server daemon (4.175.71.9:57922). Apr 16 23:57:29.651926 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 16 23:57:29.660663 (kubelet)[1824]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 16 23:57:29.695240 kubelet[1824]: E0416 23:57:29.695183 1824 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 16 23:57:29.699407 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 16 23:57:29.699593 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 16 23:57:29.699960 systemd[1]: kubelet.service: Consumed 172ms CPU time, 110.8M memory peak. Apr 16 23:57:29.822301 sshd[1817]: Accepted publickey for core from 4.175.71.9 port 57922 ssh2: RSA SHA256:s5+cDtbQjwWFdMS63Oi2OpDWd90LKgkj0MOmWTIERLg Apr 16 23:57:29.824305 sshd-session[1817]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 23:57:29.833195 systemd-logind[1606]: New session 7 of user core. 
Apr 16 23:57:29.840343 systemd[1]: Started session-7.scope - Session 7 of User core. Apr 16 23:57:29.903623 sudo[1834]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Apr 16 23:57:29.904315 sudo[1834]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 16 23:57:29.924598 sudo[1834]: pam_unix(sudo:session): session closed for user root Apr 16 23:57:29.956053 sshd[1833]: Connection closed by 4.175.71.9 port 57922 Apr 16 23:57:29.957448 sshd-session[1817]: pam_unix(sshd:session): session closed for user core Apr 16 23:57:29.963220 systemd[1]: sshd@6-77.42.22.14:22-4.175.71.9:57922.service: Deactivated successfully. Apr 16 23:57:29.967015 systemd[1]: session-7.scope: Deactivated successfully. Apr 16 23:57:29.970368 systemd-logind[1606]: Session 7 logged out. Waiting for processes to exit. Apr 16 23:57:29.972562 systemd-logind[1606]: Removed session 7. Apr 16 23:57:30.000280 systemd[1]: Started sshd@7-77.42.22.14:22-4.175.71.9:57936.service - OpenSSH per-connection server daemon (4.175.71.9:57936). Apr 16 23:57:30.206634 sshd[1840]: Accepted publickey for core from 4.175.71.9 port 57936 ssh2: RSA SHA256:s5+cDtbQjwWFdMS63Oi2OpDWd90LKgkj0MOmWTIERLg Apr 16 23:57:30.208910 sshd-session[1840]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 23:57:30.218622 systemd-logind[1606]: New session 8 of user core. Apr 16 23:57:30.229345 systemd[1]: Started session-8.scope - Session 8 of User core. 
Apr 16 23:57:30.281914 sudo[1845]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Apr 16 23:57:30.282719 sudo[1845]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 16 23:57:30.286911 sudo[1845]: pam_unix(sudo:session): session closed for user root Apr 16 23:57:30.297663 sudo[1844]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Apr 16 23:57:30.298292 sudo[1844]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 16 23:57:30.316430 systemd[1]: Starting audit-rules.service - Load Audit Rules... Apr 16 23:57:30.354853 augenrules[1867]: No rules Apr 16 23:57:30.356195 systemd[1]: audit-rules.service: Deactivated successfully. Apr 16 23:57:30.356455 systemd[1]: Finished audit-rules.service - Load Audit Rules. Apr 16 23:57:30.357680 sudo[1844]: pam_unix(sudo:session): session closed for user root Apr 16 23:57:30.387932 sshd[1843]: Connection closed by 4.175.71.9 port 57936 Apr 16 23:57:30.389417 sshd-session[1840]: pam_unix(sshd:session): session closed for user core Apr 16 23:57:30.394577 systemd[1]: sshd@7-77.42.22.14:22-4.175.71.9:57936.service: Deactivated successfully. Apr 16 23:57:30.396517 systemd[1]: session-8.scope: Deactivated successfully. Apr 16 23:57:30.397844 systemd-logind[1606]: Session 8 logged out. Waiting for processes to exit. Apr 16 23:57:30.399591 systemd-logind[1606]: Removed session 8. Apr 16 23:57:30.430411 systemd[1]: Started sshd@8-77.42.22.14:22-4.175.71.9:57938.service - OpenSSH per-connection server daemon (4.175.71.9:57938). 
Apr 16 23:57:30.626881 sshd[1876]: Accepted publickey for core from 4.175.71.9 port 57938 ssh2: RSA SHA256:s5+cDtbQjwWFdMS63Oi2OpDWd90LKgkj0MOmWTIERLg Apr 16 23:57:30.629085 sshd-session[1876]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 23:57:30.636162 systemd-logind[1606]: New session 9 of user core. Apr 16 23:57:30.645246 systemd[1]: Started session-9.scope - Session 9 of User core. Apr 16 23:57:30.697944 sudo[1880]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 16 23:57:30.698613 sudo[1880]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 16 23:57:30.972093 systemd[1]: Starting docker.service - Docker Application Container Engine... Apr 16 23:57:30.981446 (dockerd)[1897]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 16 23:57:31.185205 dockerd[1897]: time="2026-04-16T23:57:31.185140461Z" level=info msg="Starting up" Apr 16 23:57:31.185782 dockerd[1897]: time="2026-04-16T23:57:31.185756113Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Apr 16 23:57:31.197302 dockerd[1897]: time="2026-04-16T23:57:31.197250304Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Apr 16 23:57:31.245656 dockerd[1897]: time="2026-04-16T23:57:31.245373455Z" level=info msg="Loading containers: start." Apr 16 23:57:31.254187 kernel: Initializing XFRM netlink socket Apr 16 23:57:31.486593 systemd-networkd[1489]: docker0: Link UP Apr 16 23:57:31.491784 dockerd[1897]: time="2026-04-16T23:57:31.491738198Z" level=info msg="Loading containers: done." 
Apr 16 23:57:31.507712 dockerd[1897]: time="2026-04-16T23:57:31.507450183Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 16 23:57:31.507712 dockerd[1897]: time="2026-04-16T23:57:31.507499714Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Apr 16 23:57:31.507712 dockerd[1897]: time="2026-04-16T23:57:31.507566157Z" level=info msg="Initializing buildkit" Apr 16 23:57:31.541453 dockerd[1897]: time="2026-04-16T23:57:31.541407399Z" level=info msg="Completed buildkit initialization" Apr 16 23:57:31.548430 dockerd[1897]: time="2026-04-16T23:57:31.548400920Z" level=info msg="Daemon has completed initialization" Apr 16 23:57:31.549282 dockerd[1897]: time="2026-04-16T23:57:31.548480583Z" level=info msg="API listen on /run/docker.sock" Apr 16 23:57:31.549169 systemd[1]: Started docker.service - Docker Application Container Engine. Apr 16 23:57:32.086892 containerd[1631]: time="2026-04-16T23:57:32.086809057Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.11\"" Apr 16 23:57:32.703963 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3577316008.mount: Deactivated successfully. 
Apr 16 23:57:33.754004 containerd[1631]: time="2026-04-16T23:57:33.753463637Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:57:33.754382 containerd[1631]: time="2026-04-16T23:57:33.754348072Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.11: active requests=0, bytes read=30194089" Apr 16 23:57:33.754934 containerd[1631]: time="2026-04-16T23:57:33.754901962Z" level=info msg="ImageCreate event name:\"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:57:33.757602 containerd[1631]: time="2026-04-16T23:57:33.757573771Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:18e9f2b6e4d67c24941e14b2d41ec0aa6e5f628e39f2ef2163e176de85bbe39e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:57:33.758605 containerd[1631]: time="2026-04-16T23:57:33.758194989Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.11\" with image id \"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:18e9f2b6e4d67c24941e14b2d41ec0aa6e5f628e39f2ef2163e176de85bbe39e\", size \"30190588\" in 1.671324776s" Apr 16 23:57:33.758605 containerd[1631]: time="2026-04-16T23:57:33.758216752Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.11\" returns image reference \"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\"" Apr 16 23:57:33.758670 containerd[1631]: time="2026-04-16T23:57:33.758602788Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.11\"" Apr 16 23:57:35.067370 containerd[1631]: time="2026-04-16T23:57:35.067312031Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.11\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:57:35.068683 containerd[1631]: time="2026-04-16T23:57:35.068465308Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.11: active requests=0, bytes read=26171469" Apr 16 23:57:35.069763 containerd[1631]: time="2026-04-16T23:57:35.069738501Z" level=info msg="ImageCreate event name:\"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:57:35.074395 containerd[1631]: time="2026-04-16T23:57:35.074368960Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7579451c5b3c2715da4a263c5d80a3367a24fdc12e86fde6851674d567d1dfb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:57:35.075094 containerd[1631]: time="2026-04-16T23:57:35.075052967Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.11\" with image id \"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7579451c5b3c2715da4a263c5d80a3367a24fdc12e86fde6851674d567d1dfb2\", size \"27737794\" in 1.316433358s" Apr 16 23:57:35.075174 containerd[1631]: time="2026-04-16T23:57:35.075163533Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.11\" returns image reference \"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\"" Apr 16 23:57:35.075581 containerd[1631]: time="2026-04-16T23:57:35.075541808Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.11\"" Apr 16 23:57:36.128790 containerd[1631]: time="2026-04-16T23:57:36.128737767Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:57:36.129969 containerd[1631]: 
time="2026-04-16T23:57:36.129828320Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.11: active requests=0, bytes read=20289778" Apr 16 23:57:36.130893 containerd[1631]: time="2026-04-16T23:57:36.130871922Z" level=info msg="ImageCreate event name:\"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:57:36.132930 containerd[1631]: time="2026-04-16T23:57:36.132908198Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:5506f0f94c4d9aeb071664893aabc12166bcb7f775008a6fff02d004e6091d28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:57:36.134701 containerd[1631]: time="2026-04-16T23:57:36.134673596Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.11\" with image id \"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:5506f0f94c4d9aeb071664893aabc12166bcb7f775008a6fff02d004e6091d28\", size \"21856121\" in 1.059110359s" Apr 16 23:57:36.134774 containerd[1631]: time="2026-04-16T23:57:36.134760515Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.11\" returns image reference \"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\"" Apr 16 23:57:36.136701 containerd[1631]: time="2026-04-16T23:57:36.136643707Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.11\"" Apr 16 23:57:37.140534 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3861427107.mount: Deactivated successfully. 
Apr 16 23:57:37.508539 containerd[1631]: time="2026-04-16T23:57:37.508329795Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:57:37.509568 containerd[1631]: time="2026-04-16T23:57:37.509462290Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.11: active requests=0, bytes read=32010739" Apr 16 23:57:37.510382 containerd[1631]: time="2026-04-16T23:57:37.510359864Z" level=info msg="ImageCreate event name:\"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:57:37.511993 containerd[1631]: time="2026-04-16T23:57:37.511970376Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8d18637b5c5f58a4ca0163d3cf184e53d4c522963c242860562be7cb25e9303e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:57:37.512390 containerd[1631]: time="2026-04-16T23:57:37.512365112Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.11\" with image id \"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\", repo tag \"registry.k8s.io/kube-proxy:v1.33.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:8d18637b5c5f58a4ca0163d3cf184e53d4c522963c242860562be7cb25e9303e\", size \"32009730\" in 1.375699892s" Apr 16 23:57:37.512444 containerd[1631]: time="2026-04-16T23:57:37.512434192Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.11\" returns image reference \"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\"" Apr 16 23:57:37.512880 containerd[1631]: time="2026-04-16T23:57:37.512852032Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Apr 16 23:57:38.065894 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1700038803.mount: Deactivated successfully. 
Apr 16 23:57:38.930665 containerd[1631]: time="2026-04-16T23:57:38.930607672Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:57:38.931621 containerd[1631]: time="2026-04-16T23:57:38.931597619Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942332" Apr 16 23:57:38.933167 containerd[1631]: time="2026-04-16T23:57:38.932505528Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:57:38.934470 containerd[1631]: time="2026-04-16T23:57:38.934429337Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:57:38.935294 containerd[1631]: time="2026-04-16T23:57:38.934959373Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.422033015s" Apr 16 23:57:38.935294 containerd[1631]: time="2026-04-16T23:57:38.934990344Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Apr 16 23:57:38.935402 containerd[1631]: time="2026-04-16T23:57:38.935375481Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Apr 16 23:57:39.430456 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2622310373.mount: Deactivated successfully. 
Apr 16 23:57:39.437018 containerd[1631]: time="2026-04-16T23:57:39.436985613Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 16 23:57:39.437892 containerd[1631]: time="2026-04-16T23:57:39.437703607Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321160" Apr 16 23:57:39.438863 containerd[1631]: time="2026-04-16T23:57:39.438844339Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 16 23:57:39.440807 containerd[1631]: time="2026-04-16T23:57:39.440786280Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 16 23:57:39.441363 containerd[1631]: time="2026-04-16T23:57:39.441344053Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 505.93822ms" Apr 16 23:57:39.441425 containerd[1631]: time="2026-04-16T23:57:39.441414214Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Apr 16 23:57:39.441947 containerd[1631]: time="2026-04-16T23:57:39.441925243Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\"" Apr 16 23:57:39.945153 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Apr 16 23:57:39.948385 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 16 23:57:39.955878 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1693485201.mount: Deactivated successfully.
Apr 16 23:57:40.110416 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 23:57:40.120636 (kubelet)[2257]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 16 23:57:40.152145 kubelet[2257]: E0416 23:57:40.152101 2257 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 16 23:57:40.155476 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 16 23:57:40.155634 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 16 23:57:40.156170 systemd[1]: kubelet.service: Consumed 149ms CPU time, 110.6M memory peak.
Apr 16 23:57:40.813597 containerd[1631]: time="2026-04-16T23:57:40.813563556Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:57:40.831292 containerd[1631]: time="2026-04-16T23:57:40.831265498Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=23719532"
Apr 16 23:57:40.835436 containerd[1631]: time="2026-04-16T23:57:40.835324562Z" level=info msg="ImageCreate event name:\"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:57:40.837147 containerd[1631]: time="2026-04-16T23:57:40.837128713Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:57:40.837723 containerd[1631]: time="2026-04-16T23:57:40.837703806Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"23716032\" in 1.395756985s"
Apr 16 23:57:40.837761 containerd[1631]: time="2026-04-16T23:57:40.837723697Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\""
Apr 16 23:57:42.798298 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 23:57:42.798414 systemd[1]: kubelet.service: Consumed 149ms CPU time, 110.6M memory peak.
Apr 16 23:57:42.800852 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 16 23:57:42.828415 systemd[1]: Reload requested from client PID 2343 ('systemctl') (unit session-9.scope)...
Apr 16 23:57:42.828428 systemd[1]: Reloading...
Apr 16 23:57:42.957142 zram_generator::config[2395]: No configuration found.
Apr 16 23:57:43.120085 systemd[1]: Reloading finished in 291 ms.
Apr 16 23:57:43.156663 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Apr 16 23:57:43.156743 systemd[1]: kubelet.service: Failed with result 'signal'.
Apr 16 23:57:43.156984 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 23:57:43.157017 systemd[1]: kubelet.service: Consumed 120ms CPU time, 98.3M memory peak.
Apr 16 23:57:43.158721 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 16 23:57:43.311327 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 23:57:43.323882 (kubelet)[2438]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 16 23:57:43.353884 kubelet[2438]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 16 23:57:43.353884 kubelet[2438]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Apr 16 23:57:43.353884 kubelet[2438]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 16 23:57:43.354284 kubelet[2438]: I0416 23:57:43.353941 2438 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Apr 16 23:57:43.543909 kubelet[2438]: I0416 23:57:43.543805 2438 server.go:530] "Kubelet version" kubeletVersion="v1.33.8"
Apr 16 23:57:43.543909 kubelet[2438]: I0416 23:57:43.543825 2438 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 16 23:57:43.544267 kubelet[2438]: I0416 23:57:43.544249 2438 server.go:956] "Client rotation is on, will bootstrap in background"
Apr 16 23:57:43.563821 kubelet[2438]: E0416 23:57:43.563773 2438 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://77.42.22.14:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 77.42.22.14:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 16 23:57:43.565757 kubelet[2438]: I0416 23:57:43.565466 2438 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 16 23:57:43.577507 kubelet[2438]: I0416 23:57:43.577493 2438 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Apr 16 23:57:43.580517 kubelet[2438]: I0416 23:57:43.580485 2438 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Apr 16 23:57:43.581171 kubelet[2438]: I0416 23:57:43.581143 2438 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 16 23:57:43.581276 kubelet[2438]: I0416 23:57:43.581163 2438 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459-2-4-n-3f94367fd3","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Apr 16 23:57:43.581276 kubelet[2438]: I0416 23:57:43.581275 2438 topology_manager.go:138] "Creating topology manager with none policy"
Apr 16 23:57:43.581276 kubelet[2438]: I0416 23:57:43.581282 2438 container_manager_linux.go:303] "Creating device plugin manager"
Apr 16 23:57:43.581587 kubelet[2438]: I0416 23:57:43.581382 2438 state_mem.go:36] "Initialized new in-memory state store"
Apr 16 23:57:43.585675 kubelet[2438]: I0416 23:57:43.585636 2438 kubelet.go:480] "Attempting to sync node with API server"
Apr 16 23:57:43.585675 kubelet[2438]: I0416 23:57:43.585653 2438 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 16 23:57:43.585675 kubelet[2438]: I0416 23:57:43.585672 2438 kubelet.go:386] "Adding apiserver pod source"
Apr 16 23:57:43.587138 kubelet[2438]: I0416 23:57:43.586902 2438 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 16 23:57:43.593805 kubelet[2438]: E0416 23:57:43.593767 2438 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://77.42.22.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459-2-4-n-3f94367fd3&limit=500&resourceVersion=0\": dial tcp 77.42.22.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 16 23:57:43.594702 kubelet[2438]: E0416 23:57:43.594668 2438 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://77.42.22.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 77.42.22.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 16 23:57:43.594846 kubelet[2438]: I0416 23:57:43.594807 2438 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1"
Apr 16 23:57:43.595906 kubelet[2438]: I0416 23:57:43.595584 2438 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Apr 16 23:57:43.596570 kubelet[2438]: W0416 23:57:43.596548 2438 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Apr 16 23:57:43.602463 kubelet[2438]: I0416 23:57:43.602441 2438 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Apr 16 23:57:43.602512 kubelet[2438]: I0416 23:57:43.602504 2438 server.go:1289] "Started kubelet"
Apr 16 23:57:43.602846 kubelet[2438]: I0416 23:57:43.602785 2438 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Apr 16 23:57:43.604412 kubelet[2438]: I0416 23:57:43.604383 2438 server.go:317] "Adding debug handlers to kubelet server"
Apr 16 23:57:43.608014 kubelet[2438]: I0416 23:57:43.607349 2438 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 16 23:57:43.608014 kubelet[2438]: I0416 23:57:43.607838 2438 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 16 23:57:43.609521 kubelet[2438]: I0416 23:57:43.609415 2438 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Apr 16 23:57:43.609711 kubelet[2438]: E0416 23:57:43.607979 2438 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://77.42.22.14:6443/api/v1/namespaces/default/events\": dial tcp 77.42.22.14:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4459-2-4-n-3f94367fd3.18a6fbb656c2bb0e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459-2-4-n-3f94367fd3,UID:ci-4459-2-4-n-3f94367fd3,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459-2-4-n-3f94367fd3,},FirstTimestamp:2026-04-16 23:57:43.602461454 +0000 UTC m=+0.274782149,LastTimestamp:2026-04-16 23:57:43.602461454 +0000 UTC m=+0.274782149,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459-2-4-n-3f94367fd3,}"
Apr 16 23:57:43.610315 kubelet[2438]: I0416 23:57:43.610260 2438 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Apr 16 23:57:43.618084 kubelet[2438]: E0416 23:57:43.617294 2438 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459-2-4-n-3f94367fd3\" not found"
Apr 16 23:57:43.618084 kubelet[2438]: I0416 23:57:43.617323 2438 volume_manager.go:297] "Starting Kubelet Volume Manager"
Apr 16 23:57:43.618084 kubelet[2438]: I0416 23:57:43.617428 2438 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Apr 16 23:57:43.618084 kubelet[2438]: I0416 23:57:43.617460 2438 reconciler.go:26] "Reconciler: start to sync state"
Apr 16 23:57:43.618084 kubelet[2438]: E0416 23:57:43.617700 2438 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://77.42.22.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 77.42.22.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 16 23:57:43.618084 kubelet[2438]: E0416 23:57:43.617842 2438 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://77.42.22.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-4-n-3f94367fd3?timeout=10s\": dial tcp 77.42.22.14:6443: connect: connection refused" interval="200ms"
Apr 16 23:57:43.620059 kubelet[2438]: I0416 23:57:43.620034 2438 factory.go:223] Registration of the systemd container factory successfully
Apr 16 23:57:43.620458 kubelet[2438]: I0416 23:57:43.620271 2438 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Apr 16 23:57:43.621515 kubelet[2438]: E0416 23:57:43.621495 2438 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Apr 16 23:57:43.621651 kubelet[2438]: I0416 23:57:43.621635 2438 factory.go:223] Registration of the containerd container factory successfully
Apr 16 23:57:43.635816 kubelet[2438]: I0416 23:57:43.635769 2438 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Apr 16 23:57:43.640165 kubelet[2438]: I0416 23:57:43.639834 2438 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Apr 16 23:57:43.640165 kubelet[2438]: I0416 23:57:43.639849 2438 status_manager.go:230] "Starting to sync pod status with apiserver"
Apr 16 23:57:43.640165 kubelet[2438]: I0416 23:57:43.639863 2438 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Apr 16 23:57:43.640165 kubelet[2438]: I0416 23:57:43.639870 2438 kubelet.go:2436] "Starting kubelet main sync loop"
Apr 16 23:57:43.640165 kubelet[2438]: E0416 23:57:43.639902 2438 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 16 23:57:43.644663 kubelet[2438]: E0416 23:57:43.644642 2438 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://77.42.22.14:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 77.42.22.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Apr 16 23:57:43.645270 kubelet[2438]: I0416 23:57:43.645242 2438 cpu_manager.go:221] "Starting CPU manager" policy="none"
Apr 16 23:57:43.645270 kubelet[2438]: I0416 23:57:43.645253 2438 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Apr 16 23:57:43.645270 kubelet[2438]: I0416 23:57:43.645265 2438 state_mem.go:36] "Initialized new in-memory state store"
Apr 16 23:57:43.646675 kubelet[2438]: I0416 23:57:43.646651 2438 policy_none.go:49] "None policy: Start"
Apr 16 23:57:43.646675 kubelet[2438]: I0416 23:57:43.646666 2438 memory_manager.go:186] "Starting memorymanager" policy="None"
Apr 16 23:57:43.646675 kubelet[2438]: I0416 23:57:43.646676 2438 state_mem.go:35] "Initializing new in-memory state store"
Apr 16 23:57:43.651217 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Apr 16 23:57:43.661043 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Apr 16 23:57:43.664005 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Apr 16 23:57:43.670773 kubelet[2438]: E0416 23:57:43.670747 2438 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Apr 16 23:57:43.670931 kubelet[2438]: I0416 23:57:43.670916 2438 eviction_manager.go:189] "Eviction manager: starting control loop"
Apr 16 23:57:43.670950 kubelet[2438]: I0416 23:57:43.670928 2438 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Apr 16 23:57:43.671555 kubelet[2438]: I0416 23:57:43.671539 2438 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Apr 16 23:57:43.672628 kubelet[2438]: E0416 23:57:43.672612 2438 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Apr 16 23:57:43.672709 kubelet[2438]: E0416 23:57:43.672644 2438 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4459-2-4-n-3f94367fd3\" not found"
Apr 16 23:57:43.760892 systemd[1]: Created slice kubepods-burstable-pod489920b315fb52d002027ba533ed98f2.slice - libcontainer container kubepods-burstable-pod489920b315fb52d002027ba533ed98f2.slice.
Apr 16 23:57:43.774729 kubelet[2438]: I0416 23:57:43.774348 2438 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-4-n-3f94367fd3"
Apr 16 23:57:43.774895 kubelet[2438]: E0416 23:57:43.774837 2438 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://77.42.22.14:6443/api/v1/nodes\": dial tcp 77.42.22.14:6443: connect: connection refused" node="ci-4459-2-4-n-3f94367fd3"
Apr 16 23:57:43.777899 kubelet[2438]: E0416 23:57:43.777869 2438 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-4-n-3f94367fd3\" not found" node="ci-4459-2-4-n-3f94367fd3"
Apr 16 23:57:43.785913 systemd[1]: Created slice kubepods-burstable-podf9787872430400989d79b7f211f8573c.slice - libcontainer container kubepods-burstable-podf9787872430400989d79b7f211f8573c.slice.
Apr 16 23:57:43.790039 kubelet[2438]: E0416 23:57:43.790012 2438 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-4-n-3f94367fd3\" not found" node="ci-4459-2-4-n-3f94367fd3"
Apr 16 23:57:43.793490 systemd[1]: Created slice kubepods-burstable-pod9514f271368628a7f40a1c49d6262c57.slice - libcontainer container kubepods-burstable-pod9514f271368628a7f40a1c49d6262c57.slice.
Apr 16 23:57:43.797163 kubelet[2438]: E0416 23:57:43.797000 2438 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-4-n-3f94367fd3\" not found" node="ci-4459-2-4-n-3f94367fd3"
Apr 16 23:57:43.818608 kubelet[2438]: E0416 23:57:43.818541 2438 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://77.42.22.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-4-n-3f94367fd3?timeout=10s\": dial tcp 77.42.22.14:6443: connect: connection refused" interval="400ms"
Apr 16 23:57:43.918796 kubelet[2438]: I0416 23:57:43.918686 2438 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/489920b315fb52d002027ba533ed98f2-ca-certs\") pod \"kube-controller-manager-ci-4459-2-4-n-3f94367fd3\" (UID: \"489920b315fb52d002027ba533ed98f2\") " pod="kube-system/kube-controller-manager-ci-4459-2-4-n-3f94367fd3"
Apr 16 23:57:43.918796 kubelet[2438]: I0416 23:57:43.918748 2438 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/489920b315fb52d002027ba533ed98f2-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459-2-4-n-3f94367fd3\" (UID: \"489920b315fb52d002027ba533ed98f2\") " pod="kube-system/kube-controller-manager-ci-4459-2-4-n-3f94367fd3"
Apr 16 23:57:43.918796 kubelet[2438]: I0416 23:57:43.918761 2438 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f9787872430400989d79b7f211f8573c-kubeconfig\") pod \"kube-scheduler-ci-4459-2-4-n-3f94367fd3\" (UID: \"f9787872430400989d79b7f211f8573c\") " pod="kube-system/kube-scheduler-ci-4459-2-4-n-3f94367fd3"
Apr 16 23:57:43.918796 kubelet[2438]: I0416 23:57:43.918776 2438 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/489920b315fb52d002027ba533ed98f2-flexvolume-dir\") pod \"kube-controller-manager-ci-4459-2-4-n-3f94367fd3\" (UID: \"489920b315fb52d002027ba533ed98f2\") " pod="kube-system/kube-controller-manager-ci-4459-2-4-n-3f94367fd3"
Apr 16 23:57:43.918796 kubelet[2438]: I0416 23:57:43.918787 2438 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/489920b315fb52d002027ba533ed98f2-k8s-certs\") pod \"kube-controller-manager-ci-4459-2-4-n-3f94367fd3\" (UID: \"489920b315fb52d002027ba533ed98f2\") " pod="kube-system/kube-controller-manager-ci-4459-2-4-n-3f94367fd3"
Apr 16 23:57:43.919187 kubelet[2438]: I0416 23:57:43.918797 2438 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/489920b315fb52d002027ba533ed98f2-kubeconfig\") pod \"kube-controller-manager-ci-4459-2-4-n-3f94367fd3\" (UID: \"489920b315fb52d002027ba533ed98f2\") " pod="kube-system/kube-controller-manager-ci-4459-2-4-n-3f94367fd3"
Apr 16 23:57:43.919187 kubelet[2438]: I0416 23:57:43.918808 2438 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9514f271368628a7f40a1c49d6262c57-ca-certs\") pod \"kube-apiserver-ci-4459-2-4-n-3f94367fd3\" (UID: \"9514f271368628a7f40a1c49d6262c57\") " pod="kube-system/kube-apiserver-ci-4459-2-4-n-3f94367fd3"
Apr 16 23:57:43.919187 kubelet[2438]: I0416 23:57:43.918819 2438 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9514f271368628a7f40a1c49d6262c57-k8s-certs\") pod \"kube-apiserver-ci-4459-2-4-n-3f94367fd3\" (UID: \"9514f271368628a7f40a1c49d6262c57\") " pod="kube-system/kube-apiserver-ci-4459-2-4-n-3f94367fd3"
Apr 16 23:57:43.919187 kubelet[2438]: I0416 23:57:43.918831 2438 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9514f271368628a7f40a1c49d6262c57-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459-2-4-n-3f94367fd3\" (UID: \"9514f271368628a7f40a1c49d6262c57\") " pod="kube-system/kube-apiserver-ci-4459-2-4-n-3f94367fd3"
Apr 16 23:57:43.978617 kubelet[2438]: I0416 23:57:43.978563 2438 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-4-n-3f94367fd3"
Apr 16 23:57:43.979312 kubelet[2438]: E0416 23:57:43.979258 2438 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://77.42.22.14:6443/api/v1/nodes\": dial tcp 77.42.22.14:6443: connect: connection refused" node="ci-4459-2-4-n-3f94367fd3"
Apr 16 23:57:44.080754 containerd[1631]: time="2026-04-16T23:57:44.080611682Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459-2-4-n-3f94367fd3,Uid:489920b315fb52d002027ba533ed98f2,Namespace:kube-system,Attempt:0,}"
Apr 16 23:57:44.092477 containerd[1631]: time="2026-04-16T23:57:44.092408381Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459-2-4-n-3f94367fd3,Uid:f9787872430400989d79b7f211f8573c,Namespace:kube-system,Attempt:0,}"
Apr 16 23:57:44.098537 containerd[1631]: time="2026-04-16T23:57:44.097971220Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459-2-4-n-3f94367fd3,Uid:9514f271368628a7f40a1c49d6262c57,Namespace:kube-system,Attempt:0,}"
Apr 16 23:57:44.105606 containerd[1631]: time="2026-04-16T23:57:44.105533985Z" level=info msg="connecting to shim 4cb2f1bd8f5f5329406e48bc8fab202d1d8d2c92d56eeeb1b944070138069c91" address="unix:///run/containerd/s/b13b3d2cf5dcddcd1dfbe12c11dd037abf2fff376a96b247997f5dac556ff8e4" namespace=k8s.io protocol=ttrpc version=3
Apr 16 23:57:44.134329 containerd[1631]: time="2026-04-16T23:57:44.134269721Z" level=info msg="connecting to shim 659120b34560f50eed23b470d336079b50fdb5c68c09abed0edf27c7102a83a2" address="unix:///run/containerd/s/369fb9492deb570a866f0a30993a1e40b05186678ded7f96cd3ac98445cb73c5" namespace=k8s.io protocol=ttrpc version=3
Apr 16 23:57:44.140509 containerd[1631]: time="2026-04-16T23:57:44.140482186Z" level=info msg="connecting to shim 05b1fce90064c4523e4b24ce4756974667b9f912783baa7fc61d2daf8bf02e3e" address="unix:///run/containerd/s/8e82e3a1a7ce8e420653345306ee2b3eda7ecb76fa00e72e998b1661ce875aa0" namespace=k8s.io protocol=ttrpc version=3
Apr 16 23:57:44.160245 systemd[1]: Started cri-containerd-4cb2f1bd8f5f5329406e48bc8fab202d1d8d2c92d56eeeb1b944070138069c91.scope - libcontainer container 4cb2f1bd8f5f5329406e48bc8fab202d1d8d2c92d56eeeb1b944070138069c91.
Apr 16 23:57:44.164241 systemd[1]: Started cri-containerd-659120b34560f50eed23b470d336079b50fdb5c68c09abed0edf27c7102a83a2.scope - libcontainer container 659120b34560f50eed23b470d336079b50fdb5c68c09abed0edf27c7102a83a2.
Apr 16 23:57:44.174359 systemd[1]: Started cri-containerd-05b1fce90064c4523e4b24ce4756974667b9f912783baa7fc61d2daf8bf02e3e.scope - libcontainer container 05b1fce90064c4523e4b24ce4756974667b9f912783baa7fc61d2daf8bf02e3e.
Apr 16 23:57:44.219082 containerd[1631]: time="2026-04-16T23:57:44.219009893Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459-2-4-n-3f94367fd3,Uid:9514f271368628a7f40a1c49d6262c57,Namespace:kube-system,Attempt:0,} returns sandbox id \"659120b34560f50eed23b470d336079b50fdb5c68c09abed0edf27c7102a83a2\"" Apr 16 23:57:44.221039 kubelet[2438]: E0416 23:57:44.220394 2438 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://77.42.22.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-4-n-3f94367fd3?timeout=10s\": dial tcp 77.42.22.14:6443: connect: connection refused" interval="800ms" Apr 16 23:57:44.223758 containerd[1631]: time="2026-04-16T23:57:44.223727870Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459-2-4-n-3f94367fd3,Uid:489920b315fb52d002027ba533ed98f2,Namespace:kube-system,Attempt:0,} returns sandbox id \"4cb2f1bd8f5f5329406e48bc8fab202d1d8d2c92d56eeeb1b944070138069c91\"" Apr 16 23:57:44.225274 containerd[1631]: time="2026-04-16T23:57:44.225248061Z" level=info msg="CreateContainer within sandbox \"659120b34560f50eed23b470d336079b50fdb5c68c09abed0edf27c7102a83a2\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 16 23:57:44.229554 containerd[1631]: time="2026-04-16T23:57:44.229298687Z" level=info msg="CreateContainer within sandbox \"4cb2f1bd8f5f5329406e48bc8fab202d1d8d2c92d56eeeb1b944070138069c91\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 16 23:57:44.237369 containerd[1631]: time="2026-04-16T23:57:44.237350854Z" level=info msg="Container e290328ba557ff986ab7a15107e7ddff95ee04153d3421a682cbd979fdf56f3b: CDI devices from CRI Config.CDIDevices: []" Apr 16 23:57:44.238919 containerd[1631]: time="2026-04-16T23:57:44.238892009Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-ci-4459-2-4-n-3f94367fd3,Uid:f9787872430400989d79b7f211f8573c,Namespace:kube-system,Attempt:0,} returns sandbox id \"05b1fce90064c4523e4b24ce4756974667b9f912783baa7fc61d2daf8bf02e3e\"" Apr 16 23:57:44.239490 containerd[1631]: time="2026-04-16T23:57:44.239404034Z" level=info msg="Container c2813e8c39716c82c86835a0b9cbf77da7215e46d0e519227c636e6f4c3c5b53: CDI devices from CRI Config.CDIDevices: []" Apr 16 23:57:44.242931 containerd[1631]: time="2026-04-16T23:57:44.242896118Z" level=info msg="CreateContainer within sandbox \"05b1fce90064c4523e4b24ce4756974667b9f912783baa7fc61d2daf8bf02e3e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 16 23:57:44.243285 containerd[1631]: time="2026-04-16T23:57:44.243226385Z" level=info msg="CreateContainer within sandbox \"659120b34560f50eed23b470d336079b50fdb5c68c09abed0edf27c7102a83a2\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"e290328ba557ff986ab7a15107e7ddff95ee04153d3421a682cbd979fdf56f3b\"" Apr 16 23:57:44.243633 containerd[1631]: time="2026-04-16T23:57:44.243613535Z" level=info msg="StartContainer for \"e290328ba557ff986ab7a15107e7ddff95ee04153d3421a682cbd979fdf56f3b\"" Apr 16 23:57:44.244449 containerd[1631]: time="2026-04-16T23:57:44.244410960Z" level=info msg="connecting to shim e290328ba557ff986ab7a15107e7ddff95ee04153d3421a682cbd979fdf56f3b" address="unix:///run/containerd/s/369fb9492deb570a866f0a30993a1e40b05186678ded7f96cd3ac98445cb73c5" protocol=ttrpc version=3 Apr 16 23:57:44.247530 containerd[1631]: time="2026-04-16T23:57:44.247509785Z" level=info msg="CreateContainer within sandbox \"4cb2f1bd8f5f5329406e48bc8fab202d1d8d2c92d56eeeb1b944070138069c91\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"c2813e8c39716c82c86835a0b9cbf77da7215e46d0e519227c636e6f4c3c5b53\"" Apr 16 23:57:44.247817 containerd[1631]: time="2026-04-16T23:57:44.247712918Z" level=info msg="StartContainer for 
\"c2813e8c39716c82c86835a0b9cbf77da7215e46d0e519227c636e6f4c3c5b53\"" Apr 16 23:57:44.248327 containerd[1631]: time="2026-04-16T23:57:44.248307780Z" level=info msg="connecting to shim c2813e8c39716c82c86835a0b9cbf77da7215e46d0e519227c636e6f4c3c5b53" address="unix:///run/containerd/s/b13b3d2cf5dcddcd1dfbe12c11dd037abf2fff376a96b247997f5dac556ff8e4" protocol=ttrpc version=3 Apr 16 23:57:44.255194 containerd[1631]: time="2026-04-16T23:57:44.255173071Z" level=info msg="Container 07d476e7b77515fbaba8754b02538963aaaf475c80d27f37346d2fefeeaacefa: CDI devices from CRI Config.CDIDevices: []" Apr 16 23:57:44.264014 containerd[1631]: time="2026-04-16T23:57:44.263989332Z" level=info msg="CreateContainer within sandbox \"05b1fce90064c4523e4b24ce4756974667b9f912783baa7fc61d2daf8bf02e3e\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"07d476e7b77515fbaba8754b02538963aaaf475c80d27f37346d2fefeeaacefa\"" Apr 16 23:57:44.264289 containerd[1631]: time="2026-04-16T23:57:44.264271092Z" level=info msg="StartContainer for \"07d476e7b77515fbaba8754b02538963aaaf475c80d27f37346d2fefeeaacefa\"" Apr 16 23:57:44.265093 containerd[1631]: time="2026-04-16T23:57:44.265061259Z" level=info msg="connecting to shim 07d476e7b77515fbaba8754b02538963aaaf475c80d27f37346d2fefeeaacefa" address="unix:///run/containerd/s/8e82e3a1a7ce8e420653345306ee2b3eda7ecb76fa00e72e998b1661ce875aa0" protocol=ttrpc version=3 Apr 16 23:57:44.268223 systemd[1]: Started cri-containerd-c2813e8c39716c82c86835a0b9cbf77da7215e46d0e519227c636e6f4c3c5b53.scope - libcontainer container c2813e8c39716c82c86835a0b9cbf77da7215e46d0e519227c636e6f4c3c5b53. Apr 16 23:57:44.274225 systemd[1]: Started cri-containerd-e290328ba557ff986ab7a15107e7ddff95ee04153d3421a682cbd979fdf56f3b.scope - libcontainer container e290328ba557ff986ab7a15107e7ddff95ee04153d3421a682cbd979fdf56f3b. 
Apr 16 23:57:44.294225 systemd[1]: Started cri-containerd-07d476e7b77515fbaba8754b02538963aaaf475c80d27f37346d2fefeeaacefa.scope - libcontainer container 07d476e7b77515fbaba8754b02538963aaaf475c80d27f37346d2fefeeaacefa.
Apr 16 23:57:44.348507 containerd[1631]: time="2026-04-16T23:57:44.348365098Z" level=info msg="StartContainer for \"c2813e8c39716c82c86835a0b9cbf77da7215e46d0e519227c636e6f4c3c5b53\" returns successfully"
Apr 16 23:57:44.355234 containerd[1631]: time="2026-04-16T23:57:44.355206446Z" level=info msg="StartContainer for \"07d476e7b77515fbaba8754b02538963aaaf475c80d27f37346d2fefeeaacefa\" returns successfully"
Apr 16 23:57:44.357208 containerd[1631]: time="2026-04-16T23:57:44.357012006Z" level=info msg="StartContainer for \"e290328ba557ff986ab7a15107e7ddff95ee04153d3421a682cbd979fdf56f3b\" returns successfully"
Apr 16 23:57:44.382455 kubelet[2438]: I0416 23:57:44.382425 2438 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-4-n-3f94367fd3"
Apr 16 23:57:44.382791 kubelet[2438]: E0416 23:57:44.382671 2438 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://77.42.22.14:6443/api/v1/nodes\": dial tcp 77.42.22.14:6443: connect: connection refused" node="ci-4459-2-4-n-3f94367fd3"
Apr 16 23:57:44.651533 kubelet[2438]: E0416 23:57:44.651438 2438 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-4-n-3f94367fd3\" not found" node="ci-4459-2-4-n-3f94367fd3"
Apr 16 23:57:44.651943 kubelet[2438]: E0416 23:57:44.651673 2438 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-4-n-3f94367fd3\" not found" node="ci-4459-2-4-n-3f94367fd3"
Apr 16 23:57:44.655223 kubelet[2438]: E0416 23:57:44.655198 2438 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-4-n-3f94367fd3\" not found" node="ci-4459-2-4-n-3f94367fd3"
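The containerd entries above all share one key=value payload layout (`time="…" level=info msg="…"`), where values are either bare tokens or double-quoted strings containing `\"` escapes. A minimal parsing sketch for pulling fields out of such payloads (the sample line is copied from the log; the helper name is illustrative, not part of any real tool):

```python
import re

# Split a containerd-style log payload into its key=value fields.
# Values are either "quoted" (possibly containing \" escapes) or bare tokens.
def parse_fields(payload: str) -> dict:
    pairs = re.findall(r'(\w+)=("(?:[^"\\]|\\.)*"|\S+)', payload)
    return {key: value.strip('"') for key, value in pairs}

# One payload copied from the log above.
line = ('time="2026-04-16T23:57:44.348365098Z" level=info '
        'msg="StartContainer for '
        '\\"c2813e8c39716c82c86835a0b9cbf77da7215e46d0e519227c636e6f4c3c5b53\\" '
        'returns successfully"')
fields = parse_fields(line)
```

After parsing, `fields["level"]` is `info` and `fields["msg"]` carries the quoted message with its inner escapes intact.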
Apr 16 23:57:45.172502 kubelet[2438]: E0416 23:57:45.172445 2438 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4459-2-4-n-3f94367fd3\" not found" node="ci-4459-2-4-n-3f94367fd3"
Apr 16 23:57:45.185233 kubelet[2438]: I0416 23:57:45.185154 2438 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-4-n-3f94367fd3"
Apr 16 23:57:45.193060 kubelet[2438]: I0416 23:57:45.192906 2438 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459-2-4-n-3f94367fd3"
Apr 16 23:57:45.193060 kubelet[2438]: E0416 23:57:45.192927 2438 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4459-2-4-n-3f94367fd3\": node \"ci-4459-2-4-n-3f94367fd3\" not found"
Apr 16 23:57:45.217975 kubelet[2438]: I0416 23:57:45.217849 2438 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459-2-4-n-3f94367fd3"
Apr 16 23:57:45.222055 kubelet[2438]: E0416 23:57:45.222025 2438 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459-2-4-n-3f94367fd3\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4459-2-4-n-3f94367fd3"
Apr 16 23:57:45.222439 kubelet[2438]: I0416 23:57:45.222140 2438 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459-2-4-n-3f94367fd3"
Apr 16 23:57:45.223979 kubelet[2438]: E0416 23:57:45.223962 2438 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459-2-4-n-3f94367fd3\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4459-2-4-n-3f94367fd3"
Apr 16 23:57:45.223979 kubelet[2438]: I0416 23:57:45.223978 2438 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459-2-4-n-3f94367fd3"
Apr 16 23:57:45.225419 kubelet[2438]: E0416 23:57:45.225402 2438 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459-2-4-n-3f94367fd3\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4459-2-4-n-3f94367fd3"
Apr 16 23:57:45.595605 kubelet[2438]: I0416 23:57:45.595443 2438 apiserver.go:52] "Watching apiserver"
Apr 16 23:57:45.618198 kubelet[2438]: I0416 23:57:45.618091 2438 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Apr 16 23:57:45.655448 kubelet[2438]: I0416 23:57:45.655286 2438 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459-2-4-n-3f94367fd3"
Apr 16 23:57:45.655761 kubelet[2438]: I0416 23:57:45.655493 2438 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459-2-4-n-3f94367fd3"
Apr 16 23:57:45.658876 kubelet[2438]: E0416 23:57:45.658585 2438 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459-2-4-n-3f94367fd3\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4459-2-4-n-3f94367fd3"
Apr 16 23:57:45.658876 kubelet[2438]: E0416 23:57:45.658621 2438 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459-2-4-n-3f94367fd3\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4459-2-4-n-3f94367fd3"
Apr 16 23:57:47.276867 systemd[1]: Reload requested from client PID 2715 ('systemctl') (unit session-9.scope)...
Apr 16 23:57:47.277262 systemd[1]: Reloading...
Apr 16 23:57:47.355154 zram_generator::config[2759]: No configuration found.
Apr 16 23:57:47.535381 systemd[1]: Reloading finished in 257 ms.
Apr 16 23:57:47.563151 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 16 23:57:47.576256 systemd[1]: kubelet.service: Deactivated successfully.
Apr 16 23:57:47.576517 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 23:57:47.576577 systemd[1]: kubelet.service: Consumed 643ms CPU time, 130.2M memory peak.
Apr 16 23:57:47.578208 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 16 23:57:47.744625 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 23:57:47.753422 (kubelet)[2810]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 16 23:57:47.783226 kubelet[2810]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 16 23:57:47.783226 kubelet[2810]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Apr 16 23:57:47.783226 kubelet[2810]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 16 23:57:47.783226 kubelet[2810]: I0416 23:57:47.783218 2810 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Apr 16 23:57:47.790906 kubelet[2810]: I0416 23:57:47.790748 2810 server.go:530] "Kubelet version" kubeletVersion="v1.33.8"
Apr 16 23:57:47.790906 kubelet[2810]: I0416 23:57:47.790768 2810 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 16 23:57:47.792228 kubelet[2810]: I0416 23:57:47.791455 2810 server.go:956] "Client rotation is on, will bootstrap in background"
Apr 16 23:57:47.792600 kubelet[2810]: I0416 23:57:47.792584 2810 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Apr 16 23:57:47.794630 kubelet[2810]: I0416 23:57:47.794607 2810 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 16 23:57:47.799549 kubelet[2810]: I0416 23:57:47.799531 2810 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Apr 16 23:57:47.804353 kubelet[2810]: I0416 23:57:47.804329 2810 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Apr 16 23:57:47.805295 kubelet[2810]: I0416 23:57:47.805259 2810 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 16 23:57:47.805543 kubelet[2810]: I0416 23:57:47.805281 2810 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459-2-4-n-3f94367fd3","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Apr 16 23:57:47.805543 kubelet[2810]: I0416 23:57:47.805541 2810 topology_manager.go:138] "Creating topology manager with none policy"
Apr 16 23:57:47.805734 kubelet[2810]: I0416 23:57:47.805551 2810 container_manager_linux.go:303] "Creating device plugin manager"
Apr 16 23:57:47.805734 kubelet[2810]: I0416 23:57:47.805599 2810 state_mem.go:36] "Initialized new in-memory state store"
Apr 16 23:57:47.805770 kubelet[2810]: I0416 23:57:47.805764 2810 kubelet.go:480] "Attempting to sync node with API server"
Apr 16 23:57:47.805770 kubelet[2810]: I0416 23:57:47.805773 2810 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 16 23:57:47.805817 kubelet[2810]: I0416 23:57:47.805802 2810 kubelet.go:386] "Adding apiserver pod source"
Apr 16 23:57:47.805817 kubelet[2810]: I0416 23:57:47.805815 2810 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 16 23:57:47.809135 kubelet[2810]: I0416 23:57:47.809098 2810 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1"
Apr 16 23:57:47.809475 kubelet[2810]: I0416 23:57:47.809462 2810 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Apr 16 23:57:47.813038 kubelet[2810]: I0416 23:57:47.811628 2810 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Apr 16 23:57:47.813038 kubelet[2810]: I0416 23:57:47.811663 2810 server.go:1289] "Started kubelet"
Apr 16 23:57:47.813038 kubelet[2810]: I0416 23:57:47.812867 2810 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Apr 16 23:57:47.820292 kubelet[2810]: I0416 23:57:47.820210 2810 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Apr 16 23:57:47.821785 kubelet[2810]: I0416 23:57:47.821075 2810 server.go:317] "Adding debug handlers to kubelet server"
Apr 16 23:57:47.825367 kubelet[2810]: I0416 23:57:47.824973 2810 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Apr 16 23:57:47.825673 kubelet[2810]: I0416 23:57:47.820336 2810 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Apr 16 23:57:47.825859 kubelet[2810]: I0416 23:57:47.825746 2810 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 16 23:57:47.826422 kubelet[2810]: I0416 23:57:47.826403 2810 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 16 23:57:47.826614 kubelet[2810]: E0416 23:57:47.820414 2810 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459-2-4-n-3f94367fd3\" not found"
Apr 16 23:57:47.827128 kubelet[2810]: I0416 23:57:47.826885 2810 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Apr 16 23:57:47.829832 kubelet[2810]: I0416 23:57:47.820329 2810 volume_manager.go:297] "Starting Kubelet Volume Manager"
Apr 16 23:57:47.829931 kubelet[2810]: I0416 23:57:47.825909 2810 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Apr 16 23:57:47.829974 kubelet[2810]: I0416 23:57:47.829967 2810 status_manager.go:230] "Starting to sync pod status with apiserver"
Apr 16 23:57:47.830008 kubelet[2810]: I0416 23:57:47.830002 2810 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Apr 16 23:57:47.830033 kubelet[2810]: I0416 23:57:47.830028 2810 kubelet.go:2436] "Starting kubelet main sync loop"
Apr 16 23:57:47.830093 kubelet[2810]: E0416 23:57:47.830084 2810 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 16 23:57:47.830470 kubelet[2810]: I0416 23:57:47.830460 2810 reconciler.go:26] "Reconciler: start to sync state"
Apr 16 23:57:47.831347 kubelet[2810]: I0416 23:57:47.831320 2810 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Apr 16 23:57:47.832958 kubelet[2810]: I0416 23:57:47.832944 2810 factory.go:223] Registration of the containerd container factory successfully
Apr 16 23:57:47.832958 kubelet[2810]: I0416 23:57:47.832956 2810 factory.go:223] Registration of the systemd container factory successfully
Apr 16 23:57:47.858622 kubelet[2810]: E0416 23:57:47.858596 2810 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Apr 16 23:57:47.885323 kubelet[2810]: I0416 23:57:47.885296 2810 cpu_manager.go:221] "Starting CPU manager" policy="none"
Apr 16 23:57:47.885323 kubelet[2810]: I0416 23:57:47.885312 2810 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Apr 16 23:57:47.885323 kubelet[2810]: I0416 23:57:47.885327 2810 state_mem.go:36] "Initialized new in-memory state store"
Apr 16 23:57:47.885492 kubelet[2810]: I0416 23:57:47.885427 2810 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Apr 16 23:57:47.885492 kubelet[2810]: I0416 23:57:47.885433 2810 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Apr 16 23:57:47.885492 kubelet[2810]: I0416 23:57:47.885447 2810 policy_none.go:49] "None policy: Start"
Apr 16 23:57:47.885492 kubelet[2810]: I0416 23:57:47.885454 2810 memory_manager.go:186] "Starting memorymanager" policy="None"
Apr 16 23:57:47.885492 kubelet[2810]: I0416 23:57:47.885462 2810 state_mem.go:35] "Initializing new in-memory state store"
Apr 16 23:57:47.885570 kubelet[2810]: I0416 23:57:47.885520 2810 state_mem.go:75] "Updated machine memory state"
Apr 16 23:57:47.888900 kubelet[2810]: E0416 23:57:47.888881 2810 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Apr 16 23:57:47.889018 kubelet[2810]: I0416 23:57:47.889005 2810 eviction_manager.go:189] "Eviction manager: starting control loop"
Apr 16 23:57:47.889038 kubelet[2810]: I0416 23:57:47.889015 2810 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Apr 16 23:57:47.889379 kubelet[2810]: I0416 23:57:47.889369 2810 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Apr 16 23:57:47.890940 kubelet[2810]: E0416 23:57:47.890925 2810 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Apr 16 23:57:47.931708 kubelet[2810]: I0416 23:57:47.931540 2810 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459-2-4-n-3f94367fd3"
Apr 16 23:57:47.931708 kubelet[2810]: I0416 23:57:47.931609 2810 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459-2-4-n-3f94367fd3"
Apr 16 23:57:47.931708 kubelet[2810]: I0416 23:57:47.931543 2810 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459-2-4-n-3f94367fd3"
Apr 16 23:57:47.993280 kubelet[2810]: I0416 23:57:47.993246 2810 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-4-n-3f94367fd3"
Apr 16 23:57:48.001835 kubelet[2810]: I0416 23:57:48.001781 2810 kubelet_node_status.go:124] "Node was previously registered" node="ci-4459-2-4-n-3f94367fd3"
Apr 16 23:57:48.002001 kubelet[2810]: I0416 23:57:48.001962 2810 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459-2-4-n-3f94367fd3"
Apr 16 23:57:48.031719 kubelet[2810]: I0416 23:57:48.031664 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/489920b315fb52d002027ba533ed98f2-kubeconfig\") pod \"kube-controller-manager-ci-4459-2-4-n-3f94367fd3\" (UID: \"489920b315fb52d002027ba533ed98f2\") " pod="kube-system/kube-controller-manager-ci-4459-2-4-n-3f94367fd3"
Apr 16 23:57:48.031719 kubelet[2810]: I0416 23:57:48.031712 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f9787872430400989d79b7f211f8573c-kubeconfig\") pod \"kube-scheduler-ci-4459-2-4-n-3f94367fd3\" (UID: \"f9787872430400989d79b7f211f8573c\") " pod="kube-system/kube-scheduler-ci-4459-2-4-n-3f94367fd3"
Apr 16 23:57:48.031918 kubelet[2810]: I0416 23:57:48.031738 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/489920b315fb52d002027ba533ed98f2-ca-certs\") pod \"kube-controller-manager-ci-4459-2-4-n-3f94367fd3\" (UID: \"489920b315fb52d002027ba533ed98f2\") " pod="kube-system/kube-controller-manager-ci-4459-2-4-n-3f94367fd3"
Apr 16 23:57:48.031918 kubelet[2810]: I0416 23:57:48.031764 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/489920b315fb52d002027ba533ed98f2-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459-2-4-n-3f94367fd3\" (UID: \"489920b315fb52d002027ba533ed98f2\") " pod="kube-system/kube-controller-manager-ci-4459-2-4-n-3f94367fd3"
Apr 16 23:57:48.031918 kubelet[2810]: I0416 23:57:48.031801 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9514f271368628a7f40a1c49d6262c57-ca-certs\") pod \"kube-apiserver-ci-4459-2-4-n-3f94367fd3\" (UID: \"9514f271368628a7f40a1c49d6262c57\") " pod="kube-system/kube-apiserver-ci-4459-2-4-n-3f94367fd3"
Apr 16 23:57:48.031918 kubelet[2810]: I0416 23:57:48.031826 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9514f271368628a7f40a1c49d6262c57-k8s-certs\") pod \"kube-apiserver-ci-4459-2-4-n-3f94367fd3\" (UID: \"9514f271368628a7f40a1c49d6262c57\") " pod="kube-system/kube-apiserver-ci-4459-2-4-n-3f94367fd3"
Apr 16 23:57:48.031918 kubelet[2810]: I0416 23:57:48.031850 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9514f271368628a7f40a1c49d6262c57-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459-2-4-n-3f94367fd3\" (UID: \"9514f271368628a7f40a1c49d6262c57\") " pod="kube-system/kube-apiserver-ci-4459-2-4-n-3f94367fd3"
Apr 16 23:57:48.032168 kubelet[2810]: I0416 23:57:48.031873 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/489920b315fb52d002027ba533ed98f2-flexvolume-dir\") pod \"kube-controller-manager-ci-4459-2-4-n-3f94367fd3\" (UID: \"489920b315fb52d002027ba533ed98f2\") " pod="kube-system/kube-controller-manager-ci-4459-2-4-n-3f94367fd3"
Apr 16 23:57:48.032168 kubelet[2810]: I0416 23:57:48.031894 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/489920b315fb52d002027ba533ed98f2-k8s-certs\") pod \"kube-controller-manager-ci-4459-2-4-n-3f94367fd3\" (UID: \"489920b315fb52d002027ba533ed98f2\") " pod="kube-system/kube-controller-manager-ci-4459-2-4-n-3f94367fd3"
Apr 16 23:57:48.281548 sudo[2846]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Apr 16 23:57:48.283008 sudo[2846]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Apr 16 23:57:48.548318 sudo[2846]: pam_unix(sudo:session): session closed for user root
Apr 16 23:57:48.808694 kubelet[2810]: I0416 23:57:48.808558 2810 apiserver.go:52] "Watching apiserver"
Apr 16 23:57:48.827133 kubelet[2810]: I0416 23:57:48.826784 2810 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Apr 16 23:57:48.872310 kubelet[2810]: I0416 23:57:48.872236 2810 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459-2-4-n-3f94367fd3"
Apr 16 23:57:48.873106 kubelet[2810]: I0416 23:57:48.873094 2810 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459-2-4-n-3f94367fd3"
Apr 16 23:57:48.877752 kubelet[2810]: E0416 23:57:48.877724 2810 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459-2-4-n-3f94367fd3\" already exists" pod="kube-system/kube-scheduler-ci-4459-2-4-n-3f94367fd3"
Apr 16 23:57:48.879852 kubelet[2810]: E0416 23:57:48.879741 2810 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459-2-4-n-3f94367fd3\" already exists" pod="kube-system/kube-apiserver-ci-4459-2-4-n-3f94367fd3"
Apr 16 23:57:48.892626 kubelet[2810]: I0416 23:57:48.892577 2810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4459-2-4-n-3f94367fd3" podStartSLOduration=1.892564428 podStartE2EDuration="1.892564428s" podCreationTimestamp="2026-04-16 23:57:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 23:57:48.892550541 +0000 UTC m=+1.134925146" watchObservedRunningTime="2026-04-16 23:57:48.892564428 +0000 UTC m=+1.134939033"
Apr 16 23:57:48.909638 kubelet[2810]: I0416 23:57:48.909323 2810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4459-2-4-n-3f94367fd3" podStartSLOduration=1.909309197 podStartE2EDuration="1.909309197s" podCreationTimestamp="2026-04-16 23:57:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 23:57:48.908287116 +0000 UTC m=+1.150661721" watchObservedRunningTime="2026-04-16 23:57:48.909309197 +0000 UTC m=+1.151683812"
Apr 16 23:57:48.909638 kubelet[2810]: I0416 23:57:48.909386 2810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4459-2-4-n-3f94367fd3" podStartSLOduration=1.909383485 podStartE2EDuration="1.909383485s" podCreationTimestamp="2026-04-16 23:57:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 23:57:48.901347695 +0000 UTC m=+1.143722310" watchObservedRunningTime="2026-04-16 23:57:48.909383485 +0000 UTC m=+1.151758110"
Apr 16 23:57:49.706769 sudo[1880]: pam_unix(sudo:session): session closed for user root
Apr 16 23:57:49.736663 sshd[1879]: Connection closed by 4.175.71.9 port 57938
Apr 16 23:57:49.737345 sshd-session[1876]: pam_unix(sshd:session): session closed for user core
Apr 16 23:57:49.743025 systemd[1]: sshd@8-77.42.22.14:22-4.175.71.9:57938.service: Deactivated successfully.
Apr 16 23:57:49.746023 systemd[1]: session-9.scope: Deactivated successfully.
Apr 16 23:57:49.746362 systemd[1]: session-9.scope: Consumed 3.693s CPU time, 273.7M memory peak.
Apr 16 23:57:49.747562 systemd-logind[1606]: Session 9 logged out. Waiting for processes to exit.
Apr 16 23:57:49.748921 systemd-logind[1606]: Removed session 9.
Apr 16 23:57:53.824156 kubelet[2810]: I0416 23:57:53.824088 2810 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Apr 16 23:57:53.825301 kubelet[2810]: I0416 23:57:53.825105 2810 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Apr 16 23:57:53.825360 containerd[1631]: time="2026-04-16T23:57:53.824748242Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Apr 16 23:57:54.642301 systemd[1]: Created slice kubepods-besteffort-podf30135ab_7daf_4eea_a83d_ee340f67cfb0.slice - libcontainer container kubepods-besteffort-podf30135ab_7daf_4eea_a83d_ee340f67cfb0.slice.
Apr 16 23:57:54.661942 systemd[1]: Created slice kubepods-burstable-pod6665b3da_59b1_4822_b674_5cad17d5fa80.slice - libcontainer container kubepods-burstable-pod6665b3da_59b1_4822_b674_5cad17d5fa80.slice.
Apr 16 23:57:54.678013 kubelet[2810]: I0416 23:57:54.677977 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f30135ab-7daf-4eea-a83d-ee340f67cfb0-kube-proxy\") pod \"kube-proxy-w9njr\" (UID: \"f30135ab-7daf-4eea-a83d-ee340f67cfb0\") " pod="kube-system/kube-proxy-w9njr"
Apr 16 23:57:54.678013 kubelet[2810]: I0416 23:57:54.678006 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f30135ab-7daf-4eea-a83d-ee340f67cfb0-xtables-lock\") pod \"kube-proxy-w9njr\" (UID: \"f30135ab-7daf-4eea-a83d-ee340f67cfb0\") " pod="kube-system/kube-proxy-w9njr"
Apr 16 23:57:54.678013 kubelet[2810]: I0416 23:57:54.678018 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f30135ab-7daf-4eea-a83d-ee340f67cfb0-lib-modules\") pod \"kube-proxy-w9njr\" (UID: \"f30135ab-7daf-4eea-a83d-ee340f67cfb0\") " pod="kube-system/kube-proxy-w9njr"
Apr 16 23:57:54.678371 kubelet[2810]: I0416 23:57:54.678031 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkxnh\" (UniqueName: \"kubernetes.io/projected/f30135ab-7daf-4eea-a83d-ee340f67cfb0-kube-api-access-gkxnh\") pod \"kube-proxy-w9njr\" (UID: \"f30135ab-7daf-4eea-a83d-ee340f67cfb0\") " pod="kube-system/kube-proxy-w9njr"
Apr 16 23:57:54.678371 kubelet[2810]: I0416 23:57:54.678044 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6665b3da-59b1-4822-b674-5cad17d5fa80-cilium-cgroup\") pod \"cilium-qczzs\" (UID: \"6665b3da-59b1-4822-b674-5cad17d5fa80\") " pod="kube-system/cilium-qczzs"
Apr 16 23:57:54.678371 kubelet[2810]: I0416 23:57:54.678055 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6665b3da-59b1-4822-b674-5cad17d5fa80-cni-path\") pod \"cilium-qczzs\" (UID: \"6665b3da-59b1-4822-b674-5cad17d5fa80\") " pod="kube-system/cilium-qczzs"
Apr 16 23:57:54.678371 kubelet[2810]: I0416 23:57:54.678064 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6665b3da-59b1-4822-b674-5cad17d5fa80-lib-modules\") pod \"cilium-qczzs\" (UID: \"6665b3da-59b1-4822-b674-5cad17d5fa80\") " pod="kube-system/cilium-qczzs"
Apr 16 23:57:54.678371 kubelet[2810]: I0416 23:57:54.678075 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6665b3da-59b1-4822-b674-5cad17d5fa80-clustermesh-secrets\") pod \"cilium-qczzs\" (UID: \"6665b3da-59b1-4822-b674-5cad17d5fa80\") " pod="kube-system/cilium-qczzs"
Apr 16 23:57:54.678459 kubelet[2810]: I0416 23:57:54.678087 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6665b3da-59b1-4822-b674-5cad17d5fa80-cilium-config-path\") pod \"cilium-qczzs\" (UID: \"6665b3da-59b1-4822-b674-5cad17d5fa80\") " pod="kube-system/cilium-qczzs"
Apr 16 23:57:54.678459 kubelet[2810]: I0416 23:57:54.678100 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6665b3da-59b1-4822-b674-5cad17d5fa80-hostproc\") pod \"cilium-qczzs\" (UID: \"6665b3da-59b1-4822-b674-5cad17d5fa80\") " pod="kube-system/cilium-qczzs"
Apr 16 23:57:54.678459 kubelet[2810]: I0416 23:57:54.678252 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6665b3da-59b1-4822-b674-5cad17d5fa80-etc-cni-netd\") pod \"cilium-qczzs\" (UID: \"6665b3da-59b1-4822-b674-5cad17d5fa80\") " pod="kube-system/cilium-qczzs"
Apr 16 23:57:54.678459 kubelet[2810]: I0416 23:57:54.678274 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6665b3da-59b1-4822-b674-5cad17d5fa80-hubble-tls\") pod \"cilium-qczzs\" (UID: \"6665b3da-59b1-4822-b674-5cad17d5fa80\") " pod="kube-system/cilium-qczzs"
Apr 16 23:57:54.678459 kubelet[2810]: I0416 23:57:54.678284 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxwqb\" (UniqueName: \"kubernetes.io/projected/6665b3da-59b1-4822-b674-5cad17d5fa80-kube-api-access-lxwqb\") pod \"cilium-qczzs\" (UID: \"6665b3da-59b1-4822-b674-5cad17d5fa80\") " pod="kube-system/cilium-qczzs"
Apr 16 23:57:54.678550 kubelet[2810]: I0416 23:57:54.678294 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6665b3da-59b1-4822-b674-5cad17d5fa80-cilium-run\") pod \"cilium-qczzs\" (UID: \"6665b3da-59b1-4822-b674-5cad17d5fa80\") " pod="kube-system/cilium-qczzs"
Apr 16 23:57:54.678550 kubelet[2810]: I0416 23:57:54.678490 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6665b3da-59b1-4822-b674-5cad17d5fa80-bpf-maps\") pod \"cilium-qczzs\" (UID: \"6665b3da-59b1-4822-b674-5cad17d5fa80\") " pod="kube-system/cilium-qczzs"
Apr 16 23:57:54.678550 kubelet[2810]: I0416 23:57:54.678502 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6665b3da-59b1-4822-b674-5cad17d5fa80-xtables-lock\") pod \"cilium-qczzs\" (UID: \"6665b3da-59b1-4822-b674-5cad17d5fa80\") " pod="kube-system/cilium-qczzs"
Apr 16 23:57:54.678603 kubelet[2810]: I0416 23:57:54.678555 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6665b3da-59b1-4822-b674-5cad17d5fa80-host-proc-sys-net\") pod \"cilium-qczzs\" (UID: \"6665b3da-59b1-4822-b674-5cad17d5fa80\") " pod="kube-system/cilium-qczzs"
Apr 16 23:57:54.678603 kubelet[2810]: I0416 23:57:54.678567 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6665b3da-59b1-4822-b674-5cad17d5fa80-host-proc-sys-kernel\") pod \"cilium-qczzs\" (UID: \"6665b3da-59b1-4822-b674-5cad17d5fa80\") " pod="kube-system/cilium-qczzs"
Apr 16 23:57:54.790175 kubelet[2810]: E0416 23:57:54.789584 2810 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Apr 16 23:57:54.793148 kubelet[2810]: E0416 23:57:54.791181 2810 projected.go:194] Error preparing data for projected volume kube-api-access-gkxnh for pod kube-system/kube-proxy-w9njr: configmap "kube-root-ca.crt" not found
Apr 16 23:57:54.793148 kubelet[2810]: E0416 23:57:54.791301 2810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f30135ab-7daf-4eea-a83d-ee340f67cfb0-kube-api-access-gkxnh podName:f30135ab-7daf-4eea-a83d-ee340f67cfb0 nodeName:}" failed. No retries permitted until 2026-04-16 23:57:55.291256132 +0000 UTC m=+7.533630777 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-gkxnh" (UniqueName: "kubernetes.io/projected/f30135ab-7daf-4eea-a83d-ee340f67cfb0-kube-api-access-gkxnh") pod "kube-proxy-w9njr" (UID: "f30135ab-7daf-4eea-a83d-ee340f67cfb0") : configmap "kube-root-ca.crt" not found
Apr 16 23:57:54.794223 kubelet[2810]: E0416 23:57:54.794198 2810 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Apr 16 23:57:54.794313 kubelet[2810]: E0416 23:57:54.794300 2810 projected.go:194] Error preparing data for projected volume kube-api-access-lxwqb for pod kube-system/cilium-qczzs: configmap "kube-root-ca.crt" not found
Apr 16 23:57:54.794406 kubelet[2810]: E0416 23:57:54.794394 2810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6665b3da-59b1-4822-b674-5cad17d5fa80-kube-api-access-lxwqb podName:6665b3da-59b1-4822-b674-5cad17d5fa80 nodeName:}" failed. No retries permitted until 2026-04-16 23:57:55.294378707 +0000 UTC m=+7.536753332 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-lxwqb" (UniqueName: "kubernetes.io/projected/6665b3da-59b1-4822-b674-5cad17d5fa80-kube-api-access-lxwqb") pod "cilium-qczzs" (UID: "6665b3da-59b1-4822-b674-5cad17d5fa80") : configmap "kube-root-ca.crt" not found
Apr 16 23:57:55.016099 systemd[1]: Created slice kubepods-besteffort-pod94324002_ccbd_4870_8023_0b10f455c0e6.slice - libcontainer container kubepods-besteffort-pod94324002_ccbd_4870_8023_0b10f455c0e6.slice.
Apr 16 23:57:55.080692 kubelet[2810]: I0416 23:57:55.080603 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/94324002-ccbd-4870-8023-0b10f455c0e6-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-btzm9\" (UID: \"94324002-ccbd-4870-8023-0b10f455c0e6\") " pod="kube-system/cilium-operator-6c4d7847fc-btzm9" Apr 16 23:57:55.080692 kubelet[2810]: I0416 23:57:55.080656 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4kt9\" (UniqueName: \"kubernetes.io/projected/94324002-ccbd-4870-8023-0b10f455c0e6-kube-api-access-w4kt9\") pod \"cilium-operator-6c4d7847fc-btzm9\" (UID: \"94324002-ccbd-4870-8023-0b10f455c0e6\") " pod="kube-system/cilium-operator-6c4d7847fc-btzm9" Apr 16 23:57:55.319601 containerd[1631]: time="2026-04-16T23:57:55.319358260Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-btzm9,Uid:94324002-ccbd-4870-8023-0b10f455c0e6,Namespace:kube-system,Attempt:0,}" Apr 16 23:57:55.354670 containerd[1631]: time="2026-04-16T23:57:55.353616327Z" level=info msg="connecting to shim 71a040ad5a56a6f893437f67b5ec47090ddcf52a435e388c7a47962efa3753af" address="unix:///run/containerd/s/fc474753c2e60c21531a904e2d6d9bafc66069080bd123abc0aa0b8f418ff9c3" namespace=k8s.io protocol=ttrpc version=3 Apr 16 23:57:55.394560 systemd[1]: Started cri-containerd-71a040ad5a56a6f893437f67b5ec47090ddcf52a435e388c7a47962efa3753af.scope - libcontainer container 71a040ad5a56a6f893437f67b5ec47090ddcf52a435e388c7a47962efa3753af. 
Apr 16 23:57:55.443356 containerd[1631]: time="2026-04-16T23:57:55.443316926Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-btzm9,Uid:94324002-ccbd-4870-8023-0b10f455c0e6,Namespace:kube-system,Attempt:0,} returns sandbox id \"71a040ad5a56a6f893437f67b5ec47090ddcf52a435e388c7a47962efa3753af\"" Apr 16 23:57:55.445075 containerd[1631]: time="2026-04-16T23:57:55.445037338Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Apr 16 23:57:55.551874 containerd[1631]: time="2026-04-16T23:57:55.551741711Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-w9njr,Uid:f30135ab-7daf-4eea-a83d-ee340f67cfb0,Namespace:kube-system,Attempt:0,}" Apr 16 23:57:55.568612 containerd[1631]: time="2026-04-16T23:57:55.568496997Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qczzs,Uid:6665b3da-59b1-4822-b674-5cad17d5fa80,Namespace:kube-system,Attempt:0,}" Apr 16 23:57:55.591347 containerd[1631]: time="2026-04-16T23:57:55.591103736Z" level=info msg="connecting to shim 244ff72bc81a2fae8a68d721e1094ec1f74f44df66d56be4331d7da94d03010d" address="unix:///run/containerd/s/41469e8042e30f636f54219a9cb8f56dc63416cf5f5f3725db577b329f33f8d1" namespace=k8s.io protocol=ttrpc version=3 Apr 16 23:57:55.598183 containerd[1631]: time="2026-04-16T23:57:55.598081845Z" level=info msg="connecting to shim cad2566d1dabb86801d5361fa53d42c485141d0c1dedff3047a278f1fbf87ada" address="unix:///run/containerd/s/c5ab294174418f4162721ff8c234249b739fd2a20ffce1ce746b058777c0d8cf" namespace=k8s.io protocol=ttrpc version=3 Apr 16 23:57:55.634238 systemd[1]: Started cri-containerd-244ff72bc81a2fae8a68d721e1094ec1f74f44df66d56be4331d7da94d03010d.scope - libcontainer container 244ff72bc81a2fae8a68d721e1094ec1f74f44df66d56be4331d7da94d03010d. 
Apr 16 23:57:55.638132 systemd[1]: Started cri-containerd-cad2566d1dabb86801d5361fa53d42c485141d0c1dedff3047a278f1fbf87ada.scope - libcontainer container cad2566d1dabb86801d5361fa53d42c485141d0c1dedff3047a278f1fbf87ada. Apr 16 23:57:55.669459 containerd[1631]: time="2026-04-16T23:57:55.669400856Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qczzs,Uid:6665b3da-59b1-4822-b674-5cad17d5fa80,Namespace:kube-system,Attempt:0,} returns sandbox id \"cad2566d1dabb86801d5361fa53d42c485141d0c1dedff3047a278f1fbf87ada\"" Apr 16 23:57:55.670736 containerd[1631]: time="2026-04-16T23:57:55.670697680Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-w9njr,Uid:f30135ab-7daf-4eea-a83d-ee340f67cfb0,Namespace:kube-system,Attempt:0,} returns sandbox id \"244ff72bc81a2fae8a68d721e1094ec1f74f44df66d56be4331d7da94d03010d\"" Apr 16 23:57:55.676326 containerd[1631]: time="2026-04-16T23:57:55.676300355Z" level=info msg="CreateContainer within sandbox \"244ff72bc81a2fae8a68d721e1094ec1f74f44df66d56be4331d7da94d03010d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 16 23:57:55.685483 containerd[1631]: time="2026-04-16T23:57:55.685445165Z" level=info msg="Container d464a2495d2fbcbb315a752d20f2e668425b39375ddc45897abf554a48ee1cfc: CDI devices from CRI Config.CDIDevices: []" Apr 16 23:57:55.691666 containerd[1631]: time="2026-04-16T23:57:55.691621177Z" level=info msg="CreateContainer within sandbox \"244ff72bc81a2fae8a68d721e1094ec1f74f44df66d56be4331d7da94d03010d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d464a2495d2fbcbb315a752d20f2e668425b39375ddc45897abf554a48ee1cfc\"" Apr 16 23:57:55.692221 containerd[1631]: time="2026-04-16T23:57:55.692205714Z" level=info msg="StartContainer for \"d464a2495d2fbcbb315a752d20f2e668425b39375ddc45897abf554a48ee1cfc\"" Apr 16 23:57:55.693279 containerd[1631]: time="2026-04-16T23:57:55.693239400Z" level=info msg="connecting to shim 
d464a2495d2fbcbb315a752d20f2e668425b39375ddc45897abf554a48ee1cfc" address="unix:///run/containerd/s/41469e8042e30f636f54219a9cb8f56dc63416cf5f5f3725db577b329f33f8d1" protocol=ttrpc version=3 Apr 16 23:57:55.712246 systemd[1]: Started cri-containerd-d464a2495d2fbcbb315a752d20f2e668425b39375ddc45897abf554a48ee1cfc.scope - libcontainer container d464a2495d2fbcbb315a752d20f2e668425b39375ddc45897abf554a48ee1cfc. Apr 16 23:57:55.771214 containerd[1631]: time="2026-04-16T23:57:55.771166852Z" level=info msg="StartContainer for \"d464a2495d2fbcbb315a752d20f2e668425b39375ddc45897abf554a48ee1cfc\" returns successfully" Apr 16 23:57:57.189349 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount163425609.mount: Deactivated successfully. Apr 16 23:57:57.643870 containerd[1631]: time="2026-04-16T23:57:57.643762880Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:57:57.644873 containerd[1631]: time="2026-04-16T23:57:57.644852146Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Apr 16 23:57:57.645768 containerd[1631]: time="2026-04-16T23:57:57.645732062Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:57:57.646739 containerd[1631]: time="2026-04-16T23:57:57.646442070Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size 
\"18897442\" in 2.201383812s" Apr 16 23:57:57.646739 containerd[1631]: time="2026-04-16T23:57:57.646464559Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Apr 16 23:57:57.647203 containerd[1631]: time="2026-04-16T23:57:57.647190337Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Apr 16 23:57:57.649853 containerd[1631]: time="2026-04-16T23:57:57.649832236Z" level=info msg="CreateContainer within sandbox \"71a040ad5a56a6f893437f67b5ec47090ddcf52a435e388c7a47962efa3753af\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Apr 16 23:57:57.659145 containerd[1631]: time="2026-04-16T23:57:57.657074967Z" level=info msg="Container fb40f21b6fc0d21348a675b407f69b743f67bf28fd1a67be8d070ae4d1f1fdde: CDI devices from CRI Config.CDIDevices: []" Apr 16 23:57:57.672806 containerd[1631]: time="2026-04-16T23:57:57.672764694Z" level=info msg="CreateContainer within sandbox \"71a040ad5a56a6f893437f67b5ec47090ddcf52a435e388c7a47962efa3753af\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"fb40f21b6fc0d21348a675b407f69b743f67bf28fd1a67be8d070ae4d1f1fdde\"" Apr 16 23:57:57.674177 containerd[1631]: time="2026-04-16T23:57:57.674154598Z" level=info msg="StartContainer for \"fb40f21b6fc0d21348a675b407f69b743f67bf28fd1a67be8d070ae4d1f1fdde\"" Apr 16 23:57:57.674752 containerd[1631]: time="2026-04-16T23:57:57.674736516Z" level=info msg="connecting to shim fb40f21b6fc0d21348a675b407f69b743f67bf28fd1a67be8d070ae4d1f1fdde" address="unix:///run/containerd/s/fc474753c2e60c21531a904e2d6d9bafc66069080bd123abc0aa0b8f418ff9c3" protocol=ttrpc version=3 Apr 16 23:57:57.695227 systemd[1]: Started 
cri-containerd-fb40f21b6fc0d21348a675b407f69b743f67bf28fd1a67be8d070ae4d1f1fdde.scope - libcontainer container fb40f21b6fc0d21348a675b407f69b743f67bf28fd1a67be8d070ae4d1f1fdde. Apr 16 23:57:57.721217 containerd[1631]: time="2026-04-16T23:57:57.721190200Z" level=info msg="StartContainer for \"fb40f21b6fc0d21348a675b407f69b743f67bf28fd1a67be8d070ae4d1f1fdde\" returns successfully" Apr 16 23:57:57.908183 kubelet[2810]: I0416 23:57:57.907965 2810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-w9njr" podStartSLOduration=3.907952682 podStartE2EDuration="3.907952682s" podCreationTimestamp="2026-04-16 23:57:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 23:57:55.901029372 +0000 UTC m=+8.143404007" watchObservedRunningTime="2026-04-16 23:57:57.907952682 +0000 UTC m=+10.150327287" Apr 16 23:57:58.996185 kubelet[2810]: I0416 23:57:58.995995 2810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-btzm9" podStartSLOduration=2.793488146 podStartE2EDuration="4.995976593s" podCreationTimestamp="2026-04-16 23:57:54 +0000 UTC" firstStartedPulling="2026-04-16 23:57:55.44460333 +0000 UTC m=+7.686977945" lastFinishedPulling="2026-04-16 23:57:57.647091787 +0000 UTC m=+9.889466392" observedRunningTime="2026-04-16 23:57:57.908982388 +0000 UTC m=+10.151356993" watchObservedRunningTime="2026-04-16 23:57:58.995976593 +0000 UTC m=+11.238351238" Apr 16 23:58:02.301475 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3947736278.mount: Deactivated successfully. Apr 16 23:58:02.719207 update_engine[1607]: I20260416 23:58:02.719158 1607 update_attempter.cc:509] Updating boot flags... 
Apr 16 23:58:03.823038 containerd[1631]: time="2026-04-16T23:58:03.822986572Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:58:03.824152 containerd[1631]: time="2026-04-16T23:58:03.824010719Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Apr 16 23:58:03.825067 containerd[1631]: time="2026-04-16T23:58:03.825028956Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:58:03.826130 containerd[1631]: time="2026-04-16T23:58:03.825950313Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 6.178685297s" Apr 16 23:58:03.826130 containerd[1631]: time="2026-04-16T23:58:03.825985873Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Apr 16 23:58:03.830227 containerd[1631]: time="2026-04-16T23:58:03.830179721Z" level=info msg="CreateContainer within sandbox \"cad2566d1dabb86801d5361fa53d42c485141d0c1dedff3047a278f1fbf87ada\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Apr 16 23:58:03.837860 containerd[1631]: time="2026-04-16T23:58:03.837429550Z" level=info msg="Container f5283733a8b7edef763d6fa51a9eaee7be0f6de0c67c4eca4e632eafa0c0e59a: CDI 
devices from CRI Config.CDIDevices: []" Apr 16 23:58:03.843400 containerd[1631]: time="2026-04-16T23:58:03.843377092Z" level=info msg="CreateContainer within sandbox \"cad2566d1dabb86801d5361fa53d42c485141d0c1dedff3047a278f1fbf87ada\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"f5283733a8b7edef763d6fa51a9eaee7be0f6de0c67c4eca4e632eafa0c0e59a\"" Apr 16 23:58:03.844665 containerd[1631]: time="2026-04-16T23:58:03.844645609Z" level=info msg="StartContainer for \"f5283733a8b7edef763d6fa51a9eaee7be0f6de0c67c4eca4e632eafa0c0e59a\"" Apr 16 23:58:03.845585 containerd[1631]: time="2026-04-16T23:58:03.845555206Z" level=info msg="connecting to shim f5283733a8b7edef763d6fa51a9eaee7be0f6de0c67c4eca4e632eafa0c0e59a" address="unix:///run/containerd/s/c5ab294174418f4162721ff8c234249b739fd2a20ffce1ce746b058777c0d8cf" protocol=ttrpc version=3 Apr 16 23:58:03.866255 systemd[1]: Started cri-containerd-f5283733a8b7edef763d6fa51a9eaee7be0f6de0c67c4eca4e632eafa0c0e59a.scope - libcontainer container f5283733a8b7edef763d6fa51a9eaee7be0f6de0c67c4eca4e632eafa0c0e59a. Apr 16 23:58:03.895521 containerd[1631]: time="2026-04-16T23:58:03.895481380Z" level=info msg="StartContainer for \"f5283733a8b7edef763d6fa51a9eaee7be0f6de0c67c4eca4e632eafa0c0e59a\" returns successfully" Apr 16 23:58:03.902588 systemd[1]: cri-containerd-f5283733a8b7edef763d6fa51a9eaee7be0f6de0c67c4eca4e632eafa0c0e59a.scope: Deactivated successfully. Apr 16 23:58:03.904868 containerd[1631]: time="2026-04-16T23:58:03.904822262Z" level=info msg="received container exit event container_id:\"f5283733a8b7edef763d6fa51a9eaee7be0f6de0c67c4eca4e632eafa0c0e59a\" id:\"f5283733a8b7edef763d6fa51a9eaee7be0f6de0c67c4eca4e632eafa0c0e59a\" pid:3301 exited_at:{seconds:1776383883 nanos:904321344}" Apr 16 23:58:03.928820 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f5283733a8b7edef763d6fa51a9eaee7be0f6de0c67c4eca4e632eafa0c0e59a-rootfs.mount: Deactivated successfully. 
Apr 16 23:58:04.926614 containerd[1631]: time="2026-04-16T23:58:04.926524570Z" level=info msg="CreateContainer within sandbox \"cad2566d1dabb86801d5361fa53d42c485141d0c1dedff3047a278f1fbf87ada\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Apr 16 23:58:04.942227 containerd[1631]: time="2026-04-16T23:58:04.940093482Z" level=info msg="Container 95812d222104ab8a15c703c00f997b2cebc7852d49c1582cb2a01f147070aa8e: CDI devices from CRI Config.CDIDevices: []" Apr 16 23:58:04.962715 containerd[1631]: time="2026-04-16T23:58:04.962643769Z" level=info msg="CreateContainer within sandbox \"cad2566d1dabb86801d5361fa53d42c485141d0c1dedff3047a278f1fbf87ada\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"95812d222104ab8a15c703c00f997b2cebc7852d49c1582cb2a01f147070aa8e\"" Apr 16 23:58:04.963340 containerd[1631]: time="2026-04-16T23:58:04.963315197Z" level=info msg="StartContainer for \"95812d222104ab8a15c703c00f997b2cebc7852d49c1582cb2a01f147070aa8e\"" Apr 16 23:58:04.964892 containerd[1631]: time="2026-04-16T23:58:04.964762473Z" level=info msg="connecting to shim 95812d222104ab8a15c703c00f997b2cebc7852d49c1582cb2a01f147070aa8e" address="unix:///run/containerd/s/c5ab294174418f4162721ff8c234249b739fd2a20ffce1ce746b058777c0d8cf" protocol=ttrpc version=3 Apr 16 23:58:04.995217 systemd[1]: Started cri-containerd-95812d222104ab8a15c703c00f997b2cebc7852d49c1582cb2a01f147070aa8e.scope - libcontainer container 95812d222104ab8a15c703c00f997b2cebc7852d49c1582cb2a01f147070aa8e. Apr 16 23:58:05.032689 containerd[1631]: time="2026-04-16T23:58:05.032636738Z" level=info msg="StartContainer for \"95812d222104ab8a15c703c00f997b2cebc7852d49c1582cb2a01f147070aa8e\" returns successfully" Apr 16 23:58:05.047473 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 16 23:58:05.047635 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. 
Apr 16 23:58:05.049424 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Apr 16 23:58:05.051923 containerd[1631]: time="2026-04-16T23:58:05.051715567Z" level=info msg="received container exit event container_id:\"95812d222104ab8a15c703c00f997b2cebc7852d49c1582cb2a01f147070aa8e\" id:\"95812d222104ab8a15c703c00f997b2cebc7852d49c1582cb2a01f147070aa8e\" pid:3345 exited_at:{seconds:1776383885 nanos:50880710}" Apr 16 23:58:05.052277 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 16 23:58:05.054482 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Apr 16 23:58:05.054946 systemd[1]: cri-containerd-95812d222104ab8a15c703c00f997b2cebc7852d49c1582cb2a01f147070aa8e.scope: Deactivated successfully. Apr 16 23:58:05.075334 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 16 23:58:05.938477 containerd[1631]: time="2026-04-16T23:58:05.938389813Z" level=info msg="CreateContainer within sandbox \"cad2566d1dabb86801d5361fa53d42c485141d0c1dedff3047a278f1fbf87ada\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Apr 16 23:58:05.955439 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-95812d222104ab8a15c703c00f997b2cebc7852d49c1582cb2a01f147070aa8e-rootfs.mount: Deactivated successfully. Apr 16 23:58:05.978753 containerd[1631]: time="2026-04-16T23:58:05.978274097Z" level=info msg="Container 8b08c3198347dcf3b3aca9b30baeb809599844b8b7f4b663ca814978b051941f: CDI devices from CRI Config.CDIDevices: []" Apr 16 23:58:05.983219 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2591266973.mount: Deactivated successfully. 
Apr 16 23:58:05.988873 containerd[1631]: time="2026-04-16T23:58:05.988837129Z" level=info msg="CreateContainer within sandbox \"cad2566d1dabb86801d5361fa53d42c485141d0c1dedff3047a278f1fbf87ada\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"8b08c3198347dcf3b3aca9b30baeb809599844b8b7f4b663ca814978b051941f\"" Apr 16 23:58:05.989488 containerd[1631]: time="2026-04-16T23:58:05.989457037Z" level=info msg="StartContainer for \"8b08c3198347dcf3b3aca9b30baeb809599844b8b7f4b663ca814978b051941f\"" Apr 16 23:58:05.990989 containerd[1631]: time="2026-04-16T23:58:05.990892913Z" level=info msg="connecting to shim 8b08c3198347dcf3b3aca9b30baeb809599844b8b7f4b663ca814978b051941f" address="unix:///run/containerd/s/c5ab294174418f4162721ff8c234249b739fd2a20ffce1ce746b058777c0d8cf" protocol=ttrpc version=3 Apr 16 23:58:06.011260 systemd[1]: Started cri-containerd-8b08c3198347dcf3b3aca9b30baeb809599844b8b7f4b663ca814978b051941f.scope - libcontainer container 8b08c3198347dcf3b3aca9b30baeb809599844b8b7f4b663ca814978b051941f. Apr 16 23:58:06.091731 containerd[1631]: time="2026-04-16T23:58:06.091668187Z" level=info msg="StartContainer for \"8b08c3198347dcf3b3aca9b30baeb809599844b8b7f4b663ca814978b051941f\" returns successfully" Apr 16 23:58:06.093155 systemd[1]: cri-containerd-8b08c3198347dcf3b3aca9b30baeb809599844b8b7f4b663ca814978b051941f.scope: Deactivated successfully. Apr 16 23:58:06.094859 containerd[1631]: time="2026-04-16T23:58:06.094826839Z" level=info msg="received container exit event container_id:\"8b08c3198347dcf3b3aca9b30baeb809599844b8b7f4b663ca814978b051941f\" id:\"8b08c3198347dcf3b3aca9b30baeb809599844b8b7f4b663ca814978b051941f\" pid:3391 exited_at:{seconds:1776383886 nanos:94066001}" Apr 16 23:58:06.114080 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8b08c3198347dcf3b3aca9b30baeb809599844b8b7f4b663ca814978b051941f-rootfs.mount: Deactivated successfully. 
Apr 16 23:58:06.937378 containerd[1631]: time="2026-04-16T23:58:06.937316026Z" level=info msg="CreateContainer within sandbox \"cad2566d1dabb86801d5361fa53d42c485141d0c1dedff3047a278f1fbf87ada\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Apr 16 23:58:06.948046 containerd[1631]: time="2026-04-16T23:58:06.947693300Z" level=info msg="Container b93ba752df6327e34e8a0be6fd3855f5849e65fc8b216cb52a179cb4d3a74622: CDI devices from CRI Config.CDIDevices: []" Apr 16 23:58:06.958022 containerd[1631]: time="2026-04-16T23:58:06.957104156Z" level=info msg="CreateContainer within sandbox \"cad2566d1dabb86801d5361fa53d42c485141d0c1dedff3047a278f1fbf87ada\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"b93ba752df6327e34e8a0be6fd3855f5849e65fc8b216cb52a179cb4d3a74622\"" Apr 16 23:58:06.958542 containerd[1631]: time="2026-04-16T23:58:06.958502713Z" level=info msg="StartContainer for \"b93ba752df6327e34e8a0be6fd3855f5849e65fc8b216cb52a179cb4d3a74622\"" Apr 16 23:58:06.959081 containerd[1631]: time="2026-04-16T23:58:06.959046191Z" level=info msg="connecting to shim b93ba752df6327e34e8a0be6fd3855f5849e65fc8b216cb52a179cb4d3a74622" address="unix:///run/containerd/s/c5ab294174418f4162721ff8c234249b739fd2a20ffce1ce746b058777c0d8cf" protocol=ttrpc version=3 Apr 16 23:58:06.985254 systemd[1]: Started cri-containerd-b93ba752df6327e34e8a0be6fd3855f5849e65fc8b216cb52a179cb4d3a74622.scope - libcontainer container b93ba752df6327e34e8a0be6fd3855f5849e65fc8b216cb52a179cb4d3a74622. Apr 16 23:58:07.009434 systemd[1]: cri-containerd-b93ba752df6327e34e8a0be6fd3855f5849e65fc8b216cb52a179cb4d3a74622.scope: Deactivated successfully. 
Apr 16 23:58:07.010751 containerd[1631]: time="2026-04-16T23:58:07.010715972Z" level=info msg="received container exit event container_id:\"b93ba752df6327e34e8a0be6fd3855f5849e65fc8b216cb52a179cb4d3a74622\" id:\"b93ba752df6327e34e8a0be6fd3855f5849e65fc8b216cb52a179cb4d3a74622\" pid:3433 exited_at:{seconds:1776383887 nanos:10251053}" Apr 16 23:58:07.012878 containerd[1631]: time="2026-04-16T23:58:07.012850647Z" level=info msg="StartContainer for \"b93ba752df6327e34e8a0be6fd3855f5849e65fc8b216cb52a179cb4d3a74622\" returns successfully" Apr 16 23:58:07.031939 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b93ba752df6327e34e8a0be6fd3855f5849e65fc8b216cb52a179cb4d3a74622-rootfs.mount: Deactivated successfully. Apr 16 23:58:07.958012 containerd[1631]: time="2026-04-16T23:58:07.956419227Z" level=info msg="CreateContainer within sandbox \"cad2566d1dabb86801d5361fa53d42c485141d0c1dedff3047a278f1fbf87ada\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Apr 16 23:58:07.977325 containerd[1631]: time="2026-04-16T23:58:07.977129047Z" level=info msg="Container 33e0483b61c401334bef304f8ddb6c8858837c8229104b93a9810224854d9710: CDI devices from CRI Config.CDIDevices: []" Apr 16 23:58:07.980093 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1573262694.mount: Deactivated successfully. 
Apr 16 23:58:07.988179 containerd[1631]: time="2026-04-16T23:58:07.988147341Z" level=info msg="CreateContainer within sandbox \"cad2566d1dabb86801d5361fa53d42c485141d0c1dedff3047a278f1fbf87ada\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"33e0483b61c401334bef304f8ddb6c8858837c8229104b93a9810224854d9710\"" Apr 16 23:58:07.989141 containerd[1631]: time="2026-04-16T23:58:07.988898349Z" level=info msg="StartContainer for \"33e0483b61c401334bef304f8ddb6c8858837c8229104b93a9810224854d9710\"" Apr 16 23:58:07.989827 containerd[1631]: time="2026-04-16T23:58:07.989779067Z" level=info msg="connecting to shim 33e0483b61c401334bef304f8ddb6c8858837c8229104b93a9810224854d9710" address="unix:///run/containerd/s/c5ab294174418f4162721ff8c234249b739fd2a20ffce1ce746b058777c0d8cf" protocol=ttrpc version=3 Apr 16 23:58:08.007243 systemd[1]: Started cri-containerd-33e0483b61c401334bef304f8ddb6c8858837c8229104b93a9810224854d9710.scope - libcontainer container 33e0483b61c401334bef304f8ddb6c8858837c8229104b93a9810224854d9710. Apr 16 23:58:08.041468 containerd[1631]: time="2026-04-16T23:58:08.041409427Z" level=info msg="StartContainer for \"33e0483b61c401334bef304f8ddb6c8858837c8229104b93a9810224854d9710\" returns successfully" Apr 16 23:58:08.152752 kubelet[2810]: I0416 23:58:08.152725 2810 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Apr 16 23:58:08.188234 systemd[1]: Created slice kubepods-burstable-pod5c2a7892_e2bb_4391_b6bf_701f93cb9c2d.slice - libcontainer container kubepods-burstable-pod5c2a7892_e2bb_4391_b6bf_701f93cb9c2d.slice. Apr 16 23:58:08.194944 systemd[1]: Created slice kubepods-burstable-podc0544bea_465f_4853_b98c_9af465b0d83f.slice - libcontainer container kubepods-burstable-podc0544bea_465f_4853_b98c_9af465b0d83f.slice. 
Apr 16 23:58:08.277161 kubelet[2810]: I0416 23:58:08.277061 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c0544bea-465f-4853-b98c-9af465b0d83f-config-volume\") pod \"coredns-674b8bbfcf-6x46z\" (UID: \"c0544bea-465f-4853-b98c-9af465b0d83f\") " pod="kube-system/coredns-674b8bbfcf-6x46z" Apr 16 23:58:08.277331 kubelet[2810]: I0416 23:58:08.277319 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4k2m\" (UniqueName: \"kubernetes.io/projected/c0544bea-465f-4853-b98c-9af465b0d83f-kube-api-access-q4k2m\") pod \"coredns-674b8bbfcf-6x46z\" (UID: \"c0544bea-465f-4853-b98c-9af465b0d83f\") " pod="kube-system/coredns-674b8bbfcf-6x46z" Apr 16 23:58:08.277728 kubelet[2810]: I0416 23:58:08.277413 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kn4sj\" (UniqueName: \"kubernetes.io/projected/5c2a7892-e2bb-4391-b6bf-701f93cb9c2d-kube-api-access-kn4sj\") pod \"coredns-674b8bbfcf-f69k5\" (UID: \"5c2a7892-e2bb-4391-b6bf-701f93cb9c2d\") " pod="kube-system/coredns-674b8bbfcf-f69k5" Apr 16 23:58:08.277728 kubelet[2810]: I0416 23:58:08.277427 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5c2a7892-e2bb-4391-b6bf-701f93cb9c2d-config-volume\") pod \"coredns-674b8bbfcf-f69k5\" (UID: \"5c2a7892-e2bb-4391-b6bf-701f93cb9c2d\") " pod="kube-system/coredns-674b8bbfcf-f69k5" Apr 16 23:58:08.493327 containerd[1631]: time="2026-04-16T23:58:08.492973236Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-f69k5,Uid:5c2a7892-e2bb-4391-b6bf-701f93cb9c2d,Namespace:kube-system,Attempt:0,}" Apr 16 23:58:08.498936 containerd[1631]: time="2026-04-16T23:58:08.498892923Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-6x46z,Uid:c0544bea-465f-4853-b98c-9af465b0d83f,Namespace:kube-system,Attempt:0,}" Apr 16 23:58:10.117571 systemd-networkd[1489]: cilium_host: Link UP Apr 16 23:58:10.118006 systemd-networkd[1489]: cilium_net: Link UP Apr 16 23:58:10.122842 systemd-networkd[1489]: cilium_host: Gained carrier Apr 16 23:58:10.123026 systemd-networkd[1489]: cilium_net: Gained carrier Apr 16 23:58:10.213904 systemd-networkd[1489]: cilium_vxlan: Link UP Apr 16 23:58:10.213915 systemd-networkd[1489]: cilium_vxlan: Gained carrier Apr 16 23:58:10.379379 kernel: NET: Registered PF_ALG protocol family Apr 16 23:58:10.678927 systemd-networkd[1489]: cilium_net: Gained IPv6LL Apr 16 23:58:10.879645 systemd-networkd[1489]: lxc_health: Link UP Apr 16 23:58:10.879859 systemd-networkd[1489]: lxc_health: Gained carrier Apr 16 23:58:10.933305 systemd-networkd[1489]: cilium_host: Gained IPv6LL Apr 16 23:58:11.037155 kernel: eth0: renamed from tmpabc3a Apr 16 23:58:11.041825 systemd-networkd[1489]: lxcd1358e996e12: Link UP Apr 16 23:58:11.042475 systemd-networkd[1489]: lxcc13e73ef9b1b: Link UP Apr 16 23:58:11.042835 systemd-networkd[1489]: lxcd1358e996e12: Gained carrier Apr 16 23:58:11.054186 kernel: eth0: renamed from tmp0012f Apr 16 23:58:11.057072 systemd-networkd[1489]: lxcc13e73ef9b1b: Gained carrier Apr 16 23:58:11.253274 systemd-networkd[1489]: cilium_vxlan: Gained IPv6LL Apr 16 23:58:11.583429 kubelet[2810]: I0416 23:58:11.583316 2810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-qczzs" podStartSLOduration=9.427664644 podStartE2EDuration="17.583303287s" podCreationTimestamp="2026-04-16 23:57:54 +0000 UTC" firstStartedPulling="2026-04-16 23:57:55.670956769 +0000 UTC m=+7.913331384" lastFinishedPulling="2026-04-16 23:58:03.826595412 +0000 UTC m=+16.068970027" observedRunningTime="2026-04-16 23:58:08.978024538 +0000 UTC m=+21.220399183" watchObservedRunningTime="2026-04-16 23:58:11.583303287 +0000 UTC 
m=+23.825677902" Apr 16 23:58:12.661510 systemd-networkd[1489]: lxcd1358e996e12: Gained IPv6LL Apr 16 23:58:12.917337 systemd-networkd[1489]: lxc_health: Gained IPv6LL Apr 16 23:58:13.110453 systemd-networkd[1489]: lxcc13e73ef9b1b: Gained IPv6LL Apr 16 23:58:13.446735 containerd[1631]: time="2026-04-16T23:58:13.446692572Z" level=info msg="connecting to shim abc3ac1ebd7477b5fa3a8f8e5ff110b209255a6d885db3f8925734cda6e4c868" address="unix:///run/containerd/s/84617ce1b11c3f8ac373826476619ec715c995a6c53ca0b1efee42e7c1e971e6" namespace=k8s.io protocol=ttrpc version=3 Apr 16 23:58:13.482233 systemd[1]: Started cri-containerd-abc3ac1ebd7477b5fa3a8f8e5ff110b209255a6d885db3f8925734cda6e4c868.scope - libcontainer container abc3ac1ebd7477b5fa3a8f8e5ff110b209255a6d885db3f8925734cda6e4c868. Apr 16 23:58:13.484728 containerd[1631]: time="2026-04-16T23:58:13.484694002Z" level=info msg="connecting to shim 0012f03a55477bbceafad245bbf4104d4ca633ba82ef81a1981ea73e6275d2a6" address="unix:///run/containerd/s/c5acc6d44ab5d8b7cdf3b7f405d15432feebb6b8314d6d9cfded203cd379a5e5" namespace=k8s.io protocol=ttrpc version=3 Apr 16 23:58:13.522209 systemd[1]: Started cri-containerd-0012f03a55477bbceafad245bbf4104d4ca633ba82ef81a1981ea73e6275d2a6.scope - libcontainer container 0012f03a55477bbceafad245bbf4104d4ca633ba82ef81a1981ea73e6275d2a6. 
Apr 16 23:58:13.558718 containerd[1631]: time="2026-04-16T23:58:13.558564805Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6x46z,Uid:c0544bea-465f-4853-b98c-9af465b0d83f,Namespace:kube-system,Attempt:0,} returns sandbox id \"abc3ac1ebd7477b5fa3a8f8e5ff110b209255a6d885db3f8925734cda6e4c868\"" Apr 16 23:58:13.564056 containerd[1631]: time="2026-04-16T23:58:13.563652116Z" level=info msg="CreateContainer within sandbox \"abc3ac1ebd7477b5fa3a8f8e5ff110b209255a6d885db3f8925734cda6e4c868\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 16 23:58:13.571239 containerd[1631]: time="2026-04-16T23:58:13.571221872Z" level=info msg="Container 488c3064c7e13135359861720347ac54605a8a52d5b317b5e68af24debdcce90: CDI devices from CRI Config.CDIDevices: []" Apr 16 23:58:13.577064 containerd[1631]: time="2026-04-16T23:58:13.577046051Z" level=info msg="CreateContainer within sandbox \"abc3ac1ebd7477b5fa3a8f8e5ff110b209255a6d885db3f8925734cda6e4c868\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"488c3064c7e13135359861720347ac54605a8a52d5b317b5e68af24debdcce90\"" Apr 16 23:58:13.579372 containerd[1631]: time="2026-04-16T23:58:13.579317317Z" level=info msg="StartContainer for \"488c3064c7e13135359861720347ac54605a8a52d5b317b5e68af24debdcce90\"" Apr 16 23:58:13.581162 containerd[1631]: time="2026-04-16T23:58:13.581099413Z" level=info msg="connecting to shim 488c3064c7e13135359861720347ac54605a8a52d5b317b5e68af24debdcce90" address="unix:///run/containerd/s/84617ce1b11c3f8ac373826476619ec715c995a6c53ca0b1efee42e7c1e971e6" protocol=ttrpc version=3 Apr 16 23:58:13.591854 containerd[1631]: time="2026-04-16T23:58:13.591832414Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-f69k5,Uid:5c2a7892-e2bb-4391-b6bf-701f93cb9c2d,Namespace:kube-system,Attempt:0,} returns sandbox id \"0012f03a55477bbceafad245bbf4104d4ca633ba82ef81a1981ea73e6275d2a6\"" Apr 16 23:58:13.596740 containerd[1631]: 
time="2026-04-16T23:58:13.596662545Z" level=info msg="CreateContainer within sandbox \"0012f03a55477bbceafad245bbf4104d4ca633ba82ef81a1981ea73e6275d2a6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 16 23:58:13.603306 systemd[1]: Started cri-containerd-488c3064c7e13135359861720347ac54605a8a52d5b317b5e68af24debdcce90.scope - libcontainer container 488c3064c7e13135359861720347ac54605a8a52d5b317b5e68af24debdcce90. Apr 16 23:58:13.604129 containerd[1631]: time="2026-04-16T23:58:13.604085531Z" level=info msg="Container e4fceda7e6b008664baa2572d6f921edf419358c91725bff8bb65caa6ca781fe: CDI devices from CRI Config.CDIDevices: []" Apr 16 23:58:13.623276 containerd[1631]: time="2026-04-16T23:58:13.623238716Z" level=info msg="CreateContainer within sandbox \"0012f03a55477bbceafad245bbf4104d4ca633ba82ef81a1981ea73e6275d2a6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e4fceda7e6b008664baa2572d6f921edf419358c91725bff8bb65caa6ca781fe\"" Apr 16 23:58:13.624325 containerd[1631]: time="2026-04-16T23:58:13.624286124Z" level=info msg="StartContainer for \"e4fceda7e6b008664baa2572d6f921edf419358c91725bff8bb65caa6ca781fe\"" Apr 16 23:58:13.624898 containerd[1631]: time="2026-04-16T23:58:13.624871082Z" level=info msg="connecting to shim e4fceda7e6b008664baa2572d6f921edf419358c91725bff8bb65caa6ca781fe" address="unix:///run/containerd/s/c5acc6d44ab5d8b7cdf3b7f405d15432feebb6b8314d6d9cfded203cd379a5e5" protocol=ttrpc version=3 Apr 16 23:58:13.640946 containerd[1631]: time="2026-04-16T23:58:13.640920843Z" level=info msg="StartContainer for \"488c3064c7e13135359861720347ac54605a8a52d5b317b5e68af24debdcce90\" returns successfully" Apr 16 23:58:13.645253 systemd[1]: Started cri-containerd-e4fceda7e6b008664baa2572d6f921edf419358c91725bff8bb65caa6ca781fe.scope - libcontainer container e4fceda7e6b008664baa2572d6f921edf419358c91725bff8bb65caa6ca781fe. 
Apr 16 23:58:13.679567 containerd[1631]: time="2026-04-16T23:58:13.679446432Z" level=info msg="StartContainer for \"e4fceda7e6b008664baa2572d6f921edf419358c91725bff8bb65caa6ca781fe\" returns successfully" Apr 16 23:58:13.988820 kubelet[2810]: I0416 23:58:13.988726 2810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-f69k5" podStartSLOduration=19.98870867 podStartE2EDuration="19.98870867s" podCreationTimestamp="2026-04-16 23:57:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 23:58:13.983892159 +0000 UTC m=+26.226266804" watchObservedRunningTime="2026-04-16 23:58:13.98870867 +0000 UTC m=+26.231083315" Apr 16 23:58:14.025426 kubelet[2810]: I0416 23:58:14.025058 2810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-6x46z" podStartSLOduration=20.025045424 podStartE2EDuration="20.025045424s" podCreationTimestamp="2026-04-16 23:57:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 23:58:14.003002614 +0000 UTC m=+26.245377269" watchObservedRunningTime="2026-04-16 23:58:14.025045424 +0000 UTC m=+26.267420029" Apr 16 23:58:58.622944 systemd[1]: Started sshd@9-77.42.22.14:22-4.175.71.9:36888.service - OpenSSH per-connection server daemon (4.175.71.9:36888). Apr 16 23:58:58.832526 sshd[4147]: Accepted publickey for core from 4.175.71.9 port 36888 ssh2: RSA SHA256:s5+cDtbQjwWFdMS63Oi2OpDWd90LKgkj0MOmWTIERLg Apr 16 23:58:58.834755 sshd-session[4147]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 23:58:58.842676 systemd-logind[1606]: New session 10 of user core. Apr 16 23:58:58.852298 systemd[1]: Started session-10.scope - Session 10 of User core. 
Apr 16 23:58:59.016963 sshd[4150]: Connection closed by 4.175.71.9 port 36888 Apr 16 23:58:59.017602 sshd-session[4147]: pam_unix(sshd:session): session closed for user core Apr 16 23:58:59.021742 systemd-logind[1606]: Session 10 logged out. Waiting for processes to exit. Apr 16 23:58:59.022477 systemd[1]: sshd@9-77.42.22.14:22-4.175.71.9:36888.service: Deactivated successfully. Apr 16 23:58:59.024679 systemd[1]: session-10.scope: Deactivated successfully. Apr 16 23:58:59.025937 systemd-logind[1606]: Removed session 10. Apr 16 23:59:04.061569 systemd[1]: Started sshd@10-77.42.22.14:22-4.175.71.9:36900.service - OpenSSH per-connection server daemon (4.175.71.9:36900). Apr 16 23:59:04.279236 sshd[4163]: Accepted publickey for core from 4.175.71.9 port 36900 ssh2: RSA SHA256:s5+cDtbQjwWFdMS63Oi2OpDWd90LKgkj0MOmWTIERLg Apr 16 23:59:04.282477 sshd-session[4163]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 23:59:04.290239 systemd-logind[1606]: New session 11 of user core. Apr 16 23:59:04.299357 systemd[1]: Started session-11.scope - Session 11 of User core. Apr 16 23:59:04.470795 sshd[4166]: Connection closed by 4.175.71.9 port 36900 Apr 16 23:59:04.472430 sshd-session[4163]: pam_unix(sshd:session): session closed for user core Apr 16 23:59:04.479087 systemd[1]: sshd@10-77.42.22.14:22-4.175.71.9:36900.service: Deactivated successfully. Apr 16 23:59:04.482354 systemd[1]: session-11.scope: Deactivated successfully. Apr 16 23:59:04.484594 systemd-logind[1606]: Session 11 logged out. Waiting for processes to exit. Apr 16 23:59:04.487693 systemd-logind[1606]: Removed session 11. Apr 16 23:59:09.526840 systemd[1]: Started sshd@11-77.42.22.14:22-4.175.71.9:54280.service - OpenSSH per-connection server daemon (4.175.71.9:54280). 
Apr 16 23:59:09.745195 sshd[4179]: Accepted publickey for core from 4.175.71.9 port 54280 ssh2: RSA SHA256:s5+cDtbQjwWFdMS63Oi2OpDWd90LKgkj0MOmWTIERLg Apr 16 23:59:09.746966 sshd-session[4179]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 23:59:09.755435 systemd-logind[1606]: New session 12 of user core. Apr 16 23:59:09.760353 systemd[1]: Started session-12.scope - Session 12 of User core. Apr 16 23:59:09.933263 sshd[4182]: Connection closed by 4.175.71.9 port 54280 Apr 16 23:59:09.934523 sshd-session[4179]: pam_unix(sshd:session): session closed for user core Apr 16 23:59:09.941906 systemd[1]: sshd@11-77.42.22.14:22-4.175.71.9:54280.service: Deactivated successfully. Apr 16 23:59:09.945820 systemd[1]: session-12.scope: Deactivated successfully. Apr 16 23:59:09.948073 systemd-logind[1606]: Session 12 logged out. Waiting for processes to exit. Apr 16 23:59:09.951471 systemd-logind[1606]: Removed session 12. Apr 16 23:59:14.977794 systemd[1]: Started sshd@12-77.42.22.14:22-4.175.71.9:54286.service - OpenSSH per-connection server daemon (4.175.71.9:54286). Apr 16 23:59:15.192210 sshd[4195]: Accepted publickey for core from 4.175.71.9 port 54286 ssh2: RSA SHA256:s5+cDtbQjwWFdMS63Oi2OpDWd90LKgkj0MOmWTIERLg Apr 16 23:59:15.194550 sshd-session[4195]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 23:59:15.204211 systemd-logind[1606]: New session 13 of user core. Apr 16 23:59:15.209356 systemd[1]: Started session-13.scope - Session 13 of User core. Apr 16 23:59:15.371074 sshd[4198]: Connection closed by 4.175.71.9 port 54286 Apr 16 23:59:15.372461 sshd-session[4195]: pam_unix(sshd:session): session closed for user core Apr 16 23:59:15.377966 systemd[1]: sshd@12-77.42.22.14:22-4.175.71.9:54286.service: Deactivated successfully. Apr 16 23:59:15.383033 systemd[1]: session-13.scope: Deactivated successfully. Apr 16 23:59:15.384896 systemd-logind[1606]: Session 13 logged out. 
Waiting for processes to exit. Apr 16 23:59:15.388729 systemd-logind[1606]: Removed session 13. Apr 16 23:59:15.417417 systemd[1]: Started sshd@13-77.42.22.14:22-4.175.71.9:45814.service - OpenSSH per-connection server daemon (4.175.71.9:45814). Apr 16 23:59:15.607386 sshd[4211]: Accepted publickey for core from 4.175.71.9 port 45814 ssh2: RSA SHA256:s5+cDtbQjwWFdMS63Oi2OpDWd90LKgkj0MOmWTIERLg Apr 16 23:59:15.610066 sshd-session[4211]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 23:59:15.619232 systemd-logind[1606]: New session 14 of user core. Apr 16 23:59:15.626359 systemd[1]: Started session-14.scope - Session 14 of User core. Apr 16 23:59:15.838705 sshd[4214]: Connection closed by 4.175.71.9 port 45814 Apr 16 23:59:15.837619 sshd-session[4211]: pam_unix(sshd:session): session closed for user core Apr 16 23:59:15.840988 systemd[1]: sshd@13-77.42.22.14:22-4.175.71.9:45814.service: Deactivated successfully. Apr 16 23:59:15.842758 systemd[1]: session-14.scope: Deactivated successfully. Apr 16 23:59:15.843877 systemd-logind[1606]: Session 14 logged out. Waiting for processes to exit. Apr 16 23:59:15.845715 systemd-logind[1606]: Removed session 14. Apr 16 23:59:15.876788 systemd[1]: Started sshd@14-77.42.22.14:22-4.175.71.9:45830.service - OpenSSH per-connection server daemon (4.175.71.9:45830). Apr 16 23:59:16.065192 sshd[4225]: Accepted publickey for core from 4.175.71.9 port 45830 ssh2: RSA SHA256:s5+cDtbQjwWFdMS63Oi2OpDWd90LKgkj0MOmWTIERLg Apr 16 23:59:16.067927 sshd-session[4225]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 23:59:16.077479 systemd-logind[1606]: New session 15 of user core. Apr 16 23:59:16.084395 systemd[1]: Started session-15.scope - Session 15 of User core. 
Apr 16 23:59:16.267389 sshd[4228]: Connection closed by 4.175.71.9 port 45830 Apr 16 23:59:16.268920 sshd-session[4225]: pam_unix(sshd:session): session closed for user core Apr 16 23:59:16.274677 systemd[1]: sshd@14-77.42.22.14:22-4.175.71.9:45830.service: Deactivated successfully. Apr 16 23:59:16.276630 systemd[1]: session-15.scope: Deactivated successfully. Apr 16 23:59:16.277946 systemd-logind[1606]: Session 15 logged out. Waiting for processes to exit. Apr 16 23:59:16.279983 systemd-logind[1606]: Removed session 15. Apr 16 23:59:21.313078 systemd[1]: Started sshd@15-77.42.22.14:22-4.175.71.9:45842.service - OpenSSH per-connection server daemon (4.175.71.9:45842). Apr 16 23:59:21.528581 sshd[4239]: Accepted publickey for core from 4.175.71.9 port 45842 ssh2: RSA SHA256:s5+cDtbQjwWFdMS63Oi2OpDWd90LKgkj0MOmWTIERLg Apr 16 23:59:21.531949 sshd-session[4239]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 23:59:21.540921 systemd-logind[1606]: New session 16 of user core. Apr 16 23:59:21.547348 systemd[1]: Started session-16.scope - Session 16 of User core. Apr 16 23:59:21.708722 sshd[4242]: Connection closed by 4.175.71.9 port 45842 Apr 16 23:59:21.710419 sshd-session[4239]: pam_unix(sshd:session): session closed for user core Apr 16 23:59:21.715485 systemd[1]: sshd@15-77.42.22.14:22-4.175.71.9:45842.service: Deactivated successfully. Apr 16 23:59:21.718699 systemd[1]: session-16.scope: Deactivated successfully. Apr 16 23:59:21.720069 systemd-logind[1606]: Session 16 logged out. Waiting for processes to exit. Apr 16 23:59:21.723002 systemd-logind[1606]: Removed session 16. Apr 16 23:59:21.751379 systemd[1]: Started sshd@16-77.42.22.14:22-4.175.71.9:45844.service - OpenSSH per-connection server daemon (4.175.71.9:45844). 
Apr 16 23:59:21.947215 sshd[4254]: Accepted publickey for core from 4.175.71.9 port 45844 ssh2: RSA SHA256:s5+cDtbQjwWFdMS63Oi2OpDWd90LKgkj0MOmWTIERLg Apr 16 23:59:21.949707 sshd-session[4254]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 23:59:21.958913 systemd-logind[1606]: New session 17 of user core. Apr 16 23:59:21.964516 systemd[1]: Started session-17.scope - Session 17 of User core. Apr 16 23:59:22.152002 sshd[4257]: Connection closed by 4.175.71.9 port 45844 Apr 16 23:59:22.152988 sshd-session[4254]: pam_unix(sshd:session): session closed for user core Apr 16 23:59:22.160294 systemd[1]: sshd@16-77.42.22.14:22-4.175.71.9:45844.service: Deactivated successfully. Apr 16 23:59:22.164089 systemd[1]: session-17.scope: Deactivated successfully. Apr 16 23:59:22.166061 systemd-logind[1606]: Session 17 logged out. Waiting for processes to exit. Apr 16 23:59:22.168279 systemd-logind[1606]: Removed session 17. Apr 16 23:59:22.195081 systemd[1]: Started sshd@17-77.42.22.14:22-4.175.71.9:45860.service - OpenSSH per-connection server daemon (4.175.71.9:45860). Apr 16 23:59:22.383373 sshd[4267]: Accepted publickey for core from 4.175.71.9 port 45860 ssh2: RSA SHA256:s5+cDtbQjwWFdMS63Oi2OpDWd90LKgkj0MOmWTIERLg Apr 16 23:59:22.385816 sshd-session[4267]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 23:59:22.394694 systemd-logind[1606]: New session 18 of user core. Apr 16 23:59:22.400243 systemd[1]: Started session-18.scope - Session 18 of User core. Apr 16 23:59:23.109555 sshd[4270]: Connection closed by 4.175.71.9 port 45860 Apr 16 23:59:23.110306 sshd-session[4267]: pam_unix(sshd:session): session closed for user core Apr 16 23:59:23.114406 systemd-logind[1606]: Session 18 logged out. Waiting for processes to exit. Apr 16 23:59:23.114716 systemd[1]: sshd@17-77.42.22.14:22-4.175.71.9:45860.service: Deactivated successfully. 
Apr 16 23:59:23.116474 systemd[1]: session-18.scope: Deactivated successfully. Apr 16 23:59:23.118169 systemd-logind[1606]: Removed session 18. Apr 16 23:59:23.154548 systemd[1]: Started sshd@18-77.42.22.14:22-4.175.71.9:45868.service - OpenSSH per-connection server daemon (4.175.71.9:45868). Apr 16 23:59:23.341309 sshd[4287]: Accepted publickey for core from 4.175.71.9 port 45868 ssh2: RSA SHA256:s5+cDtbQjwWFdMS63Oi2OpDWd90LKgkj0MOmWTIERLg Apr 16 23:59:23.344031 sshd-session[4287]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 23:59:23.353415 systemd-logind[1606]: New session 19 of user core. Apr 16 23:59:23.359357 systemd[1]: Started session-19.scope - Session 19 of User core. Apr 16 23:59:23.623016 sshd[4290]: Connection closed by 4.175.71.9 port 45868 Apr 16 23:59:23.624887 sshd-session[4287]: pam_unix(sshd:session): session closed for user core Apr 16 23:59:23.628163 systemd-logind[1606]: Session 19 logged out. Waiting for processes to exit. Apr 16 23:59:23.628719 systemd[1]: sshd@18-77.42.22.14:22-4.175.71.9:45868.service: Deactivated successfully. Apr 16 23:59:23.630577 systemd[1]: session-19.scope: Deactivated successfully. Apr 16 23:59:23.631973 systemd-logind[1606]: Removed session 19. Apr 16 23:59:23.662276 systemd[1]: Started sshd@19-77.42.22.14:22-4.175.71.9:45884.service - OpenSSH per-connection server daemon (4.175.71.9:45884). Apr 16 23:59:23.864184 sshd[4300]: Accepted publickey for core from 4.175.71.9 port 45884 ssh2: RSA SHA256:s5+cDtbQjwWFdMS63Oi2OpDWd90LKgkj0MOmWTIERLg Apr 16 23:59:23.866508 sshd-session[4300]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 23:59:23.875663 systemd-logind[1606]: New session 20 of user core. Apr 16 23:59:23.882376 systemd[1]: Started session-20.scope - Session 20 of User core. 
Apr 16 23:59:24.039050 sshd[4303]: Connection closed by 4.175.71.9 port 45884 Apr 16 23:59:24.039598 sshd-session[4300]: pam_unix(sshd:session): session closed for user core Apr 16 23:59:24.043954 systemd[1]: sshd@19-77.42.22.14:22-4.175.71.9:45884.service: Deactivated successfully. Apr 16 23:59:24.045948 systemd[1]: session-20.scope: Deactivated successfully. Apr 16 23:59:24.046979 systemd-logind[1606]: Session 20 logged out. Waiting for processes to exit. Apr 16 23:59:24.048978 systemd-logind[1606]: Removed session 20. Apr 16 23:59:29.086993 systemd[1]: Started sshd@20-77.42.22.14:22-4.175.71.9:50742.service - OpenSSH per-connection server daemon (4.175.71.9:50742). Apr 16 23:59:29.302009 sshd[4319]: Accepted publickey for core from 4.175.71.9 port 50742 ssh2: RSA SHA256:s5+cDtbQjwWFdMS63Oi2OpDWd90LKgkj0MOmWTIERLg Apr 16 23:59:29.304494 sshd-session[4319]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 23:59:29.311657 systemd-logind[1606]: New session 21 of user core. Apr 16 23:59:29.321323 systemd[1]: Started session-21.scope - Session 21 of User core. Apr 16 23:59:29.479076 sshd[4322]: Connection closed by 4.175.71.9 port 50742 Apr 16 23:59:29.481446 sshd-session[4319]: pam_unix(sshd:session): session closed for user core Apr 16 23:59:29.486260 systemd-logind[1606]: Session 21 logged out. Waiting for processes to exit. Apr 16 23:59:29.486796 systemd[1]: sshd@20-77.42.22.14:22-4.175.71.9:50742.service: Deactivated successfully. Apr 16 23:59:29.489593 systemd[1]: session-21.scope: Deactivated successfully. Apr 16 23:59:29.491655 systemd-logind[1606]: Removed session 21. Apr 16 23:59:34.526521 systemd[1]: Started sshd@21-77.42.22.14:22-4.175.71.9:50756.service - OpenSSH per-connection server daemon (4.175.71.9:50756). 
Apr 16 23:59:34.739183 sshd[4334]: Accepted publickey for core from 4.175.71.9 port 50756 ssh2: RSA SHA256:s5+cDtbQjwWFdMS63Oi2OpDWd90LKgkj0MOmWTIERLg Apr 16 23:59:34.741207 sshd-session[4334]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 23:59:34.749829 systemd-logind[1606]: New session 22 of user core. Apr 16 23:59:34.757347 systemd[1]: Started session-22.scope - Session 22 of User core. Apr 16 23:59:34.929089 sshd[4337]: Connection closed by 4.175.71.9 port 50756 Apr 16 23:59:34.930397 sshd-session[4334]: pam_unix(sshd:session): session closed for user core Apr 16 23:59:34.936063 systemd[1]: sshd@21-77.42.22.14:22-4.175.71.9:50756.service: Deactivated successfully. Apr 16 23:59:34.940292 systemd[1]: session-22.scope: Deactivated successfully. Apr 16 23:59:34.943473 systemd-logind[1606]: Session 22 logged out. Waiting for processes to exit. Apr 16 23:59:34.946530 systemd-logind[1606]: Removed session 22. Apr 16 23:59:34.970843 systemd[1]: Started sshd@22-77.42.22.14:22-4.175.71.9:50760.service - OpenSSH per-connection server daemon (4.175.71.9:50760). Apr 16 23:59:35.171984 sshd[4349]: Accepted publickey for core from 4.175.71.9 port 50760 ssh2: RSA SHA256:s5+cDtbQjwWFdMS63Oi2OpDWd90LKgkj0MOmWTIERLg Apr 16 23:59:35.174574 sshd-session[4349]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 23:59:35.183282 systemd-logind[1606]: New session 23 of user core. Apr 16 23:59:35.191340 systemd[1]: Started session-23.scope - Session 23 of User core. 
Apr 16 23:59:36.764470 containerd[1631]: time="2026-04-16T23:59:36.764195292Z" level=info msg="StopContainer for \"fb40f21b6fc0d21348a675b407f69b743f67bf28fd1a67be8d070ae4d1f1fdde\" with timeout 30 (s)" Apr 16 23:59:36.765032 containerd[1631]: time="2026-04-16T23:59:36.764948438Z" level=info msg="Stop container \"fb40f21b6fc0d21348a675b407f69b743f67bf28fd1a67be8d070ae4d1f1fdde\" with signal terminated" Apr 16 23:59:36.774711 containerd[1631]: time="2026-04-16T23:59:36.774677610Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 16 23:59:36.781445 containerd[1631]: time="2026-04-16T23:59:36.781357811Z" level=info msg="StopContainer for \"33e0483b61c401334bef304f8ddb6c8858837c8229104b93a9810224854d9710\" with timeout 2 (s)" Apr 16 23:59:36.781655 containerd[1631]: time="2026-04-16T23:59:36.781607909Z" level=info msg="Stop container \"33e0483b61c401334bef304f8ddb6c8858837c8229104b93a9810224854d9710\" with signal terminated" Apr 16 23:59:36.782498 systemd[1]: cri-containerd-fb40f21b6fc0d21348a675b407f69b743f67bf28fd1a67be8d070ae4d1f1fdde.scope: Deactivated successfully. Apr 16 23:59:36.785707 containerd[1631]: time="2026-04-16T23:59:36.785637595Z" level=info msg="received container exit event container_id:\"fb40f21b6fc0d21348a675b407f69b743f67bf28fd1a67be8d070ae4d1f1fdde\" id:\"fb40f21b6fc0d21348a675b407f69b743f67bf28fd1a67be8d070ae4d1f1fdde\" pid:3215 exited_at:{seconds:1776383976 nanos:785403767}" Apr 16 23:59:36.791229 systemd-networkd[1489]: lxc_health: Link DOWN Apr 16 23:59:36.791237 systemd-networkd[1489]: lxc_health: Lost carrier Apr 16 23:59:36.811386 systemd[1]: cri-containerd-33e0483b61c401334bef304f8ddb6c8858837c8229104b93a9810224854d9710.scope: Deactivated successfully. 
Apr 16 23:59:36.811658 systemd[1]: cri-containerd-33e0483b61c401334bef304f8ddb6c8858837c8229104b93a9810224854d9710.scope: Consumed 4.727s CPU time, 128M memory peak, 112K read from disk, 13.6M written to disk. Apr 16 23:59:36.813705 containerd[1631]: time="2026-04-16T23:59:36.813670420Z" level=info msg="received container exit event container_id:\"33e0483b61c401334bef304f8ddb6c8858837c8229104b93a9810224854d9710\" id:\"33e0483b61c401334bef304f8ddb6c8858837c8229104b93a9810224854d9710\" pid:3470 exited_at:{seconds:1776383976 nanos:813231952}" Apr 16 23:59:36.824759 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fb40f21b6fc0d21348a675b407f69b743f67bf28fd1a67be8d070ae4d1f1fdde-rootfs.mount: Deactivated successfully. Apr 16 23:59:36.836421 containerd[1631]: time="2026-04-16T23:59:36.836284916Z" level=info msg="StopContainer for \"fb40f21b6fc0d21348a675b407f69b743f67bf28fd1a67be8d070ae4d1f1fdde\" returns successfully" Apr 16 23:59:36.837145 containerd[1631]: time="2026-04-16T23:59:36.837079091Z" level=info msg="StopPodSandbox for \"71a040ad5a56a6f893437f67b5ec47090ddcf52a435e388c7a47962efa3753af\"" Apr 16 23:59:36.837199 containerd[1631]: time="2026-04-16T23:59:36.837160701Z" level=info msg="Container to stop \"fb40f21b6fc0d21348a675b407f69b743f67bf28fd1a67be8d070ae4d1f1fdde\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 16 23:59:36.843987 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-33e0483b61c401334bef304f8ddb6c8858837c8229104b93a9810224854d9710-rootfs.mount: Deactivated successfully. Apr 16 23:59:36.847511 systemd[1]: cri-containerd-71a040ad5a56a6f893437f67b5ec47090ddcf52a435e388c7a47962efa3753af.scope: Deactivated successfully. 
Apr 16 23:59:36.850188 containerd[1631]: time="2026-04-16T23:59:36.850078884Z" level=info msg="received sandbox exit event container_id:\"71a040ad5a56a6f893437f67b5ec47090ddcf52a435e388c7a47962efa3753af\" id:\"71a040ad5a56a6f893437f67b5ec47090ddcf52a435e388c7a47962efa3753af\" exit_status:137 exited_at:{seconds:1776383976 nanos:848951851}" monitor_name=podsandbox Apr 16 23:59:36.852995 containerd[1631]: time="2026-04-16T23:59:36.852924277Z" level=info msg="StopContainer for \"33e0483b61c401334bef304f8ddb6c8858837c8229104b93a9810224854d9710\" returns successfully" Apr 16 23:59:36.854318 containerd[1631]: time="2026-04-16T23:59:36.854304439Z" level=info msg="StopPodSandbox for \"cad2566d1dabb86801d5361fa53d42c485141d0c1dedff3047a278f1fbf87ada\"" Apr 16 23:59:36.854632 containerd[1631]: time="2026-04-16T23:59:36.854621387Z" level=info msg="Container to stop \"f5283733a8b7edef763d6fa51a9eaee7be0f6de0c67c4eca4e632eafa0c0e59a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 16 23:59:36.854681 containerd[1631]: time="2026-04-16T23:59:36.854669757Z" level=info msg="Container to stop \"b93ba752df6327e34e8a0be6fd3855f5849e65fc8b216cb52a179cb4d3a74622\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 16 23:59:36.854727 containerd[1631]: time="2026-04-16T23:59:36.854719937Z" level=info msg="Container to stop \"95812d222104ab8a15c703c00f997b2cebc7852d49c1582cb2a01f147070aa8e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 16 23:59:36.854880 containerd[1631]: time="2026-04-16T23:59:36.854780316Z" level=info msg="Container to stop \"8b08c3198347dcf3b3aca9b30baeb809599844b8b7f4b663ca814978b051941f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 16 23:59:36.854972 containerd[1631]: time="2026-04-16T23:59:36.854921365Z" level=info msg="Container to stop \"33e0483b61c401334bef304f8ddb6c8858837c8229104b93a9810224854d9710\" must be in running or unknown state, current 
state \"CONTAINER_EXITED\"" Apr 16 23:59:36.868772 systemd[1]: cri-containerd-cad2566d1dabb86801d5361fa53d42c485141d0c1dedff3047a278f1fbf87ada.scope: Deactivated successfully. Apr 16 23:59:36.870695 containerd[1631]: time="2026-04-16T23:59:36.870658142Z" level=info msg="received sandbox exit event container_id:\"cad2566d1dabb86801d5361fa53d42c485141d0c1dedff3047a278f1fbf87ada\" id:\"cad2566d1dabb86801d5361fa53d42c485141d0c1dedff3047a278f1fbf87ada\" exit_status:137 exited_at:{seconds:1776383976 nanos:870233785}" monitor_name=podsandbox Apr 16 23:59:36.885701 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-71a040ad5a56a6f893437f67b5ec47090ddcf52a435e388c7a47962efa3753af-rootfs.mount: Deactivated successfully. Apr 16 23:59:36.889308 containerd[1631]: time="2026-04-16T23:59:36.888793205Z" level=info msg="shim disconnected" id=71a040ad5a56a6f893437f67b5ec47090ddcf52a435e388c7a47962efa3753af namespace=k8s.io Apr 16 23:59:36.889308 containerd[1631]: time="2026-04-16T23:59:36.889307602Z" level=warning msg="cleaning up after shim disconnected" id=71a040ad5a56a6f893437f67b5ec47090ddcf52a435e388c7a47962efa3753af namespace=k8s.io Apr 16 23:59:36.889432 containerd[1631]: time="2026-04-16T23:59:36.889313952Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 16 23:59:36.897127 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cad2566d1dabb86801d5361fa53d42c485141d0c1dedff3047a278f1fbf87ada-rootfs.mount: Deactivated successfully. 
Apr 16 23:59:36.902202 containerd[1631]: time="2026-04-16T23:59:36.902179466Z" level=info msg="shim disconnected" id=cad2566d1dabb86801d5361fa53d42c485141d0c1dedff3047a278f1fbf87ada namespace=k8s.io Apr 16 23:59:36.902344 containerd[1631]: time="2026-04-16T23:59:36.902333625Z" level=warning msg="cleaning up after shim disconnected" id=cad2566d1dabb86801d5361fa53d42c485141d0c1dedff3047a278f1fbf87ada namespace=k8s.io Apr 16 23:59:36.902397 containerd[1631]: time="2026-04-16T23:59:36.902376345Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 16 23:59:36.909684 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-71a040ad5a56a6f893437f67b5ec47090ddcf52a435e388c7a47962efa3753af-shm.mount: Deactivated successfully. Apr 16 23:59:36.910439 containerd[1631]: time="2026-04-16T23:59:36.910013529Z" level=info msg="TearDown network for sandbox \"71a040ad5a56a6f893437f67b5ec47090ddcf52a435e388c7a47962efa3753af\" successfully" Apr 16 23:59:36.910439 containerd[1631]: time="2026-04-16T23:59:36.910027819Z" level=info msg="StopPodSandbox for \"71a040ad5a56a6f893437f67b5ec47090ddcf52a435e388c7a47962efa3753af\" returns successfully" Apr 16 23:59:36.910790 containerd[1631]: time="2026-04-16T23:59:36.910625566Z" level=info msg="received sandbox container exit event sandbox_id:\"71a040ad5a56a6f893437f67b5ec47090ddcf52a435e388c7a47962efa3753af\" exit_status:137 exited_at:{seconds:1776383976 nanos:848951851}" monitor_name=criService Apr 16 23:59:36.918868 containerd[1631]: time="2026-04-16T23:59:36.918844477Z" level=info msg="received sandbox container exit event sandbox_id:\"cad2566d1dabb86801d5361fa53d42c485141d0c1dedff3047a278f1fbf87ada\" exit_status:137 exited_at:{seconds:1776383976 nanos:870233785}" monitor_name=criService Apr 16 23:59:36.921017 containerd[1631]: time="2026-04-16T23:59:36.920757196Z" level=info msg="TearDown network for sandbox \"cad2566d1dabb86801d5361fa53d42c485141d0c1dedff3047a278f1fbf87ada\" successfully" Apr 16 23:59:36.921017 
containerd[1631]: time="2026-04-16T23:59:36.920790636Z" level=info msg="StopPodSandbox for \"cad2566d1dabb86801d5361fa53d42c485141d0c1dedff3047a278f1fbf87ada\" returns successfully" Apr 16 23:59:36.981730 kubelet[2810]: I0416 23:59:36.981695 2810 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4kt9\" (UniqueName: \"kubernetes.io/projected/94324002-ccbd-4870-8023-0b10f455c0e6-kube-api-access-w4kt9\") pod \"94324002-ccbd-4870-8023-0b10f455c0e6\" (UID: \"94324002-ccbd-4870-8023-0b10f455c0e6\") " Apr 16 23:59:36.982393 kubelet[2810]: I0416 23:59:36.981754 2810 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/94324002-ccbd-4870-8023-0b10f455c0e6-cilium-config-path\") pod \"94324002-ccbd-4870-8023-0b10f455c0e6\" (UID: \"94324002-ccbd-4870-8023-0b10f455c0e6\") " Apr 16 23:59:36.984009 kubelet[2810]: I0416 23:59:36.983980 2810 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94324002-ccbd-4870-8023-0b10f455c0e6-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "94324002-ccbd-4870-8023-0b10f455c0e6" (UID: "94324002-ccbd-4870-8023-0b10f455c0e6"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 16 23:59:36.984661 kubelet[2810]: I0416 23:59:36.984636 2810 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94324002-ccbd-4870-8023-0b10f455c0e6-kube-api-access-w4kt9" (OuterVolumeSpecName: "kube-api-access-w4kt9") pod "94324002-ccbd-4870-8023-0b10f455c0e6" (UID: "94324002-ccbd-4870-8023-0b10f455c0e6"). InnerVolumeSpecName "kube-api-access-w4kt9". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 16 23:59:37.082237 kubelet[2810]: I0416 23:59:37.082037 2810 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6665b3da-59b1-4822-b674-5cad17d5fa80-xtables-lock\") pod \"6665b3da-59b1-4822-b674-5cad17d5fa80\" (UID: \"6665b3da-59b1-4822-b674-5cad17d5fa80\") " Apr 16 23:59:37.082237 kubelet[2810]: I0416 23:59:37.082099 2810 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6665b3da-59b1-4822-b674-5cad17d5fa80-cni-path\") pod \"6665b3da-59b1-4822-b674-5cad17d5fa80\" (UID: \"6665b3da-59b1-4822-b674-5cad17d5fa80\") " Apr 16 23:59:37.082237 kubelet[2810]: I0416 23:59:37.082150 2810 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6665b3da-59b1-4822-b674-5cad17d5fa80-cilium-run\") pod \"6665b3da-59b1-4822-b674-5cad17d5fa80\" (UID: \"6665b3da-59b1-4822-b674-5cad17d5fa80\") " Apr 16 23:59:37.082237 kubelet[2810]: I0416 23:59:37.082183 2810 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6665b3da-59b1-4822-b674-5cad17d5fa80-bpf-maps\") pod \"6665b3da-59b1-4822-b674-5cad17d5fa80\" (UID: \"6665b3da-59b1-4822-b674-5cad17d5fa80\") " Apr 16 23:59:37.082237 kubelet[2810]: I0416 23:59:37.082215 2810 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6665b3da-59b1-4822-b674-5cad17d5fa80-host-proc-sys-kernel\") pod \"6665b3da-59b1-4822-b674-5cad17d5fa80\" (UID: \"6665b3da-59b1-4822-b674-5cad17d5fa80\") " Apr 16 23:59:37.082237 kubelet[2810]: I0416 23:59:37.082246 2810 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/6665b3da-59b1-4822-b674-5cad17d5fa80-host-proc-sys-net\") pod \"6665b3da-59b1-4822-b674-5cad17d5fa80\" (UID: \"6665b3da-59b1-4822-b674-5cad17d5fa80\") " Apr 16 23:59:37.082601 kubelet[2810]: I0416 23:59:37.082272 2810 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6665b3da-59b1-4822-b674-5cad17d5fa80-cilium-cgroup\") pod \"6665b3da-59b1-4822-b674-5cad17d5fa80\" (UID: \"6665b3da-59b1-4822-b674-5cad17d5fa80\") " Apr 16 23:59:37.082601 kubelet[2810]: I0416 23:59:37.082312 2810 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6665b3da-59b1-4822-b674-5cad17d5fa80-clustermesh-secrets\") pod \"6665b3da-59b1-4822-b674-5cad17d5fa80\" (UID: \"6665b3da-59b1-4822-b674-5cad17d5fa80\") " Apr 16 23:59:37.082601 kubelet[2810]: I0416 23:59:37.082345 2810 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6665b3da-59b1-4822-b674-5cad17d5fa80-cilium-config-path\") pod \"6665b3da-59b1-4822-b674-5cad17d5fa80\" (UID: \"6665b3da-59b1-4822-b674-5cad17d5fa80\") " Apr 16 23:59:37.082601 kubelet[2810]: I0416 23:59:37.082370 2810 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6665b3da-59b1-4822-b674-5cad17d5fa80-hostproc\") pod \"6665b3da-59b1-4822-b674-5cad17d5fa80\" (UID: \"6665b3da-59b1-4822-b674-5cad17d5fa80\") " Apr 16 23:59:37.082601 kubelet[2810]: I0416 23:59:37.082397 2810 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6665b3da-59b1-4822-b674-5cad17d5fa80-etc-cni-netd\") pod \"6665b3da-59b1-4822-b674-5cad17d5fa80\" (UID: \"6665b3da-59b1-4822-b674-5cad17d5fa80\") " Apr 16 23:59:37.082601 kubelet[2810]: I0416 23:59:37.082430 2810 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lxwqb\" (UniqueName: \"kubernetes.io/projected/6665b3da-59b1-4822-b674-5cad17d5fa80-kube-api-access-lxwqb\") pod \"6665b3da-59b1-4822-b674-5cad17d5fa80\" (UID: \"6665b3da-59b1-4822-b674-5cad17d5fa80\") " Apr 16 23:59:37.082840 kubelet[2810]: I0416 23:59:37.082461 2810 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6665b3da-59b1-4822-b674-5cad17d5fa80-lib-modules\") pod \"6665b3da-59b1-4822-b674-5cad17d5fa80\" (UID: \"6665b3da-59b1-4822-b674-5cad17d5fa80\") " Apr 16 23:59:37.082840 kubelet[2810]: I0416 23:59:37.082493 2810 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6665b3da-59b1-4822-b674-5cad17d5fa80-hubble-tls\") pod \"6665b3da-59b1-4822-b674-5cad17d5fa80\" (UID: \"6665b3da-59b1-4822-b674-5cad17d5fa80\") " Apr 16 23:59:37.082840 kubelet[2810]: I0416 23:59:37.082548 2810 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/94324002-ccbd-4870-8023-0b10f455c0e6-cilium-config-path\") on node \"ci-4459-2-4-n-3f94367fd3\" DevicePath \"\"" Apr 16 23:59:37.082840 kubelet[2810]: I0416 23:59:37.082569 2810 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-w4kt9\" (UniqueName: \"kubernetes.io/projected/94324002-ccbd-4870-8023-0b10f455c0e6-kube-api-access-w4kt9\") on node \"ci-4459-2-4-n-3f94367fd3\" DevicePath \"\"" Apr 16 23:59:37.086402 kubelet[2810]: I0416 23:59:37.084366 2810 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6665b3da-59b1-4822-b674-5cad17d5fa80-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "6665b3da-59b1-4822-b674-5cad17d5fa80" (UID: "6665b3da-59b1-4822-b674-5cad17d5fa80"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 16 23:59:37.086402 kubelet[2810]: I0416 23:59:37.085202 2810 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6665b3da-59b1-4822-b674-5cad17d5fa80-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "6665b3da-59b1-4822-b674-5cad17d5fa80" (UID: "6665b3da-59b1-4822-b674-5cad17d5fa80"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 16 23:59:37.086402 kubelet[2810]: I0416 23:59:37.085236 2810 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6665b3da-59b1-4822-b674-5cad17d5fa80-cni-path" (OuterVolumeSpecName: "cni-path") pod "6665b3da-59b1-4822-b674-5cad17d5fa80" (UID: "6665b3da-59b1-4822-b674-5cad17d5fa80"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 16 23:59:37.086402 kubelet[2810]: I0416 23:59:37.085263 2810 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6665b3da-59b1-4822-b674-5cad17d5fa80-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "6665b3da-59b1-4822-b674-5cad17d5fa80" (UID: "6665b3da-59b1-4822-b674-5cad17d5fa80"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 16 23:59:37.086402 kubelet[2810]: I0416 23:59:37.085477 2810 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6665b3da-59b1-4822-b674-5cad17d5fa80-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "6665b3da-59b1-4822-b674-5cad17d5fa80" (UID: "6665b3da-59b1-4822-b674-5cad17d5fa80"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 16 23:59:37.086695 kubelet[2810]: I0416 23:59:37.085499 2810 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6665b3da-59b1-4822-b674-5cad17d5fa80-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "6665b3da-59b1-4822-b674-5cad17d5fa80" (UID: "6665b3da-59b1-4822-b674-5cad17d5fa80"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 16 23:59:37.086695 kubelet[2810]: I0416 23:59:37.085529 2810 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6665b3da-59b1-4822-b674-5cad17d5fa80-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "6665b3da-59b1-4822-b674-5cad17d5fa80" (UID: "6665b3da-59b1-4822-b674-5cad17d5fa80"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 16 23:59:37.086695 kubelet[2810]: I0416 23:59:37.085552 2810 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6665b3da-59b1-4822-b674-5cad17d5fa80-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "6665b3da-59b1-4822-b674-5cad17d5fa80" (UID: "6665b3da-59b1-4822-b674-5cad17d5fa80"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 16 23:59:37.089577 kubelet[2810]: I0416 23:59:37.089526 2810 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6665b3da-59b1-4822-b674-5cad17d5fa80-hostproc" (OuterVolumeSpecName: "hostproc") pod "6665b3da-59b1-4822-b674-5cad17d5fa80" (UID: "6665b3da-59b1-4822-b674-5cad17d5fa80"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 16 23:59:37.089703 kubelet[2810]: I0416 23:59:37.089671 2810 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6665b3da-59b1-4822-b674-5cad17d5fa80-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "6665b3da-59b1-4822-b674-5cad17d5fa80" (UID: "6665b3da-59b1-4822-b674-5cad17d5fa80"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 16 23:59:37.089754 kubelet[2810]: I0416 23:59:37.089709 2810 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6665b3da-59b1-4822-b674-5cad17d5fa80-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "6665b3da-59b1-4822-b674-5cad17d5fa80" (UID: "6665b3da-59b1-4822-b674-5cad17d5fa80"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 16 23:59:37.093282 kubelet[2810]: I0416 23:59:37.093245 2810 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6665b3da-59b1-4822-b674-5cad17d5fa80-kube-api-access-lxwqb" (OuterVolumeSpecName: "kube-api-access-lxwqb") pod "6665b3da-59b1-4822-b674-5cad17d5fa80" (UID: "6665b3da-59b1-4822-b674-5cad17d5fa80"). InnerVolumeSpecName "kube-api-access-lxwqb". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 16 23:59:37.094261 kubelet[2810]: I0416 23:59:37.093333 2810 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6665b3da-59b1-4822-b674-5cad17d5fa80-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "6665b3da-59b1-4822-b674-5cad17d5fa80" (UID: "6665b3da-59b1-4822-b674-5cad17d5fa80"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 16 23:59:37.095399 kubelet[2810]: I0416 23:59:37.095337 2810 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6665b3da-59b1-4822-b674-5cad17d5fa80-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "6665b3da-59b1-4822-b674-5cad17d5fa80" (UID: "6665b3da-59b1-4822-b674-5cad17d5fa80"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 16 23:59:37.158754 kubelet[2810]: I0416 23:59:37.158699 2810 scope.go:117] "RemoveContainer" containerID="33e0483b61c401334bef304f8ddb6c8858837c8229104b93a9810224854d9710" Apr 16 23:59:37.168872 systemd[1]: Removed slice kubepods-burstable-pod6665b3da_59b1_4822_b674_5cad17d5fa80.slice - libcontainer container kubepods-burstable-pod6665b3da_59b1_4822_b674_5cad17d5fa80.slice. Apr 16 23:59:37.169313 systemd[1]: kubepods-burstable-pod6665b3da_59b1_4822_b674_5cad17d5fa80.slice: Consumed 4.828s CPU time, 128.5M memory peak, 112K read from disk, 14M written to disk. Apr 16 23:59:37.169636 containerd[1631]: time="2026-04-16T23:59:37.169200894Z" level=info msg="RemoveContainer for \"33e0483b61c401334bef304f8ddb6c8858837c8229104b93a9810224854d9710\"" Apr 16 23:59:37.172579 systemd[1]: Removed slice kubepods-besteffort-pod94324002_ccbd_4870_8023_0b10f455c0e6.slice - libcontainer container kubepods-besteffort-pod94324002_ccbd_4870_8023_0b10f455c0e6.slice. 
Apr 16 23:59:37.178764 containerd[1631]: time="2026-04-16T23:59:37.178688269Z" level=info msg="RemoveContainer for \"33e0483b61c401334bef304f8ddb6c8858837c8229104b93a9810224854d9710\" returns successfully" Apr 16 23:59:37.178906 kubelet[2810]: I0416 23:59:37.178873 2810 scope.go:117] "RemoveContainer" containerID="b93ba752df6327e34e8a0be6fd3855f5849e65fc8b216cb52a179cb4d3a74622" Apr 16 23:59:37.180307 containerd[1631]: time="2026-04-16T23:59:37.180183730Z" level=info msg="RemoveContainer for \"b93ba752df6327e34e8a0be6fd3855f5849e65fc8b216cb52a179cb4d3a74622\"" Apr 16 23:59:37.182990 kubelet[2810]: I0416 23:59:37.182961 2810 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6665b3da-59b1-4822-b674-5cad17d5fa80-lib-modules\") on node \"ci-4459-2-4-n-3f94367fd3\" DevicePath \"\"" Apr 16 23:59:37.182990 kubelet[2810]: I0416 23:59:37.182990 2810 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6665b3da-59b1-4822-b674-5cad17d5fa80-hubble-tls\") on node \"ci-4459-2-4-n-3f94367fd3\" DevicePath \"\"" Apr 16 23:59:37.183103 kubelet[2810]: I0416 23:59:37.183002 2810 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6665b3da-59b1-4822-b674-5cad17d5fa80-xtables-lock\") on node \"ci-4459-2-4-n-3f94367fd3\" DevicePath \"\"" Apr 16 23:59:37.183103 kubelet[2810]: I0416 23:59:37.183012 2810 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6665b3da-59b1-4822-b674-5cad17d5fa80-cni-path\") on node \"ci-4459-2-4-n-3f94367fd3\" DevicePath \"\"" Apr 16 23:59:37.183103 kubelet[2810]: I0416 23:59:37.183022 2810 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6665b3da-59b1-4822-b674-5cad17d5fa80-cilium-run\") on node \"ci-4459-2-4-n-3f94367fd3\" DevicePath \"\"" Apr 16 23:59:37.183103 kubelet[2810]: 
I0416 23:59:37.183030 2810 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6665b3da-59b1-4822-b674-5cad17d5fa80-bpf-maps\") on node \"ci-4459-2-4-n-3f94367fd3\" DevicePath \"\"" Apr 16 23:59:37.183103 kubelet[2810]: I0416 23:59:37.183039 2810 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6665b3da-59b1-4822-b674-5cad17d5fa80-host-proc-sys-kernel\") on node \"ci-4459-2-4-n-3f94367fd3\" DevicePath \"\"" Apr 16 23:59:37.183103 kubelet[2810]: I0416 23:59:37.183051 2810 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6665b3da-59b1-4822-b674-5cad17d5fa80-host-proc-sys-net\") on node \"ci-4459-2-4-n-3f94367fd3\" DevicePath \"\"" Apr 16 23:59:37.183103 kubelet[2810]: I0416 23:59:37.183062 2810 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6665b3da-59b1-4822-b674-5cad17d5fa80-cilium-cgroup\") on node \"ci-4459-2-4-n-3f94367fd3\" DevicePath \"\"" Apr 16 23:59:37.183103 kubelet[2810]: I0416 23:59:37.183073 2810 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6665b3da-59b1-4822-b674-5cad17d5fa80-clustermesh-secrets\") on node \"ci-4459-2-4-n-3f94367fd3\" DevicePath \"\"" Apr 16 23:59:37.183379 kubelet[2810]: I0416 23:59:37.183083 2810 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6665b3da-59b1-4822-b674-5cad17d5fa80-cilium-config-path\") on node \"ci-4459-2-4-n-3f94367fd3\" DevicePath \"\"" Apr 16 23:59:37.183379 kubelet[2810]: I0416 23:59:37.183092 2810 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6665b3da-59b1-4822-b674-5cad17d5fa80-hostproc\") on node \"ci-4459-2-4-n-3f94367fd3\" DevicePath \"\"" Apr 16 23:59:37.183379 
kubelet[2810]: I0416 23:59:37.183100 2810 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6665b3da-59b1-4822-b674-5cad17d5fa80-etc-cni-netd\") on node \"ci-4459-2-4-n-3f94367fd3\" DevicePath \"\"" Apr 16 23:59:37.183379 kubelet[2810]: I0416 23:59:37.183134 2810 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lxwqb\" (UniqueName: \"kubernetes.io/projected/6665b3da-59b1-4822-b674-5cad17d5fa80-kube-api-access-lxwqb\") on node \"ci-4459-2-4-n-3f94367fd3\" DevicePath \"\"" Apr 16 23:59:37.187839 containerd[1631]: time="2026-04-16T23:59:37.187770136Z" level=info msg="RemoveContainer for \"b93ba752df6327e34e8a0be6fd3855f5849e65fc8b216cb52a179cb4d3a74622\" returns successfully" Apr 16 23:59:37.188261 kubelet[2810]: I0416 23:59:37.188002 2810 scope.go:117] "RemoveContainer" containerID="8b08c3198347dcf3b3aca9b30baeb809599844b8b7f4b663ca814978b051941f" Apr 16 23:59:37.191622 containerd[1631]: time="2026-04-16T23:59:37.191267176Z" level=info msg="RemoveContainer for \"8b08c3198347dcf3b3aca9b30baeb809599844b8b7f4b663ca814978b051941f\"" Apr 16 23:59:37.196858 containerd[1631]: time="2026-04-16T23:59:37.196824154Z" level=info msg="RemoveContainer for \"8b08c3198347dcf3b3aca9b30baeb809599844b8b7f4b663ca814978b051941f\" returns successfully" Apr 16 23:59:37.197048 kubelet[2810]: I0416 23:59:37.197026 2810 scope.go:117] "RemoveContainer" containerID="95812d222104ab8a15c703c00f997b2cebc7852d49c1582cb2a01f147070aa8e" Apr 16 23:59:37.198503 containerd[1631]: time="2026-04-16T23:59:37.198490054Z" level=info msg="RemoveContainer for \"95812d222104ab8a15c703c00f997b2cebc7852d49c1582cb2a01f147070aa8e\"" Apr 16 23:59:37.201788 containerd[1631]: time="2026-04-16T23:59:37.201718195Z" level=info msg="RemoveContainer for \"95812d222104ab8a15c703c00f997b2cebc7852d49c1582cb2a01f147070aa8e\" returns successfully" Apr 16 23:59:37.201954 kubelet[2810]: I0416 23:59:37.201932 2810 scope.go:117] "RemoveContainer" 
containerID="f5283733a8b7edef763d6fa51a9eaee7be0f6de0c67c4eca4e632eafa0c0e59a" Apr 16 23:59:37.203190 containerd[1631]: time="2026-04-16T23:59:37.203178727Z" level=info msg="RemoveContainer for \"f5283733a8b7edef763d6fa51a9eaee7be0f6de0c67c4eca4e632eafa0c0e59a\"" Apr 16 23:59:37.206296 containerd[1631]: time="2026-04-16T23:59:37.206282859Z" level=info msg="RemoveContainer for \"f5283733a8b7edef763d6fa51a9eaee7be0f6de0c67c4eca4e632eafa0c0e59a\" returns successfully" Apr 16 23:59:37.206400 kubelet[2810]: I0416 23:59:37.206383 2810 scope.go:117] "RemoveContainer" containerID="33e0483b61c401334bef304f8ddb6c8858837c8229104b93a9810224854d9710" Apr 16 23:59:37.206564 containerd[1631]: time="2026-04-16T23:59:37.206540757Z" level=error msg="ContainerStatus for \"33e0483b61c401334bef304f8ddb6c8858837c8229104b93a9810224854d9710\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"33e0483b61c401334bef304f8ddb6c8858837c8229104b93a9810224854d9710\": not found" Apr 16 23:59:37.206666 kubelet[2810]: E0416 23:59:37.206640 2810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"33e0483b61c401334bef304f8ddb6c8858837c8229104b93a9810224854d9710\": not found" containerID="33e0483b61c401334bef304f8ddb6c8858837c8229104b93a9810224854d9710" Apr 16 23:59:37.206708 kubelet[2810]: I0416 23:59:37.206660 2810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"33e0483b61c401334bef304f8ddb6c8858837c8229104b93a9810224854d9710"} err="failed to get container status \"33e0483b61c401334bef304f8ddb6c8858837c8229104b93a9810224854d9710\": rpc error: code = NotFound desc = an error occurred when try to find container \"33e0483b61c401334bef304f8ddb6c8858837c8229104b93a9810224854d9710\": not found" Apr 16 23:59:37.206708 kubelet[2810]: I0416 23:59:37.206698 2810 scope.go:117] "RemoveContainer" 
containerID="b93ba752df6327e34e8a0be6fd3855f5849e65fc8b216cb52a179cb4d3a74622" Apr 16 23:59:37.206818 containerd[1631]: time="2026-04-16T23:59:37.206788976Z" level=error msg="ContainerStatus for \"b93ba752df6327e34e8a0be6fd3855f5849e65fc8b216cb52a179cb4d3a74622\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b93ba752df6327e34e8a0be6fd3855f5849e65fc8b216cb52a179cb4d3a74622\": not found" Apr 16 23:59:37.206939 kubelet[2810]: E0416 23:59:37.206892 2810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b93ba752df6327e34e8a0be6fd3855f5849e65fc8b216cb52a179cb4d3a74622\": not found" containerID="b93ba752df6327e34e8a0be6fd3855f5849e65fc8b216cb52a179cb4d3a74622" Apr 16 23:59:37.206964 kubelet[2810]: I0416 23:59:37.206941 2810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b93ba752df6327e34e8a0be6fd3855f5849e65fc8b216cb52a179cb4d3a74622"} err="failed to get container status \"b93ba752df6327e34e8a0be6fd3855f5849e65fc8b216cb52a179cb4d3a74622\": rpc error: code = NotFound desc = an error occurred when try to find container \"b93ba752df6327e34e8a0be6fd3855f5849e65fc8b216cb52a179cb4d3a74622\": not found" Apr 16 23:59:37.206964 kubelet[2810]: I0416 23:59:37.206951 2810 scope.go:117] "RemoveContainer" containerID="8b08c3198347dcf3b3aca9b30baeb809599844b8b7f4b663ca814978b051941f" Apr 16 23:59:37.207103 containerd[1631]: time="2026-04-16T23:59:37.207075614Z" level=error msg="ContainerStatus for \"8b08c3198347dcf3b3aca9b30baeb809599844b8b7f4b663ca814978b051941f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8b08c3198347dcf3b3aca9b30baeb809599844b8b7f4b663ca814978b051941f\": not found" Apr 16 23:59:37.207269 kubelet[2810]: E0416 23:59:37.207249 2810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc 
= an error occurred when try to find container \"8b08c3198347dcf3b3aca9b30baeb809599844b8b7f4b663ca814978b051941f\": not found" containerID="8b08c3198347dcf3b3aca9b30baeb809599844b8b7f4b663ca814978b051941f" Apr 16 23:59:37.207298 kubelet[2810]: I0416 23:59:37.207273 2810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8b08c3198347dcf3b3aca9b30baeb809599844b8b7f4b663ca814978b051941f"} err="failed to get container status \"8b08c3198347dcf3b3aca9b30baeb809599844b8b7f4b663ca814978b051941f\": rpc error: code = NotFound desc = an error occurred when try to find container \"8b08c3198347dcf3b3aca9b30baeb809599844b8b7f4b663ca814978b051941f\": not found" Apr 16 23:59:37.207298 kubelet[2810]: I0416 23:59:37.207281 2810 scope.go:117] "RemoveContainer" containerID="95812d222104ab8a15c703c00f997b2cebc7852d49c1582cb2a01f147070aa8e" Apr 16 23:59:37.207425 containerd[1631]: time="2026-04-16T23:59:37.207402282Z" level=error msg="ContainerStatus for \"95812d222104ab8a15c703c00f997b2cebc7852d49c1582cb2a01f147070aa8e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"95812d222104ab8a15c703c00f997b2cebc7852d49c1582cb2a01f147070aa8e\": not found" Apr 16 23:59:37.207527 kubelet[2810]: E0416 23:59:37.207508 2810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"95812d222104ab8a15c703c00f997b2cebc7852d49c1582cb2a01f147070aa8e\": not found" containerID="95812d222104ab8a15c703c00f997b2cebc7852d49c1582cb2a01f147070aa8e" Apr 16 23:59:37.207527 kubelet[2810]: I0416 23:59:37.207523 2810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"95812d222104ab8a15c703c00f997b2cebc7852d49c1582cb2a01f147070aa8e"} err="failed to get container status \"95812d222104ab8a15c703c00f997b2cebc7852d49c1582cb2a01f147070aa8e\": rpc error: code = NotFound desc = an error occurred 
when try to find container \"95812d222104ab8a15c703c00f997b2cebc7852d49c1582cb2a01f147070aa8e\": not found" Apr 16 23:59:37.207616 kubelet[2810]: I0416 23:59:37.207532 2810 scope.go:117] "RemoveContainer" containerID="f5283733a8b7edef763d6fa51a9eaee7be0f6de0c67c4eca4e632eafa0c0e59a" Apr 16 23:59:37.207702 containerd[1631]: time="2026-04-16T23:59:37.207679221Z" level=error msg="ContainerStatus for \"f5283733a8b7edef763d6fa51a9eaee7be0f6de0c67c4eca4e632eafa0c0e59a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f5283733a8b7edef763d6fa51a9eaee7be0f6de0c67c4eca4e632eafa0c0e59a\": not found" Apr 16 23:59:37.207781 kubelet[2810]: E0416 23:59:37.207764 2810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f5283733a8b7edef763d6fa51a9eaee7be0f6de0c67c4eca4e632eafa0c0e59a\": not found" containerID="f5283733a8b7edef763d6fa51a9eaee7be0f6de0c67c4eca4e632eafa0c0e59a" Apr 16 23:59:37.207832 kubelet[2810]: I0416 23:59:37.207779 2810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f5283733a8b7edef763d6fa51a9eaee7be0f6de0c67c4eca4e632eafa0c0e59a"} err="failed to get container status \"f5283733a8b7edef763d6fa51a9eaee7be0f6de0c67c4eca4e632eafa0c0e59a\": rpc error: code = NotFound desc = an error occurred when try to find container \"f5283733a8b7edef763d6fa51a9eaee7be0f6de0c67c4eca4e632eafa0c0e59a\": not found" Apr 16 23:59:37.207832 kubelet[2810]: I0416 23:59:37.207788 2810 scope.go:117] "RemoveContainer" containerID="fb40f21b6fc0d21348a675b407f69b743f67bf28fd1a67be8d070ae4d1f1fdde" Apr 16 23:59:37.208697 containerd[1631]: time="2026-04-16T23:59:37.208676665Z" level=info msg="RemoveContainer for \"fb40f21b6fc0d21348a675b407f69b743f67bf28fd1a67be8d070ae4d1f1fdde\"" Apr 16 23:59:37.211735 containerd[1631]: time="2026-04-16T23:59:37.211686887Z" level=info msg="RemoveContainer for 
\"fb40f21b6fc0d21348a675b407f69b743f67bf28fd1a67be8d070ae4d1f1fdde\" returns successfully" Apr 16 23:59:37.211847 kubelet[2810]: I0416 23:59:37.211827 2810 scope.go:117] "RemoveContainer" containerID="fb40f21b6fc0d21348a675b407f69b743f67bf28fd1a67be8d070ae4d1f1fdde" Apr 16 23:59:37.211978 containerd[1631]: time="2026-04-16T23:59:37.211959066Z" level=error msg="ContainerStatus for \"fb40f21b6fc0d21348a675b407f69b743f67bf28fd1a67be8d070ae4d1f1fdde\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fb40f21b6fc0d21348a675b407f69b743f67bf28fd1a67be8d070ae4d1f1fdde\": not found" Apr 16 23:59:37.212089 kubelet[2810]: E0416 23:59:37.212067 2810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fb40f21b6fc0d21348a675b407f69b743f67bf28fd1a67be8d070ae4d1f1fdde\": not found" containerID="fb40f21b6fc0d21348a675b407f69b743f67bf28fd1a67be8d070ae4d1f1fdde" Apr 16 23:59:37.212089 kubelet[2810]: I0416 23:59:37.212082 2810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fb40f21b6fc0d21348a675b407f69b743f67bf28fd1a67be8d070ae4d1f1fdde"} err="failed to get container status \"fb40f21b6fc0d21348a675b407f69b743f67bf28fd1a67be8d070ae4d1f1fdde\": rpc error: code = NotFound desc = an error occurred when try to find container \"fb40f21b6fc0d21348a675b407f69b743f67bf28fd1a67be8d070ae4d1f1fdde\": not found" Apr 16 23:59:37.827297 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-cad2566d1dabb86801d5361fa53d42c485141d0c1dedff3047a278f1fbf87ada-shm.mount: Deactivated successfully. Apr 16 23:59:37.827403 systemd[1]: var-lib-kubelet-pods-6665b3da\x2d59b1\x2d4822\x2db674\x2d5cad17d5fa80-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dlxwqb.mount: Deactivated successfully. 
Apr 16 23:59:37.827483 systemd[1]: var-lib-kubelet-pods-94324002\x2dccbd\x2d4870\x2d8023\x2d0b10f455c0e6-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dw4kt9.mount: Deactivated successfully. Apr 16 23:59:37.828076 systemd[1]: var-lib-kubelet-pods-6665b3da\x2d59b1\x2d4822\x2db674\x2d5cad17d5fa80-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Apr 16 23:59:37.828178 systemd[1]: var-lib-kubelet-pods-6665b3da\x2d59b1\x2d4822\x2db674\x2d5cad17d5fa80-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Apr 16 23:59:37.833463 kubelet[2810]: I0416 23:59:37.833404 2810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6665b3da-59b1-4822-b674-5cad17d5fa80" path="/var/lib/kubelet/pods/6665b3da-59b1-4822-b674-5cad17d5fa80/volumes" Apr 16 23:59:37.834241 kubelet[2810]: I0416 23:59:37.834221 2810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94324002-ccbd-4870-8023-0b10f455c0e6" path="/var/lib/kubelet/pods/94324002-ccbd-4870-8023-0b10f455c0e6/volumes" Apr 16 23:59:37.919659 kubelet[2810]: E0416 23:59:37.919609 2810 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 23:59:38.743530 sshd[4352]: Connection closed by 4.175.71.9 port 50760 Apr 16 23:59:38.744577 sshd-session[4349]: pam_unix(sshd:session): session closed for user core Apr 16 23:59:38.752555 systemd-logind[1606]: Session 23 logged out. Waiting for processes to exit. Apr 16 23:59:38.753675 systemd[1]: sshd@22-77.42.22.14:22-4.175.71.9:50760.service: Deactivated successfully. Apr 16 23:59:38.757775 systemd[1]: session-23.scope: Deactivated successfully. Apr 16 23:59:38.761637 systemd-logind[1606]: Removed session 23. 
Apr 16 23:59:38.790034 systemd[1]: Started sshd@23-77.42.22.14:22-4.175.71.9:55408.service - OpenSSH per-connection server daemon (4.175.71.9:55408). Apr 16 23:59:38.991196 sshd[4495]: Accepted publickey for core from 4.175.71.9 port 55408 ssh2: RSA SHA256:s5+cDtbQjwWFdMS63Oi2OpDWd90LKgkj0MOmWTIERLg Apr 16 23:59:38.993272 sshd-session[4495]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 23:59:39.002525 systemd-logind[1606]: New session 24 of user core. Apr 16 23:59:39.011355 systemd[1]: Started session-24.scope - Session 24 of User core. Apr 16 23:59:39.443079 systemd[1]: Created slice kubepods-burstable-pod303deb83_4d13_4daf_a912_e560cc7382d6.slice - libcontainer container kubepods-burstable-pod303deb83_4d13_4daf_a912_e560cc7382d6.slice. Apr 16 23:59:39.445274 sshd[4498]: Connection closed by 4.175.71.9 port 55408 Apr 16 23:59:39.449259 sshd-session[4495]: pam_unix(sshd:session): session closed for user core Apr 16 23:59:39.454209 systemd-logind[1606]: Session 24 logged out. Waiting for processes to exit. Apr 16 23:59:39.454836 systemd[1]: sshd@23-77.42.22.14:22-4.175.71.9:55408.service: Deactivated successfully. Apr 16 23:59:39.456790 systemd[1]: session-24.scope: Deactivated successfully. Apr 16 23:59:39.458669 systemd-logind[1606]: Removed session 24. Apr 16 23:59:39.489515 systemd[1]: Started sshd@24-77.42.22.14:22-4.175.71.9:55414.service - OpenSSH per-connection server daemon (4.175.71.9:55414). 
Apr 16 23:59:39.499571 kubelet[2810]: I0416 23:59:39.499538 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/303deb83-4d13-4daf-a912-e560cc7382d6-hostproc\") pod \"cilium-95mdd\" (UID: \"303deb83-4d13-4daf-a912-e560cc7382d6\") " pod="kube-system/cilium-95mdd"
Apr 16 23:59:39.499571 kubelet[2810]: I0416 23:59:39.499568 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/303deb83-4d13-4daf-a912-e560cc7382d6-cilium-ipsec-secrets\") pod \"cilium-95mdd\" (UID: \"303deb83-4d13-4daf-a912-e560cc7382d6\") " pod="kube-system/cilium-95mdd"
Apr 16 23:59:39.499899 kubelet[2810]: I0416 23:59:39.499583 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/303deb83-4d13-4daf-a912-e560cc7382d6-etc-cni-netd\") pod \"cilium-95mdd\" (UID: \"303deb83-4d13-4daf-a912-e560cc7382d6\") " pod="kube-system/cilium-95mdd"
Apr 16 23:59:39.499899 kubelet[2810]: I0416 23:59:39.499593 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/303deb83-4d13-4daf-a912-e560cc7382d6-bpf-maps\") pod \"cilium-95mdd\" (UID: \"303deb83-4d13-4daf-a912-e560cc7382d6\") " pod="kube-system/cilium-95mdd"
Apr 16 23:59:39.499899 kubelet[2810]: I0416 23:59:39.499609 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p49mn\" (UniqueName: \"kubernetes.io/projected/303deb83-4d13-4daf-a912-e560cc7382d6-kube-api-access-p49mn\") pod \"cilium-95mdd\" (UID: \"303deb83-4d13-4daf-a912-e560cc7382d6\") " pod="kube-system/cilium-95mdd"
Apr 16 23:59:39.499899 kubelet[2810]: I0416 23:59:39.499628 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/303deb83-4d13-4daf-a912-e560cc7382d6-cilium-cgroup\") pod \"cilium-95mdd\" (UID: \"303deb83-4d13-4daf-a912-e560cc7382d6\") " pod="kube-system/cilium-95mdd"
Apr 16 23:59:39.499899 kubelet[2810]: I0416 23:59:39.499640 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/303deb83-4d13-4daf-a912-e560cc7382d6-cilium-run\") pod \"cilium-95mdd\" (UID: \"303deb83-4d13-4daf-a912-e560cc7382d6\") " pod="kube-system/cilium-95mdd"
Apr 16 23:59:39.499899 kubelet[2810]: I0416 23:59:39.499650 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/303deb83-4d13-4daf-a912-e560cc7382d6-xtables-lock\") pod \"cilium-95mdd\" (UID: \"303deb83-4d13-4daf-a912-e560cc7382d6\") " pod="kube-system/cilium-95mdd"
Apr 16 23:59:39.500009 kubelet[2810]: I0416 23:59:39.499660 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/303deb83-4d13-4daf-a912-e560cc7382d6-clustermesh-secrets\") pod \"cilium-95mdd\" (UID: \"303deb83-4d13-4daf-a912-e560cc7382d6\") " pod="kube-system/cilium-95mdd"
Apr 16 23:59:39.500009 kubelet[2810]: I0416 23:59:39.499671 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/303deb83-4d13-4daf-a912-e560cc7382d6-host-proc-sys-kernel\") pod \"cilium-95mdd\" (UID: \"303deb83-4d13-4daf-a912-e560cc7382d6\") " pod="kube-system/cilium-95mdd"
Apr 16 23:59:39.500009 kubelet[2810]: I0416 23:59:39.499682 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/303deb83-4d13-4daf-a912-e560cc7382d6-hubble-tls\") pod \"cilium-95mdd\" (UID: \"303deb83-4d13-4daf-a912-e560cc7382d6\") " pod="kube-system/cilium-95mdd"
Apr 16 23:59:39.500009 kubelet[2810]: I0416 23:59:39.499693 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/303deb83-4d13-4daf-a912-e560cc7382d6-cilium-config-path\") pod \"cilium-95mdd\" (UID: \"303deb83-4d13-4daf-a912-e560cc7382d6\") " pod="kube-system/cilium-95mdd"
Apr 16 23:59:39.500009 kubelet[2810]: I0416 23:59:39.499703 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/303deb83-4d13-4daf-a912-e560cc7382d6-cni-path\") pod \"cilium-95mdd\" (UID: \"303deb83-4d13-4daf-a912-e560cc7382d6\") " pod="kube-system/cilium-95mdd"
Apr 16 23:59:39.500009 kubelet[2810]: I0416 23:59:39.499713 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/303deb83-4d13-4daf-a912-e560cc7382d6-lib-modules\") pod \"cilium-95mdd\" (UID: \"303deb83-4d13-4daf-a912-e560cc7382d6\") " pod="kube-system/cilium-95mdd"
Apr 16 23:59:39.500125 kubelet[2810]: I0416 23:59:39.499722 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/303deb83-4d13-4daf-a912-e560cc7382d6-host-proc-sys-net\") pod \"cilium-95mdd\" (UID: \"303deb83-4d13-4daf-a912-e560cc7382d6\") " pod="kube-system/cilium-95mdd"
Apr 16 23:59:39.679464 sshd[4509]: Accepted publickey for core from 4.175.71.9 port 55414 ssh2: RSA SHA256:s5+cDtbQjwWFdMS63Oi2OpDWd90LKgkj0MOmWTIERLg
Apr 16 23:59:39.682146 sshd-session[4509]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 23:59:39.690359 systemd-logind[1606]: New session 25 of user core.
Apr 16 23:59:39.697350 systemd[1]: Started session-25.scope - Session 25 of User core.
Apr 16 23:59:39.712319 kubelet[2810]: I0416 23:59:39.712248 2810 setters.go:618] "Node became not ready" node="ci-4459-2-4-n-3f94367fd3" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-16T23:59:39Z","lastTransitionTime":"2026-04-16T23:59:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Apr 16 23:59:39.748780 containerd[1631]: time="2026-04-16T23:59:39.748707634Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-95mdd,Uid:303deb83-4d13-4daf-a912-e560cc7382d6,Namespace:kube-system,Attempt:0,}"
Apr 16 23:59:39.770319 containerd[1631]: time="2026-04-16T23:59:39.770205393Z" level=info msg="connecting to shim 22b06b13be1418e25c3736de0b3ade4779a0c86a4e22194f896a61f62d8f018a" address="unix:///run/containerd/s/b48ecf05616dfb80a2e01bc7eb5de67e363c6fd89ab59dd7b4aedc87e0c50bdb" namespace=k8s.io protocol=ttrpc version=3
Apr 16 23:59:39.775643 sshd[4519]: Connection closed by 4.175.71.9 port 55414
Apr 16 23:59:39.775710 sshd-session[4509]: pam_unix(sshd:session): session closed for user core
Apr 16 23:59:39.782651 systemd[1]: sshd@24-77.42.22.14:22-4.175.71.9:55414.service: Deactivated successfully.
Apr 16 23:59:39.785403 systemd[1]: session-25.scope: Deactivated successfully.
Apr 16 23:59:39.789407 systemd-logind[1606]: Session 25 logged out. Waiting for processes to exit.
Apr 16 23:59:39.791267 systemd-logind[1606]: Removed session 25.
Apr 16 23:59:39.797236 systemd[1]: Started cri-containerd-22b06b13be1418e25c3736de0b3ade4779a0c86a4e22194f896a61f62d8f018a.scope - libcontainer container 22b06b13be1418e25c3736de0b3ade4779a0c86a4e22194f896a61f62d8f018a.
Apr 16 23:59:39.810130 systemd[1]: Started sshd@25-77.42.22.14:22-4.175.71.9:55420.service - OpenSSH per-connection server daemon (4.175.71.9:55420).
Apr 16 23:59:39.827223 containerd[1631]: time="2026-04-16T23:59:39.827191615Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-95mdd,Uid:303deb83-4d13-4daf-a912-e560cc7382d6,Namespace:kube-system,Attempt:0,} returns sandbox id \"22b06b13be1418e25c3736de0b3ade4779a0c86a4e22194f896a61f62d8f018a\""
Apr 16 23:59:39.832389 containerd[1631]: time="2026-04-16T23:59:39.832315596Z" level=info msg="CreateContainer within sandbox \"22b06b13be1418e25c3736de0b3ade4779a0c86a4e22194f896a61f62d8f018a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Apr 16 23:59:39.838566 containerd[1631]: time="2026-04-16T23:59:39.838524331Z" level=info msg="Container b98a16ed7d5d1df37b7e93b67dd470a3a79c342ddf06476c3b493583642fa64a: CDI devices from CRI Config.CDIDevices: []"
Apr 16 23:59:39.841924 containerd[1631]: time="2026-04-16T23:59:39.841891392Z" level=info msg="CreateContainer within sandbox \"22b06b13be1418e25c3736de0b3ade4779a0c86a4e22194f896a61f62d8f018a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"b98a16ed7d5d1df37b7e93b67dd470a3a79c342ddf06476c3b493583642fa64a\""
Apr 16 23:59:39.842450 containerd[1631]: time="2026-04-16T23:59:39.842346470Z" level=info msg="StartContainer for \"b98a16ed7d5d1df37b7e93b67dd470a3a79c342ddf06476c3b493583642fa64a\""
Apr 16 23:59:39.843085 containerd[1631]: time="2026-04-16T23:59:39.843069816Z" level=info msg="connecting to shim b98a16ed7d5d1df37b7e93b67dd470a3a79c342ddf06476c3b493583642fa64a" address="unix:///run/containerd/s/b48ecf05616dfb80a2e01bc7eb5de67e363c6fd89ab59dd7b4aedc87e0c50bdb" protocol=ttrpc version=3
Apr 16 23:59:39.861255 systemd[1]: Started cri-containerd-b98a16ed7d5d1df37b7e93b67dd470a3a79c342ddf06476c3b493583642fa64a.scope - libcontainer container b98a16ed7d5d1df37b7e93b67dd470a3a79c342ddf06476c3b493583642fa64a.
Apr 16 23:59:39.887560 containerd[1631]: time="2026-04-16T23:59:39.887513777Z" level=info msg="StartContainer for \"b98a16ed7d5d1df37b7e93b67dd470a3a79c342ddf06476c3b493583642fa64a\" returns successfully"
Apr 16 23:59:39.895793 systemd[1]: cri-containerd-b98a16ed7d5d1df37b7e93b67dd470a3a79c342ddf06476c3b493583642fa64a.scope: Deactivated successfully.
Apr 16 23:59:39.898390 containerd[1631]: time="2026-04-16T23:59:39.898264507Z" level=info msg="received container exit event container_id:\"b98a16ed7d5d1df37b7e93b67dd470a3a79c342ddf06476c3b493583642fa64a\" id:\"b98a16ed7d5d1df37b7e93b67dd470a3a79c342ddf06476c3b493583642fa64a\" pid:4589 exited_at:{seconds:1776383979 nanos:897973329}"
Apr 16 23:59:39.995006 sshd[4566]: Accepted publickey for core from 4.175.71.9 port 55420 ssh2: RSA SHA256:s5+cDtbQjwWFdMS63Oi2OpDWd90LKgkj0MOmWTIERLg
Apr 16 23:59:39.997313 sshd-session[4566]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 23:59:40.003850 systemd-logind[1606]: New session 26 of user core.
Apr 16 23:59:40.006222 systemd[1]: Started session-26.scope - Session 26 of User core.
Apr 16 23:59:40.183268 containerd[1631]: time="2026-04-16T23:59:40.183007873Z" level=info msg="CreateContainer within sandbox \"22b06b13be1418e25c3736de0b3ade4779a0c86a4e22194f896a61f62d8f018a\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Apr 16 23:59:40.194754 containerd[1631]: time="2026-04-16T23:59:40.194707218Z" level=info msg="Container 44ac1af96bf82f05347e6692b3e2cd3db57d081f1dc47bfcdd4ac1d35db931e1: CDI devices from CRI Config.CDIDevices: []"
Apr 16 23:59:40.201023 containerd[1631]: time="2026-04-16T23:59:40.200972424Z" level=info msg="CreateContainer within sandbox \"22b06b13be1418e25c3736de0b3ade4779a0c86a4e22194f896a61f62d8f018a\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"44ac1af96bf82f05347e6692b3e2cd3db57d081f1dc47bfcdd4ac1d35db931e1\""
Apr 16 23:59:40.201564 containerd[1631]: time="2026-04-16T23:59:40.201520191Z" level=info msg="StartContainer for \"44ac1af96bf82f05347e6692b3e2cd3db57d081f1dc47bfcdd4ac1d35db931e1\""
Apr 16 23:59:40.202713 containerd[1631]: time="2026-04-16T23:59:40.202636615Z" level=info msg="connecting to shim 44ac1af96bf82f05347e6692b3e2cd3db57d081f1dc47bfcdd4ac1d35db931e1" address="unix:///run/containerd/s/b48ecf05616dfb80a2e01bc7eb5de67e363c6fd89ab59dd7b4aedc87e0c50bdb" protocol=ttrpc version=3
Apr 16 23:59:40.231411 systemd[1]: Started cri-containerd-44ac1af96bf82f05347e6692b3e2cd3db57d081f1dc47bfcdd4ac1d35db931e1.scope - libcontainer container 44ac1af96bf82f05347e6692b3e2cd3db57d081f1dc47bfcdd4ac1d35db931e1.
Apr 16 23:59:40.271768 containerd[1631]: time="2026-04-16T23:59:40.271670266Z" level=info msg="StartContainer for \"44ac1af96bf82f05347e6692b3e2cd3db57d081f1dc47bfcdd4ac1d35db931e1\" returns successfully"
Apr 16 23:59:40.277587 systemd[1]: cri-containerd-44ac1af96bf82f05347e6692b3e2cd3db57d081f1dc47bfcdd4ac1d35db931e1.scope: Deactivated successfully.
Apr 16 23:59:40.278083 containerd[1631]: time="2026-04-16T23:59:40.278066031Z" level=info msg="received container exit event container_id:\"44ac1af96bf82f05347e6692b3e2cd3db57d081f1dc47bfcdd4ac1d35db931e1\" id:\"44ac1af96bf82f05347e6692b3e2cd3db57d081f1dc47bfcdd4ac1d35db931e1\" pid:4644 exited_at:{seconds:1776383980 nanos:277664003}"
Apr 16 23:59:41.190673 containerd[1631]: time="2026-04-16T23:59:41.190611638Z" level=info msg="CreateContainer within sandbox \"22b06b13be1418e25c3736de0b3ade4779a0c86a4e22194f896a61f62d8f018a\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Apr 16 23:59:41.211450 containerd[1631]: time="2026-04-16T23:59:41.211391325Z" level=info msg="Container 7e759473c0846d666a14eb2db9fd039061b2b8664bc72c01afc0d43e136ad5ae: CDI devices from CRI Config.CDIDevices: []"
Apr 16 23:59:41.225329 containerd[1631]: time="2026-04-16T23:59:41.225265091Z" level=info msg="CreateContainer within sandbox \"22b06b13be1418e25c3736de0b3ade4779a0c86a4e22194f896a61f62d8f018a\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"7e759473c0846d666a14eb2db9fd039061b2b8664bc72c01afc0d43e136ad5ae\""
Apr 16 23:59:41.226530 containerd[1631]: time="2026-04-16T23:59:41.226451934Z" level=info msg="StartContainer for \"7e759473c0846d666a14eb2db9fd039061b2b8664bc72c01afc0d43e136ad5ae\""
Apr 16 23:59:41.232817 containerd[1631]: time="2026-04-16T23:59:41.232736950Z" level=info msg="connecting to shim 7e759473c0846d666a14eb2db9fd039061b2b8664bc72c01afc0d43e136ad5ae" address="unix:///run/containerd/s/b48ecf05616dfb80a2e01bc7eb5de67e363c6fd89ab59dd7b4aedc87e0c50bdb" protocol=ttrpc version=3
Apr 16 23:59:41.260313 systemd[1]: Started cri-containerd-7e759473c0846d666a14eb2db9fd039061b2b8664bc72c01afc0d43e136ad5ae.scope - libcontainer container 7e759473c0846d666a14eb2db9fd039061b2b8664bc72c01afc0d43e136ad5ae.
Apr 16 23:59:41.327437 containerd[1631]: time="2026-04-16T23:59:41.327314390Z" level=info msg="StartContainer for \"7e759473c0846d666a14eb2db9fd039061b2b8664bc72c01afc0d43e136ad5ae\" returns successfully"
Apr 16 23:59:41.329707 systemd[1]: cri-containerd-7e759473c0846d666a14eb2db9fd039061b2b8664bc72c01afc0d43e136ad5ae.scope: Deactivated successfully.
Apr 16 23:59:41.331622 containerd[1631]: time="2026-04-16T23:59:41.331589377Z" level=info msg="received container exit event container_id:\"7e759473c0846d666a14eb2db9fd039061b2b8664bc72c01afc0d43e136ad5ae\" id:\"7e759473c0846d666a14eb2db9fd039061b2b8664bc72c01afc0d43e136ad5ae\" pid:4688 exited_at:{seconds:1776383981 nanos:331392658}"
Apr 16 23:59:41.351404 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7e759473c0846d666a14eb2db9fd039061b2b8664bc72c01afc0d43e136ad5ae-rootfs.mount: Deactivated successfully.
Apr 16 23:59:42.196654 containerd[1631]: time="2026-04-16T23:59:42.196543070Z" level=info msg="CreateContainer within sandbox \"22b06b13be1418e25c3736de0b3ade4779a0c86a4e22194f896a61f62d8f018a\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Apr 16 23:59:42.216370 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount256903398.mount: Deactivated successfully.
Apr 16 23:59:42.222330 containerd[1631]: time="2026-04-16T23:59:42.220309244Z" level=info msg="Container b2a072ec2798af4dfaee7eb15618c935909bf76f9e4112dfb175ba65cecc4edb: CDI devices from CRI Config.CDIDevices: []"
Apr 16 23:59:42.233332 containerd[1631]: time="2026-04-16T23:59:42.233294475Z" level=info msg="CreateContainer within sandbox \"22b06b13be1418e25c3736de0b3ade4779a0c86a4e22194f896a61f62d8f018a\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"b2a072ec2798af4dfaee7eb15618c935909bf76f9e4112dfb175ba65cecc4edb\""
Apr 16 23:59:42.233906 containerd[1631]: time="2026-04-16T23:59:42.233873352Z" level=info msg="StartContainer for \"b2a072ec2798af4dfaee7eb15618c935909bf76f9e4112dfb175ba65cecc4edb\""
Apr 16 23:59:42.234502 containerd[1631]: time="2026-04-16T23:59:42.234476189Z" level=info msg="connecting to shim b2a072ec2798af4dfaee7eb15618c935909bf76f9e4112dfb175ba65cecc4edb" address="unix:///run/containerd/s/b48ecf05616dfb80a2e01bc7eb5de67e363c6fd89ab59dd7b4aedc87e0c50bdb" protocol=ttrpc version=3
Apr 16 23:59:42.265210 systemd[1]: Started cri-containerd-b2a072ec2798af4dfaee7eb15618c935909bf76f9e4112dfb175ba65cecc4edb.scope - libcontainer container b2a072ec2798af4dfaee7eb15618c935909bf76f9e4112dfb175ba65cecc4edb.
Apr 16 23:59:42.297964 systemd[1]: cri-containerd-b2a072ec2798af4dfaee7eb15618c935909bf76f9e4112dfb175ba65cecc4edb.scope: Deactivated successfully.
Apr 16 23:59:42.301043 containerd[1631]: time="2026-04-16T23:59:42.301002547Z" level=info msg="received container exit event container_id:\"b2a072ec2798af4dfaee7eb15618c935909bf76f9e4112dfb175ba65cecc4edb\" id:\"b2a072ec2798af4dfaee7eb15618c935909bf76f9e4112dfb175ba65cecc4edb\" pid:4728 exited_at:{seconds:1776383982 nanos:299466915}"
Apr 16 23:59:42.302321 containerd[1631]: time="2026-04-16T23:59:42.302266130Z" level=info msg="StartContainer for \"b2a072ec2798af4dfaee7eb15618c935909bf76f9e4112dfb175ba65cecc4edb\" returns successfully"
Apr 16 23:59:42.921605 kubelet[2810]: E0416 23:59:42.921531 2810 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 23:59:43.198445 containerd[1631]: time="2026-04-16T23:59:43.198399810Z" level=info msg="CreateContainer within sandbox \"22b06b13be1418e25c3736de0b3ade4779a0c86a4e22194f896a61f62d8f018a\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Apr 16 23:59:43.209170 containerd[1631]: time="2026-04-16T23:59:43.209142714Z" level=info msg="Container 6c33b4dba008cdc69a685737a7d9193d88a8681cc99aca1819e0954611b937a2: CDI devices from CRI Config.CDIDevices: []"
Apr 16 23:59:43.213228 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b2a072ec2798af4dfaee7eb15618c935909bf76f9e4112dfb175ba65cecc4edb-rootfs.mount: Deactivated successfully.
Apr 16 23:59:43.218449 containerd[1631]: time="2026-04-16T23:59:43.218428746Z" level=info msg="CreateContainer within sandbox \"22b06b13be1418e25c3736de0b3ade4779a0c86a4e22194f896a61f62d8f018a\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"6c33b4dba008cdc69a685737a7d9193d88a8681cc99aca1819e0954611b937a2\""
Apr 16 23:59:43.218969 containerd[1631]: time="2026-04-16T23:59:43.218951413Z" level=info msg="StartContainer for \"6c33b4dba008cdc69a685737a7d9193d88a8681cc99aca1819e0954611b937a2\""
Apr 16 23:59:43.219770 containerd[1631]: time="2026-04-16T23:59:43.219732989Z" level=info msg="connecting to shim 6c33b4dba008cdc69a685737a7d9193d88a8681cc99aca1819e0954611b937a2" address="unix:///run/containerd/s/b48ecf05616dfb80a2e01bc7eb5de67e363c6fd89ab59dd7b4aedc87e0c50bdb" protocol=ttrpc version=3
Apr 16 23:59:43.240225 systemd[1]: Started cri-containerd-6c33b4dba008cdc69a685737a7d9193d88a8681cc99aca1819e0954611b937a2.scope - libcontainer container 6c33b4dba008cdc69a685737a7d9193d88a8681cc99aca1819e0954611b937a2.
Apr 16 23:59:43.278482 containerd[1631]: time="2026-04-16T23:59:43.278436113Z" level=info msg="StartContainer for \"6c33b4dba008cdc69a685737a7d9193d88a8681cc99aca1819e0954611b937a2\" returns successfully"
Apr 16 23:59:43.628138 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-vaes-avx10_512))
Apr 16 23:59:44.225900 kubelet[2810]: I0416 23:59:44.225757 2810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-95mdd" podStartSLOduration=5.22569444 podStartE2EDuration="5.22569444s" podCreationTimestamp="2026-04-16 23:59:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 23:59:44.223081843 +0000 UTC m=+116.465456518" watchObservedRunningTime="2026-04-16 23:59:44.22569444 +0000 UTC m=+116.468069085"
Apr 16 23:59:46.372100 systemd-networkd[1489]: lxc_health: Link UP
Apr 16 23:59:46.383733 systemd-networkd[1489]: lxc_health: Gained carrier
Apr 16 23:59:47.573462 systemd-networkd[1489]: lxc_health: Gained IPv6LL
Apr 16 23:59:47.840857 containerd[1631]: time="2026-04-16T23:59:47.840731438Z" level=info msg="StopPodSandbox for \"cad2566d1dabb86801d5361fa53d42c485141d0c1dedff3047a278f1fbf87ada\""
Apr 16 23:59:47.843187 containerd[1631]: time="2026-04-16T23:59:47.841400665Z" level=info msg="TearDown network for sandbox \"cad2566d1dabb86801d5361fa53d42c485141d0c1dedff3047a278f1fbf87ada\" successfully"
Apr 16 23:59:47.843187 containerd[1631]: time="2026-04-16T23:59:47.841412164Z" level=info msg="StopPodSandbox for \"cad2566d1dabb86801d5361fa53d42c485141d0c1dedff3047a278f1fbf87ada\" returns successfully"
Apr 16 23:59:47.843187 containerd[1631]: time="2026-04-16T23:59:47.842463509Z" level=info msg="RemovePodSandbox for \"cad2566d1dabb86801d5361fa53d42c485141d0c1dedff3047a278f1fbf87ada\""
Apr 16 23:59:47.843187 containerd[1631]: time="2026-04-16T23:59:47.842477659Z" level=info msg="Forcibly stopping sandbox \"cad2566d1dabb86801d5361fa53d42c485141d0c1dedff3047a278f1fbf87ada\""
Apr 16 23:59:47.843187 containerd[1631]: time="2026-04-16T23:59:47.842519329Z" level=info msg="TearDown network for sandbox \"cad2566d1dabb86801d5361fa53d42c485141d0c1dedff3047a278f1fbf87ada\" successfully"
Apr 16 23:59:47.843944 containerd[1631]: time="2026-04-16T23:59:47.843918922Z" level=info msg="Ensure that sandbox cad2566d1dabb86801d5361fa53d42c485141d0c1dedff3047a278f1fbf87ada in task-service has been cleanup successfully"
Apr 16 23:59:47.848687 containerd[1631]: time="2026-04-16T23:59:47.848672749Z" level=info msg="RemovePodSandbox \"cad2566d1dabb86801d5361fa53d42c485141d0c1dedff3047a278f1fbf87ada\" returns successfully"
Apr 16 23:59:47.848994 containerd[1631]: time="2026-04-16T23:59:47.848980608Z" level=info msg="StopPodSandbox for \"71a040ad5a56a6f893437f67b5ec47090ddcf52a435e388c7a47962efa3753af\""
Apr 16 23:59:47.849096 containerd[1631]: time="2026-04-16T23:59:47.849085267Z" level=info msg="TearDown network for sandbox \"71a040ad5a56a6f893437f67b5ec47090ddcf52a435e388c7a47962efa3753af\" successfully"
Apr 16 23:59:47.849147 containerd[1631]: time="2026-04-16T23:59:47.849139667Z" level=info msg="StopPodSandbox for \"71a040ad5a56a6f893437f67b5ec47090ddcf52a435e388c7a47962efa3753af\" returns successfully"
Apr 16 23:59:47.849404 containerd[1631]: time="2026-04-16T23:59:47.849392446Z" level=info msg="RemovePodSandbox for \"71a040ad5a56a6f893437f67b5ec47090ddcf52a435e388c7a47962efa3753af\""
Apr 16 23:59:47.849454 containerd[1631]: time="2026-04-16T23:59:47.849446315Z" level=info msg="Forcibly stopping sandbox \"71a040ad5a56a6f893437f67b5ec47090ddcf52a435e388c7a47962efa3753af\""
Apr 16 23:59:47.849515 containerd[1631]: time="2026-04-16T23:59:47.849507895Z" level=info msg="TearDown network for sandbox \"71a040ad5a56a6f893437f67b5ec47090ddcf52a435e388c7a47962efa3753af\" successfully"
Apr 16 23:59:47.850657 containerd[1631]: time="2026-04-16T23:59:47.850544250Z" level=info msg="Ensure that sandbox 71a040ad5a56a6f893437f67b5ec47090ddcf52a435e388c7a47962efa3753af in task-service has been cleanup successfully"
Apr 16 23:59:47.853416 containerd[1631]: time="2026-04-16T23:59:47.853402176Z" level=info msg="RemovePodSandbox \"71a040ad5a56a6f893437f67b5ec47090ddcf52a435e388c7a47962efa3753af\" returns successfully"
Apr 16 23:59:52.736289 sshd[4624]: Connection closed by 4.175.71.9 port 55420
Apr 16 23:59:52.737395 sshd-session[4566]: pam_unix(sshd:session): session closed for user core
Apr 16 23:59:52.743229 systemd[1]: sshd@25-77.42.22.14:22-4.175.71.9:55420.service: Deactivated successfully.
Apr 16 23:59:52.747108 systemd[1]: session-26.scope: Deactivated successfully.
Apr 16 23:59:52.749713 systemd-logind[1606]: Session 26 logged out. Waiting for processes to exit.
Apr 16 23:59:52.752909 systemd-logind[1606]: Removed session 26.
Apr 17 00:00:09.367434 systemd[1]: cri-containerd-c2813e8c39716c82c86835a0b9cbf77da7215e46d0e519227c636e6f4c3c5b53.scope: Deactivated successfully.
Apr 17 00:00:09.367717 systemd[1]: cri-containerd-c2813e8c39716c82c86835a0b9cbf77da7215e46d0e519227c636e6f4c3c5b53.scope: Consumed 2.196s CPU time, 57.1M memory peak.
Apr 17 00:00:09.369433 containerd[1631]: time="2026-04-17T00:00:09.369403776Z" level=info msg="received container exit event container_id:\"c2813e8c39716c82c86835a0b9cbf77da7215e46d0e519227c636e6f4c3c5b53\" id:\"c2813e8c39716c82c86835a0b9cbf77da7215e46d0e519227c636e6f4c3c5b53\" pid:2638 exit_status:1 exited_at:{seconds:1776384009 nanos:369107577}"
Apr 17 00:00:09.389430 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c2813e8c39716c82c86835a0b9cbf77da7215e46d0e519227c636e6f4c3c5b53-rootfs.mount: Deactivated successfully.
Apr 17 00:00:09.784447 kubelet[2810]: E0417 00:00:09.784208 2810 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:50224->10.0.0.2:2379: read: connection timed out"
Apr 17 00:00:10.264371 kubelet[2810]: I0417 00:00:10.263837 2810 scope.go:117] "RemoveContainer" containerID="c2813e8c39716c82c86835a0b9cbf77da7215e46d0e519227c636e6f4c3c5b53"
Apr 17 00:00:10.266376 containerd[1631]: time="2026-04-17T00:00:10.266307186Z" level=info msg="CreateContainer within sandbox \"4cb2f1bd8f5f5329406e48bc8fab202d1d8d2c92d56eeeb1b944070138069c91\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Apr 17 00:00:10.280170 containerd[1631]: time="2026-04-17T00:00:10.279406730Z" level=info msg="Container 0d28a606be1846057e28b03e6ee66b832343ba887ea47b375622c0c9d0d5ea5b: CDI devices from CRI Config.CDIDevices: []"
Apr 17 00:00:10.293534 containerd[1631]: time="2026-04-17T00:00:10.293482510Z" level=info msg="CreateContainer within sandbox \"4cb2f1bd8f5f5329406e48bc8fab202d1d8d2c92d56eeeb1b944070138069c91\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"0d28a606be1846057e28b03e6ee66b832343ba887ea47b375622c0c9d0d5ea5b\""
Apr 17 00:00:10.294028 containerd[1631]: time="2026-04-17T00:00:10.293989908Z" level=info msg="StartContainer for \"0d28a606be1846057e28b03e6ee66b832343ba887ea47b375622c0c9d0d5ea5b\""
Apr 17 00:00:10.295498 containerd[1631]: time="2026-04-17T00:00:10.295456453Z" level=info msg="connecting to shim 0d28a606be1846057e28b03e6ee66b832343ba887ea47b375622c0c9d0d5ea5b" address="unix:///run/containerd/s/b13b3d2cf5dcddcd1dfbe12c11dd037abf2fff376a96b247997f5dac556ff8e4" protocol=ttrpc version=3
Apr 17 00:00:10.331319 systemd[1]: Started cri-containerd-0d28a606be1846057e28b03e6ee66b832343ba887ea47b375622c0c9d0d5ea5b.scope - libcontainer container 0d28a606be1846057e28b03e6ee66b832343ba887ea47b375622c0c9d0d5ea5b.
Apr 17 00:00:10.389948 containerd[1631]: time="2026-04-17T00:00:10.389883746Z" level=info msg="StartContainer for \"0d28a606be1846057e28b03e6ee66b832343ba887ea47b375622c0c9d0d5ea5b\" returns successfully"
Apr 17 00:00:14.877386 systemd[1]: cri-containerd-07d476e7b77515fbaba8754b02538963aaaf475c80d27f37346d2fefeeaacefa.scope: Deactivated successfully.
Apr 17 00:00:14.878905 systemd[1]: cri-containerd-07d476e7b77515fbaba8754b02538963aaaf475c80d27f37346d2fefeeaacefa.scope: Consumed 1.280s CPU time, 22M memory peak.
Apr 17 00:00:14.881353 containerd[1631]: time="2026-04-17T00:00:14.881206858Z" level=info msg="received container exit event container_id:\"07d476e7b77515fbaba8754b02538963aaaf475c80d27f37346d2fefeeaacefa\" id:\"07d476e7b77515fbaba8754b02538963aaaf475c80d27f37346d2fefeeaacefa\" pid:2664 exit_status:1 exited_at:{seconds:1776384014 nanos:879883562}"
Apr 17 00:00:14.924253 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-07d476e7b77515fbaba8754b02538963aaaf475c80d27f37346d2fefeeaacefa-rootfs.mount: Deactivated successfully.
Apr 17 00:00:15.031863 kubelet[2810]: E0417 00:00:15.031645 2810 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:50024->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4459-2-4-n-3f94367fd3.18a6fbd72a45512f kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4459-2-4-n-3f94367fd3,UID:9514f271368628a7f40a1c49d6262c57,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4459-2-4-n-3f94367fd3,},FirstTimestamp:2026-04-17 00:00:04.589965615 +0000 UTC m=+136.832340260,LastTimestamp:2026-04-17 00:00:04.589965615 +0000 UTC m=+136.832340260,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459-2-4-n-3f94367fd3,}"
Apr 17 00:00:15.280587 kubelet[2810]: I0417 00:00:15.280538 2810 scope.go:117] "RemoveContainer" containerID="07d476e7b77515fbaba8754b02538963aaaf475c80d27f37346d2fefeeaacefa"
Apr 17 00:00:15.283278 containerd[1631]: time="2026-04-17T00:00:15.283228073Z" level=info msg="CreateContainer within sandbox \"05b1fce90064c4523e4b24ce4756974667b9f912783baa7fc61d2daf8bf02e3e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Apr 17 00:00:15.294453 containerd[1631]: time="2026-04-17T00:00:15.294403596Z" level=info msg="Container 8f375333b673279616a576bd180bbeb603ce16b515518205fdb05248d7975ef7: CDI devices from CRI Config.CDIDevices: []"
Apr 17 00:00:15.307016 containerd[1631]: time="2026-04-17T00:00:15.306892503Z" level=info msg="CreateContainer within sandbox \"05b1fce90064c4523e4b24ce4756974667b9f912783baa7fc61d2daf8bf02e3e\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"8f375333b673279616a576bd180bbeb603ce16b515518205fdb05248d7975ef7\""
Apr 17 00:00:15.307796 containerd[1631]: time="2026-04-17T00:00:15.307750641Z" level=info msg="StartContainer for \"8f375333b673279616a576bd180bbeb603ce16b515518205fdb05248d7975ef7\""
Apr 17 00:00:15.309501 containerd[1631]: time="2026-04-17T00:00:15.309441685Z" level=info msg="connecting to shim 8f375333b673279616a576bd180bbeb603ce16b515518205fdb05248d7975ef7" address="unix:///run/containerd/s/8e82e3a1a7ce8e420653345306ee2b3eda7ecb76fa00e72e998b1661ce875aa0" protocol=ttrpc version=3
Apr 17 00:00:15.343226 systemd[1]: Started cri-containerd-8f375333b673279616a576bd180bbeb603ce16b515518205fdb05248d7975ef7.scope - libcontainer container 8f375333b673279616a576bd180bbeb603ce16b515518205fdb05248d7975ef7.
Apr 17 00:00:15.399855 containerd[1631]: time="2026-04-17T00:00:15.399782039Z" level=info msg="StartContainer for \"8f375333b673279616a576bd180bbeb603ce16b515518205fdb05248d7975ef7\" returns successfully"
Apr 17 00:00:19.785775 kubelet[2810]: E0417 00:00:19.785288 2810 controller.go:195] "Failed to update lease" err="Put \"https://77.42.22.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-4-n-3f94367fd3?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Apr 17 00:00:20.728558 kubelet[2810]: I0417 00:00:20.728480 2810 status_manager.go:895] "Failed to get status for pod" podUID="489920b315fb52d002027ba533ed98f2" pod="kube-system/kube-controller-manager-ci-4459-2-4-n-3f94367fd3" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:50144->10.0.0.2:2379: read: connection timed out"
Apr 17 00:00:29.786493 kubelet[2810]: E0417 00:00:29.785867 2810 controller.go:195] "Failed to update lease" err="Put \"https://77.42.22.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-4-n-3f94367fd3?timeout=10s\": context deadline exceeded"