Apr 23 23:13:26.789888 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Apr 23 23:13:26.789911 kernel: Linux version 6.12.81-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Thu Apr 23 21:57:58 -00 2026
Apr 23 23:13:26.789921 kernel: KASLR enabled
Apr 23 23:13:26.789927 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II
Apr 23 23:13:26.789933 kernel: efi: SMBIOS 3.0=0x139ed0000 MEMATTR=0x1390b8118 ACPI 2.0=0x136760018 RNG=0x13676e918 MEMRESERVE=0x136b41218
Apr 23 23:13:26.789939 kernel: random: crng init done
Apr 23 23:13:26.789946 kernel: secureboot: Secure boot disabled
Apr 23 23:13:26.789951 kernel: ACPI: Early table checksum verification disabled
Apr 23 23:13:26.789957 kernel: ACPI: RSDP 0x0000000136760018 000024 (v02 BOCHS )
Apr 23 23:13:26.789964 kernel: ACPI: XSDT 0x000000013676FE98 00006C (v01 BOCHS BXPC 00000001 01000013)
Apr 23 23:13:26.789971 kernel: ACPI: FACP 0x000000013676FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Apr 23 23:13:26.789977 kernel: ACPI: DSDT 0x0000000136767518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Apr 23 23:13:26.789983 kernel: ACPI: APIC 0x000000013676FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Apr 23 23:13:26.789989 kernel: ACPI: PPTT 0x000000013676FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Apr 23 23:13:26.789997 kernel: ACPI: GTDT 0x000000013676D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Apr 23 23:13:26.790005 kernel: ACPI: MCFG 0x000000013676FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 23 23:13:26.790011 kernel: ACPI: SPCR 0x000000013676E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Apr 23 23:13:26.790018 kernel: ACPI: DBG2 0x000000013676E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Apr 23 23:13:26.790024 kernel: ACPI: IORT 0x000000013676E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 23 23:13:26.790030 kernel: ACPI: BGRT 0x000000013676E798 000038 (v01 INTEL EDK2 00000002 01000013)
Apr 23 23:13:26.790037 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600
Apr 23 23:13:26.790043 kernel: ACPI: Use ACPI SPCR as default console: Yes
Apr 23 23:13:26.790049 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff]
Apr 23 23:13:26.790056 kernel: NODE_DATA(0) allocated [mem 0x13967da00-0x139684fff]
Apr 23 23:13:26.790062 kernel: Zone ranges:
Apr 23 23:13:26.790068 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Apr 23 23:13:26.790075 kernel: DMA32 empty
Apr 23 23:13:26.790082 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff]
Apr 23 23:13:26.790088 kernel: Device empty
Apr 23 23:13:26.790094 kernel: Movable zone start for each node
Apr 23 23:13:26.790101 kernel: Early memory node ranges
Apr 23 23:13:26.790107 kernel: node 0: [mem 0x0000000040000000-0x000000013666ffff]
Apr 23 23:13:26.790113 kernel: node 0: [mem 0x0000000136670000-0x000000013667ffff]
Apr 23 23:13:26.790120 kernel: node 0: [mem 0x0000000136680000-0x000000013676ffff]
Apr 23 23:13:26.790126 kernel: node 0: [mem 0x0000000136770000-0x0000000136b3ffff]
Apr 23 23:13:26.790132 kernel: node 0: [mem 0x0000000136b40000-0x0000000139e1ffff]
Apr 23 23:13:26.790138 kernel: node 0: [mem 0x0000000139e20000-0x0000000139eaffff]
Apr 23 23:13:26.790145 kernel: node 0: [mem 0x0000000139eb0000-0x0000000139ebffff]
Apr 23 23:13:26.790152 kernel: node 0: [mem 0x0000000139ec0000-0x0000000139fdffff]
Apr 23 23:13:26.790159 kernel: node 0: [mem 0x0000000139fe0000-0x0000000139ffffff]
Apr 23 23:13:26.790168 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff]
Apr 23 23:13:26.790174 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
Apr 23 23:13:26.790181 kernel: cma: Reserved 16 MiB at 0x00000000ff000000 on node -1
Apr 23 23:13:26.790189 kernel: psci: probing for conduit method from ACPI.
Apr 23 23:13:26.790196 kernel: psci: PSCIv1.1 detected in firmware.
Apr 23 23:13:26.790202 kernel: psci: Using standard PSCI v0.2 function IDs
Apr 23 23:13:26.790209 kernel: psci: Trusted OS migration not required
Apr 23 23:13:26.790216 kernel: psci: SMC Calling Convention v1.1
Apr 23 23:13:26.790223 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Apr 23 23:13:26.790230 kernel: percpu: Embedded 33 pages/cpu s97752 r8192 d29224 u135168
Apr 23 23:13:26.790236 kernel: pcpu-alloc: s97752 r8192 d29224 u135168 alloc=33*4096
Apr 23 23:13:26.790243 kernel: pcpu-alloc: [0] 0 [0] 1
Apr 23 23:13:26.790250 kernel: Detected PIPT I-cache on CPU0
Apr 23 23:13:26.790257 kernel: CPU features: detected: GIC system register CPU interface
Apr 23 23:13:26.790264 kernel: CPU features: detected: Spectre-v4
Apr 23 23:13:26.790271 kernel: CPU features: detected: Spectre-BHB
Apr 23 23:13:26.790311 kernel: CPU features: kernel page table isolation forced ON by KASLR
Apr 23 23:13:26.790319 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Apr 23 23:13:26.790326 kernel: CPU features: detected: ARM erratum 1418040
Apr 23 23:13:26.790333 kernel: CPU features: detected: SSBS not fully self-synchronizing
Apr 23 23:13:26.790340 kernel: alternatives: applying boot alternatives
Apr 23 23:13:26.790348 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=8669c84e6bfac0c003f3ced682d9b5c0fda27fc2948639441be65941607b4c3d
Apr 23 23:13:26.790355 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 23 23:13:26.790361 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 23 23:13:26.790368 kernel: Fallback order for Node 0: 0
Apr 23 23:13:26.790377 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1024000
Apr 23 23:13:26.790384 kernel: Policy zone: Normal
Apr 23 23:13:26.790391 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 23 23:13:26.790398 kernel: software IO TLB: area num 2.
Apr 23 23:13:26.790404 kernel: software IO TLB: mapped [mem 0x00000000fb000000-0x00000000ff000000] (64MB)
Apr 23 23:13:26.790411 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Apr 23 23:13:26.790418 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 23 23:13:26.790425 kernel: rcu: RCU event tracing is enabled.
Apr 23 23:13:26.790432 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Apr 23 23:13:26.790439 kernel: Trampoline variant of Tasks RCU enabled.
Apr 23 23:13:26.790446 kernel: Tracing variant of Tasks RCU enabled.
Apr 23 23:13:26.790453 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 23 23:13:26.790461 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Apr 23 23:13:26.790468 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 23 23:13:26.790475 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 23 23:13:26.790481 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Apr 23 23:13:26.790488 kernel: GICv3: 256 SPIs implemented
Apr 23 23:13:26.790495 kernel: GICv3: 0 Extended SPIs implemented
Apr 23 23:13:26.790502 kernel: Root IRQ handler: gic_handle_irq
Apr 23 23:13:26.790509 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Apr 23 23:13:26.790515 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Apr 23 23:13:26.790522 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Apr 23 23:13:26.790529 kernel: ITS [mem 0x08080000-0x0809ffff]
Apr 23 23:13:26.790537 kernel: ITS@0x0000000008080000: allocated 8192 Devices @100100000 (indirect, esz 8, psz 64K, shr 1)
Apr 23 23:13:26.790544 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @100110000 (flat, esz 8, psz 64K, shr 1)
Apr 23 23:13:26.790551 kernel: GICv3: using LPI property table @0x0000000100120000
Apr 23 23:13:26.790558 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000100130000
Apr 23 23:13:26.790565 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 23 23:13:26.790571 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Apr 23 23:13:26.790578 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Apr 23 23:13:26.790585 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Apr 23 23:13:26.790592 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Apr 23 23:13:26.790599 kernel: Console: colour dummy device 80x25
Apr 23 23:13:26.790606 kernel: ACPI: Core revision 20240827
Apr 23 23:13:26.790614 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Apr 23 23:13:26.790621 kernel: pid_max: default: 32768 minimum: 301
Apr 23 23:13:26.790628 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Apr 23 23:13:26.790635 kernel: landlock: Up and running.
Apr 23 23:13:26.790642 kernel: SELinux: Initializing.
Apr 23 23:13:26.790649 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 23 23:13:26.790656 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 23 23:13:26.790663 kernel: rcu: Hierarchical SRCU implementation.
Apr 23 23:13:26.790670 kernel: rcu: Max phase no-delay instances is 400.
Apr 23 23:13:26.790679 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Apr 23 23:13:26.790686 kernel: Remapping and enabling EFI services.
Apr 23 23:13:26.790693 kernel: smp: Bringing up secondary CPUs ...
Apr 23 23:13:26.790699 kernel: Detected PIPT I-cache on CPU1
Apr 23 23:13:26.790723 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Apr 23 23:13:26.790730 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100140000
Apr 23 23:13:26.790737 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Apr 23 23:13:26.790744 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Apr 23 23:13:26.790751 kernel: smp: Brought up 1 node, 2 CPUs
Apr 23 23:13:26.790758 kernel: SMP: Total of 2 processors activated.
Apr 23 23:13:26.790772 kernel: CPU: All CPU(s) started at EL1
Apr 23 23:13:26.790780 kernel: CPU features: detected: 32-bit EL0 Support
Apr 23 23:13:26.790788 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Apr 23 23:13:26.790796 kernel: CPU features: detected: Common not Private translations
Apr 23 23:13:26.790803 kernel: CPU features: detected: CRC32 instructions
Apr 23 23:13:26.790811 kernel: CPU features: detected: Enhanced Virtualization Traps
Apr 23 23:13:26.790818 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Apr 23 23:13:26.790826 kernel: CPU features: detected: LSE atomic instructions
Apr 23 23:13:26.790834 kernel: CPU features: detected: Privileged Access Never
Apr 23 23:13:26.790841 kernel: CPU features: detected: RAS Extension Support
Apr 23 23:13:26.790848 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Apr 23 23:13:26.790856 kernel: alternatives: applying system-wide alternatives
Apr 23 23:13:26.790863 kernel: CPU features: detected: Hardware dirty bit management on CPU0-1
Apr 23 23:13:26.790871 kernel: Memory: 3858780K/4096000K available (11200K kernel code, 2458K rwdata, 9092K rodata, 39552K init, 1038K bss, 215732K reserved, 16384K cma-reserved)
Apr 23 23:13:26.790878 kernel: devtmpfs: initialized
Apr 23 23:13:26.790886 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 23 23:13:26.790895 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Apr 23 23:13:26.790902 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Apr 23 23:13:26.790909 kernel: 0 pages in range for non-PLT usage
Apr 23 23:13:26.790917 kernel: 508384 pages in range for PLT usage
Apr 23 23:13:26.790924 kernel: pinctrl core: initialized pinctrl subsystem
Apr 23 23:13:26.790931 kernel: SMBIOS 3.0.0 present.
Apr 23 23:13:26.790938 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017
Apr 23 23:13:26.790946 kernel: DMI: Memory slots populated: 1/1
Apr 23 23:13:26.790953 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 23 23:13:26.790962 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Apr 23 23:13:26.790969 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Apr 23 23:13:26.790977 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Apr 23 23:13:26.790984 kernel: audit: initializing netlink subsys (disabled)
Apr 23 23:13:26.790991 kernel: audit: type=2000 audit(0.013:1): state=initialized audit_enabled=0 res=1
Apr 23 23:13:26.790999 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 23 23:13:26.791006 kernel: cpuidle: using governor menu
Apr 23 23:13:26.791014 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Apr 23 23:13:26.791021 kernel: ASID allocator initialised with 32768 entries
Apr 23 23:13:26.791029 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 23 23:13:26.791037 kernel: Serial: AMBA PL011 UART driver
Apr 23 23:13:26.791044 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 23 23:13:26.791052 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Apr 23 23:13:26.791059 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Apr 23 23:13:26.791066 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Apr 23 23:13:26.791074 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 23 23:13:26.791081 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Apr 23 23:13:26.791088 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Apr 23 23:13:26.791097 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Apr 23 23:13:26.791104 kernel: ACPI: Added _OSI(Module Device)
Apr 23 23:13:26.791111 kernel: ACPI: Added _OSI(Processor Device)
Apr 23 23:13:26.791119 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 23 23:13:26.791126 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 23 23:13:26.791133 kernel: ACPI: Interpreter enabled
Apr 23 23:13:26.791140 kernel: ACPI: Using GIC for interrupt routing
Apr 23 23:13:26.791148 kernel: ACPI: MCFG table detected, 1 entries
Apr 23 23:13:26.791155 kernel: ACPI: CPU0 has been hot-added
Apr 23 23:13:26.791162 kernel: ACPI: CPU1 has been hot-added
Apr 23 23:13:26.791171 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Apr 23 23:13:26.791178 kernel: printk: legacy console [ttyAMA0] enabled
Apr 23 23:13:26.791186 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 23 23:13:26.791334 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 23 23:13:26.791404 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Apr 23 23:13:26.791466 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Apr 23 23:13:26.791527 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Apr 23 23:13:26.791590 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Apr 23 23:13:26.791599 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Apr 23 23:13:26.791607 kernel: PCI host bridge to bus 0000:00
Apr 23 23:13:26.791678 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Apr 23 23:13:26.792171 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Apr 23 23:13:26.792241 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Apr 23 23:13:26.792346 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 23 23:13:26.792442 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
Apr 23 23:13:26.792518 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000 conventional PCI endpoint
Apr 23 23:13:26.792582 kernel: pci 0000:00:01.0: BAR 1 [mem 0x11289000-0x11289fff]
Apr 23 23:13:26.792645 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000600000-0x8000603fff 64bit pref]
Apr 23 23:13:26.792743 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Apr 23 23:13:26.792813 kernel: pci 0000:00:02.0: BAR 0 [mem 0x11288000-0x11288fff]
Apr 23 23:13:26.792880 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Apr 23 23:13:26.792943 kernel: pci 0000:00:02.0: bridge window [mem 0x11000000-0x111fffff]
Apr 23 23:13:26.793004 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80000fffff 64bit pref]
Apr 23 23:13:26.793072 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Apr 23 23:13:26.793135 kernel: pci 0000:00:02.1: BAR 0 [mem 0x11287000-0x11287fff]
Apr 23 23:13:26.793197 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Apr 23 23:13:26.793259 kernel: pci 0000:00:02.1: bridge window [mem 0x10e00000-0x10ffffff]
Apr 23 23:13:26.793346 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Apr 23 23:13:26.793413 kernel: pci 0000:00:02.2: BAR 0 [mem 0x11286000-0x11286fff]
Apr 23 23:13:26.793474 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Apr 23 23:13:26.793537 kernel: pci 0000:00:02.2: bridge window [mem 0x10c00000-0x10dfffff]
Apr 23 23:13:26.793599 kernel: pci 0000:00:02.2: bridge window [mem 0x8000100000-0x80001fffff 64bit pref]
Apr 23 23:13:26.793669 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Apr 23 23:13:26.793789 kernel: pci 0000:00:02.3: BAR 0 [mem 0x11285000-0x11285fff]
Apr 23 23:13:26.793864 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Apr 23 23:13:26.793927 kernel: pci 0000:00:02.3: bridge window [mem 0x10a00000-0x10bfffff]
Apr 23 23:13:26.793989 kernel: pci 0000:00:02.3: bridge window [mem 0x8000200000-0x80002fffff 64bit pref]
Apr 23 23:13:26.794624 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Apr 23 23:13:26.794728 kernel: pci 0000:00:02.4: BAR 0 [mem 0x11284000-0x11284fff]
Apr 23 23:13:26.794805 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Apr 23 23:13:26.794868 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]
Apr 23 23:13:26.794937 kernel: pci 0000:00:02.4: bridge window [mem 0x8000300000-0x80003fffff 64bit pref]
Apr 23 23:13:26.795012 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Apr 23 23:13:26.795075 kernel: pci 0000:00:02.5: BAR 0 [mem 0x11283000-0x11283fff]
Apr 23 23:13:26.795137 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Apr 23 23:13:26.795198 kernel: pci 0000:00:02.5: bridge window [mem 0x10600000-0x107fffff]
Apr 23 23:13:26.795259 kernel: pci 0000:00:02.5: bridge window [mem 0x8000400000-0x80004fffff 64bit pref]
Apr 23 23:13:26.795376 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Apr 23 23:13:26.795449 kernel: pci 0000:00:02.6: BAR 0 [mem 0x11282000-0x11282fff]
Apr 23 23:13:26.795512 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Apr 23 23:13:26.795587 kernel: pci 0000:00:02.6: bridge window [mem 0x10400000-0x105fffff]
Apr 23 23:13:26.795660 kernel: pci 0000:00:02.6: bridge window [mem 0x8000500000-0x80005fffff 64bit pref]
Apr 23 23:13:26.795801 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Apr 23 23:13:26.795882 kernel: pci 0000:00:02.7: BAR 0 [mem 0x11281000-0x11281fff]
Apr 23 23:13:26.795956 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Apr 23 23:13:26.796017 kernel: pci 0000:00:02.7: bridge window [mem 0x10200000-0x103fffff]
Apr 23 23:13:26.796085 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Apr 23 23:13:26.796148 kernel: pci 0000:00:03.0: BAR 0 [mem 0x11280000-0x11280fff]
Apr 23 23:13:26.796209 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Apr 23 23:13:26.796270 kernel: pci 0000:00:03.0: bridge window [mem 0x10000000-0x101fffff]
Apr 23 23:13:26.796364 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002 conventional PCI endpoint
Apr 23 23:13:26.796433 kernel: pci 0000:00:04.0: BAR 0 [io 0x0000-0x0007]
Apr 23 23:13:26.796505 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000 PCIe Endpoint
Apr 23 23:13:26.796571 kernel: pci 0000:01:00.0: BAR 1 [mem 0x11000000-0x11000fff]
Apr 23 23:13:26.796635 kernel: pci 0000:01:00.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
Apr 23 23:13:26.796700 kernel: pci 0000:01:00.0: ROM [mem 0xfff80000-0xffffffff pref]
Apr 23 23:13:26.797890 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330 PCIe Endpoint
Apr 23 23:13:26.797961 kernel: pci 0000:02:00.0: BAR 0 [mem 0x10e00000-0x10e03fff 64bit]
Apr 23 23:13:26.798041 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000 PCIe Endpoint
Apr 23 23:13:26.798107 kernel: pci 0000:03:00.0: BAR 1 [mem 0x10c00000-0x10c00fff]
Apr 23 23:13:26.798172 kernel: pci 0000:03:00.0: BAR 4 [mem 0x8000100000-0x8000103fff 64bit pref]
Apr 23 23:13:26.798249 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00 PCIe Endpoint
Apr 23 23:13:26.798363 kernel: pci 0000:04:00.0: BAR 4 [mem 0x8000200000-0x8000203fff 64bit pref]
Apr 23 23:13:26.798440 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00 PCIe Endpoint
Apr 23 23:13:26.798513 kernel: pci 0000:05:00.0: BAR 1 [mem 0x10800000-0x10800fff]
Apr 23 23:13:26.798577 kernel: pci 0000:05:00.0: BAR 4 [mem 0x8000300000-0x8000303fff 64bit pref]
Apr 23 23:13:26.798653 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000 PCIe Endpoint
Apr 23 23:13:26.799376 kernel: pci 0000:06:00.0: BAR 1 [mem 0x10600000-0x10600fff]
Apr 23 23:13:26.799459 kernel: pci 0000:06:00.0: BAR 4 [mem 0x8000400000-0x8000403fff 64bit pref]
Apr 23 23:13:26.799535 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000 PCIe Endpoint
Apr 23 23:13:26.799601 kernel: pci 0000:07:00.0: BAR 1 [mem 0x10400000-0x10400fff]
Apr 23 23:13:26.799671 kernel: pci 0000:07:00.0: BAR 4 [mem 0x8000500000-0x8000503fff 64bit pref]
Apr 23 23:13:26.800081 kernel: pci 0000:07:00.0: ROM [mem 0xfff80000-0xffffffff pref]
Apr 23 23:13:26.800159 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
Apr 23 23:13:26.800223 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000
Apr 23 23:13:26.800338 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000
Apr 23 23:13:26.800417 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
Apr 23 23:13:26.800482 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
Apr 23 23:13:26.800551 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000
Apr 23 23:13:26.800618 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Apr 23 23:13:26.800680 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000
Apr 23 23:13:26.800765 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000
Apr 23 23:13:26.800833 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Apr 23 23:13:26.800895 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000
Apr 23 23:13:26.800956 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
Apr 23 23:13:26.801022 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Apr 23 23:13:26.801084 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000
Apr 23 23:13:26.801171 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff] to [bus 05] add_size 100000 add_align 100000
Apr 23 23:13:26.801265 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Apr 23 23:13:26.801352 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000
Apr 23 23:13:26.801416 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000
Apr 23 23:13:26.801481 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Apr 23 23:13:26.801547 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000
Apr 23 23:13:26.801608 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000
Apr 23 23:13:26.801673 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Apr 23 23:13:26.803247 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000
Apr 23 23:13:26.803390 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000
Apr 23 23:13:26.803479 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Apr 23 23:13:26.803558 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000
Apr 23 23:13:26.803633 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000
Apr 23 23:13:26.803715 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff]: assigned
Apr 23 23:13:26.803783 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]: assigned
Apr 23 23:13:26.803852 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff]: assigned
Apr 23 23:13:26.803914 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]: assigned
Apr 23 23:13:26.803981 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff]: assigned
Apr 23 23:13:26.804043 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]: assigned
Apr 23 23:13:26.804111 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff]: assigned
Apr 23 23:13:26.804174 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]: assigned
Apr 23 23:13:26.804238 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]: assigned
Apr 23 23:13:26.804317 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]: assigned
Apr 23 23:13:26.804384 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff]: assigned
Apr 23 23:13:26.804447 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]: assigned
Apr 23 23:13:26.804511 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff]: assigned
Apr 23 23:13:26.804576 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]: assigned
Apr 23 23:13:26.804638 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff]: assigned
Apr 23 23:13:26.806838 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]: assigned
Apr 23 23:13:26.806946 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff]: assigned
Apr 23 23:13:26.807041 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref]: assigned
Apr 23 23:13:26.807119 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8001200000-0x8001203fff 64bit pref]: assigned
Apr 23 23:13:26.807197 kernel: pci 0000:00:01.0: BAR 1 [mem 0x11200000-0x11200fff]: assigned
Apr 23 23:13:26.807274 kernel: pci 0000:00:02.0: BAR 0 [mem 0x11201000-0x11201fff]: assigned
Apr 23 23:13:26.807399 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]: assigned
Apr 23 23:13:26.807477 kernel: pci 0000:00:02.1: BAR 0 [mem 0x11202000-0x11202fff]: assigned
Apr 23 23:13:26.807547 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]: assigned
Apr 23 23:13:26.807616 kernel: pci 0000:00:02.2: BAR 0 [mem 0x11203000-0x11203fff]: assigned
Apr 23 23:13:26.807681 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]: assigned
Apr 23 23:13:26.807781 kernel: pci 0000:00:02.3: BAR 0 [mem 0x11204000-0x11204fff]: assigned
Apr 23 23:13:26.807859 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]: assigned
Apr 23 23:13:26.807942 kernel: pci 0000:00:02.4: BAR 0 [mem 0x11205000-0x11205fff]: assigned
Apr 23 23:13:26.808015 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]: assigned
Apr 23 23:13:26.808090 kernel: pci 0000:00:02.5: BAR 0 [mem 0x11206000-0x11206fff]: assigned
Apr 23 23:13:26.808166 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]: assigned
Apr 23 23:13:26.808235 kernel: pci 0000:00:02.6: BAR 0 [mem 0x11207000-0x11207fff]: assigned
Apr 23 23:13:26.808315 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]: assigned
Apr 23 23:13:26.808388 kernel: pci 0000:00:02.7: BAR 0 [mem 0x11208000-0x11208fff]: assigned
Apr 23 23:13:26.808453 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]: assigned
Apr 23 23:13:26.808526 kernel: pci 0000:00:03.0: BAR 0 [mem 0x11209000-0x11209fff]: assigned
Apr 23 23:13:26.808606 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff]: assigned
Apr 23 23:13:26.808677 kernel: pci 0000:00:04.0: BAR 0 [io 0xa000-0xa007]: assigned
Apr 23 23:13:26.809757 kernel: pci 0000:01:00.0: ROM [mem 0x10000000-0x1007ffff pref]: assigned
Apr 23 23:13:26.809840 kernel: pci 0000:01:00.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
Apr 23 23:13:26.809923 kernel: pci 0000:01:00.0: BAR 1 [mem 0x10080000-0x10080fff]: assigned
Apr 23 23:13:26.810006 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Apr 23 23:13:26.810159 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Apr 23 23:13:26.810237 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff]
Apr 23 23:13:26.810319 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]
Apr 23 23:13:26.810391 kernel: pci 0000:02:00.0: BAR 0 [mem 0x10200000-0x10203fff 64bit]: assigned
Apr 23 23:13:26.810456 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Apr 23 23:13:26.810521 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Apr 23 23:13:26.810588 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff]
Apr 23 23:13:26.810652 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]
Apr 23 23:13:26.810743 kernel: pci 0000:03:00.0: BAR 4 [mem 0x8000400000-0x8000403fff 64bit pref]: assigned
Apr 23 23:13:26.810812 kernel: pci 0000:03:00.0: BAR 1 [mem 0x10400000-0x10400fff]: assigned
Apr 23 23:13:26.810877 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Apr 23 23:13:26.810940 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Apr 23 23:13:26.811006 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff]
Apr 23 23:13:26.811068 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]
Apr 23 23:13:26.811138 kernel: pci 0000:04:00.0: BAR 4 [mem 0x8000600000-0x8000603fff 64bit pref]: assigned
Apr 23 23:13:26.811210 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Apr 23 23:13:26.811275 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Apr 23 23:13:26.811360 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff]
Apr 23 23:13:26.811425 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]
Apr 23 23:13:26.811499 kernel: pci 0000:05:00.0: BAR 4 [mem 0x8000800000-0x8000803fff 64bit pref]: assigned
Apr 23 23:13:26.811563 kernel: pci 0000:05:00.0: BAR 1 [mem 0x10800000-0x10800fff]: assigned
Apr 23 23:13:26.811626 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Apr 23 23:13:26.811697 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Apr 23 23:13:26.812091 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]
Apr 23 23:13:26.812172 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]
Apr 23 23:13:26.812260 kernel: pci 0000:06:00.0: BAR 4 [mem 0x8000a00000-0x8000a03fff 64bit pref]: assigned
Apr 23 23:13:26.812379 kernel: pci 0000:06:00.0: BAR 1 [mem 0x10a00000-0x10a00fff]: assigned
Apr 23 23:13:26.812446 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Apr 23 23:13:26.812522 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Apr 23 23:13:26.812601 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff]
Apr 23 23:13:26.812667 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]
Apr 23 23:13:26.812793 kernel: pci 0000:07:00.0: ROM [mem 0x10c00000-0x10c7ffff pref]: assigned
Apr 23 23:13:26.812862 kernel: pci 0000:07:00.0: BAR 4 [mem 0x8000c00000-0x8000c03fff 64bit pref]: assigned
Apr 23 23:13:26.812931 kernel: pci 0000:07:00.0: BAR 1 [mem 0x10c80000-0x10c80fff]: assigned
Apr 23 23:13:26.812996 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Apr 23 23:13:26.813061 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Apr 23 23:13:26.813123 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff]
Apr 23 23:13:26.813219 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]
Apr 23 23:13:26.813303 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Apr 23 23:13:26.813373 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Apr 23 23:13:26.813438 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff]
Apr 23 23:13:26.813536 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]
Apr 23 23:13:26.813613 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Apr 23 23:13:26.813676 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff]
Apr 23
23:13:26.813763 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff] Apr 23 23:13:26.813834 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref] Apr 23 23:13:26.813910 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Apr 23 23:13:26.813988 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Apr 23 23:13:26.814045 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Apr 23 23:13:26.814126 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff] Apr 23 23:13:26.814220 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff] Apr 23 23:13:26.814296 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref] Apr 23 23:13:26.814369 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff] Apr 23 23:13:26.814430 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff] Apr 23 23:13:26.814489 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref] Apr 23 23:13:26.814647 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff] Apr 23 23:13:26.814742 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff] Apr 23 23:13:26.814812 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref] Apr 23 23:13:26.814893 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff] Apr 23 23:13:26.814954 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff] Apr 23 23:13:26.815012 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref] Apr 23 23:13:26.815463 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff] Apr 23 23:13:26.815542 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff] Apr 23 23:13:26.815601 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref] Apr 23 23:13:26.815672 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff] Apr 23 23:13:26.815759 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff] Apr 23 23:13:26.815823 kernel: 
pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref] Apr 23 23:13:26.815902 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff] Apr 23 23:13:26.815962 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff] Apr 23 23:13:26.816024 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref] Apr 23 23:13:26.816091 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff] Apr 23 23:13:26.816153 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff] Apr 23 23:13:26.816211 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref] Apr 23 23:13:26.816288 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff] Apr 23 23:13:26.816354 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff] Apr 23 23:13:26.816417 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref] Apr 23 23:13:26.816428 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Apr 23 23:13:26.816436 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Apr 23 23:13:26.816446 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Apr 23 23:13:26.816455 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Apr 23 23:13:26.816463 kernel: iommu: Default domain type: Translated Apr 23 23:13:26.816471 kernel: iommu: DMA domain TLB invalidation policy: strict mode Apr 23 23:13:26.816479 kernel: efivars: Registered efivars operations Apr 23 23:13:26.816490 kernel: vgaarb: loaded Apr 23 23:13:26.816499 kernel: clocksource: Switched to clocksource arch_sys_counter Apr 23 23:13:26.816507 kernel: VFS: Disk quotas dquot_6.6.0 Apr 23 23:13:26.816515 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Apr 23 23:13:26.816525 kernel: pnp: PnP ACPI init Apr 23 23:13:26.816602 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Apr 23 23:13:26.816616 kernel: pnp: PnP ACPI: found 1 devices Apr 23 23:13:26.816624 kernel: NET: Registered PF_INET 
protocol family Apr 23 23:13:26.816632 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Apr 23 23:13:26.816640 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Apr 23 23:13:26.816648 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Apr 23 23:13:26.816656 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Apr 23 23:13:26.816666 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Apr 23 23:13:26.816674 kernel: TCP: Hash tables configured (established 32768 bind 32768) Apr 23 23:13:26.816682 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Apr 23 23:13:26.816690 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Apr 23 23:13:26.816697 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Apr 23 23:13:26.816792 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002) Apr 23 23:13:26.816805 kernel: PCI: CLS 0 bytes, default 64 Apr 23 23:13:26.816813 kernel: kvm [1]: HYP mode not available Apr 23 23:13:26.816821 kernel: Initialise system trusted keyrings Apr 23 23:13:26.816831 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Apr 23 23:13:26.816839 kernel: Key type asymmetric registered Apr 23 23:13:26.816847 kernel: Asymmetric key parser 'x509' registered Apr 23 23:13:26.816855 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Apr 23 23:13:26.816862 kernel: io scheduler mq-deadline registered Apr 23 23:13:26.816871 kernel: io scheduler kyber registered Apr 23 23:13:26.816878 kernel: io scheduler bfq registered Apr 23 23:13:26.816887 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Apr 23 23:13:26.816953 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50 Apr 23 23:13:26.817019 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50 Apr 23 23:13:26.817082 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ 
MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 23 23:13:26.817147 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51 Apr 23 23:13:26.817210 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51 Apr 23 23:13:26.817272 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 23 23:13:26.817353 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 52 Apr 23 23:13:26.817427 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52 Apr 23 23:13:26.817490 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 23 23:13:26.817565 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53 Apr 23 23:13:26.817628 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53 Apr 23 23:13:26.817691 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 23 23:13:26.817781 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54 Apr 23 23:13:26.817847 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54 Apr 23 23:13:26.817920 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 23 23:13:26.817989 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55 Apr 23 23:13:26.818056 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55 Apr 23 23:13:26.818130 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 23 23:13:26.818201 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56 Apr 23 23:13:26.818265 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56 Apr 23 23:13:26.818369 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ 
PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 23 23:13:26.818435 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57 Apr 23 23:13:26.818498 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57 Apr 23 23:13:26.818560 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 23 23:13:26.818574 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38 Apr 23 23:13:26.818637 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58 Apr 23 23:13:26.818699 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58 Apr 23 23:13:26.818793 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 23 23:13:26.818805 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Apr 23 23:13:26.818813 kernel: ACPI: button: Power Button [PWRB] Apr 23 23:13:26.818821 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Apr 23 23:13:26.818888 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002) Apr 23 23:13:26.818956 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002) Apr 23 23:13:26.818972 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Apr 23 23:13:26.818980 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Apr 23 23:13:26.819044 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001) Apr 23 23:13:26.819055 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A Apr 23 23:13:26.819063 kernel: thunder_xcv, ver 1.0 Apr 23 23:13:26.819071 kernel: thunder_bgx, ver 1.0 Apr 23 23:13:26.819078 kernel: nicpf, ver 1.0 Apr 23 23:13:26.819086 kernel: nicvf, ver 1.0 Apr 23 23:13:26.819175 kernel: rtc-efi rtc-efi.0: registered as rtc0 Apr 23 23:13:26.819248 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-04-23T23:13:26 UTC (1776986006) Apr 23 23:13:26.819260 kernel: hid: raw HID events 
driver (C) Jiri Kosina Apr 23 23:13:26.819270 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available Apr 23 23:13:26.819288 kernel: watchdog: NMI not fully supported Apr 23 23:13:26.819297 kernel: watchdog: Hard watchdog permanently disabled Apr 23 23:13:26.819310 kernel: NET: Registered PF_INET6 protocol family Apr 23 23:13:26.819318 kernel: Segment Routing with IPv6 Apr 23 23:13:26.819326 kernel: In-situ OAM (IOAM) with IPv6 Apr 23 23:13:26.819336 kernel: NET: Registered PF_PACKET protocol family Apr 23 23:13:26.819345 kernel: Key type dns_resolver registered Apr 23 23:13:26.819353 kernel: registered taskstats version 1 Apr 23 23:13:26.819361 kernel: Loading compiled-in X.509 certificates Apr 23 23:13:26.819369 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.81-flatcar: 1129832e4b4ea3c9ff0dc43e02ec7de2e4d9d907' Apr 23 23:13:26.819377 kernel: Demotion targets for Node 0: null Apr 23 23:13:26.819385 kernel: Key type .fscrypt registered Apr 23 23:13:26.819393 kernel: Key type fscrypt-provisioning registered Apr 23 23:13:26.819401 kernel: ima: No TPM chip found, activating TPM-bypass! Apr 23 23:13:26.819416 kernel: ima: Allocated hash algorithm: sha1 Apr 23 23:13:26.819425 kernel: ima: No architecture policies found Apr 23 23:13:26.819433 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Apr 23 23:13:26.819441 kernel: clk: Disabling unused clocks Apr 23 23:13:26.819449 kernel: PM: genpd: Disabling unused power domains Apr 23 23:13:26.819457 kernel: Warning: unable to open an initial console. Apr 23 23:13:26.819465 kernel: Freeing unused kernel memory: 39552K Apr 23 23:13:26.819474 kernel: Run /init as init process Apr 23 23:13:26.819482 kernel: with arguments: Apr 23 23:13:26.819492 kernel: /init Apr 23 23:13:26.819500 kernel: with environment: Apr 23 23:13:26.819508 kernel: HOME=/ Apr 23 23:13:26.819518 kernel: TERM=linux Apr 23 23:13:26.819528 systemd[1]: Successfully made /usr/ read-only. 
Apr 23 23:13:26.819546 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Apr 23 23:13:26.819556 systemd[1]: Detected virtualization kvm. Apr 23 23:13:26.819567 systemd[1]: Detected architecture arm64. Apr 23 23:13:26.819575 systemd[1]: Running in initrd. Apr 23 23:13:26.819584 systemd[1]: No hostname configured, using default hostname. Apr 23 23:13:26.819596 systemd[1]: Hostname set to . Apr 23 23:13:26.819604 systemd[1]: Initializing machine ID from VM UUID. Apr 23 23:13:26.819613 systemd[1]: Queued start job for default target initrd.target. Apr 23 23:13:26.819621 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 23 23:13:26.819630 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 23 23:13:26.819639 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Apr 23 23:13:26.819649 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 23 23:13:26.819658 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Apr 23 23:13:26.819667 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Apr 23 23:13:26.819677 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Apr 23 23:13:26.819686 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Apr 23 23:13:26.819695 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Apr 23 23:13:26.819715 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 23 23:13:26.819724 systemd[1]: Reached target paths.target - Path Units. Apr 23 23:13:26.819732 systemd[1]: Reached target slices.target - Slice Units. Apr 23 23:13:26.819741 systemd[1]: Reached target swap.target - Swaps. Apr 23 23:13:26.819749 systemd[1]: Reached target timers.target - Timer Units. Apr 23 23:13:26.819758 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Apr 23 23:13:26.819766 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 23 23:13:26.819774 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Apr 23 23:13:26.819783 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Apr 23 23:13:26.819794 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 23 23:13:26.819803 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 23 23:13:26.819811 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 23 23:13:26.819819 systemd[1]: Reached target sockets.target - Socket Units. Apr 23 23:13:26.819828 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Apr 23 23:13:26.819836 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 23 23:13:26.819844 systemd[1]: Finished network-cleanup.service - Network Cleanup. Apr 23 23:13:26.819853 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Apr 23 23:13:26.819863 systemd[1]: Starting systemd-fsck-usr.service... Apr 23 23:13:26.819871 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 23 23:13:26.819880 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... 
Apr 23 23:13:26.819888 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 23 23:13:26.819897 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Apr 23 23:13:26.819906 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 23 23:13:26.819941 systemd-journald[244]: Collecting audit messages is disabled. Apr 23 23:13:26.819964 systemd[1]: Finished systemd-fsck-usr.service. Apr 23 23:13:26.819972 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 23 23:13:26.819983 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Apr 23 23:13:26.819991 kernel: Bridge firewalling registered Apr 23 23:13:26.819999 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 23 23:13:26.820007 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 23 23:13:26.820017 systemd-journald[244]: Journal started Apr 23 23:13:26.820035 systemd-journald[244]: Runtime Journal (/run/log/journal/15f9acc6d1a24afc88112ba1149debbe) is 8M, max 76.5M, 68.5M free. Apr 23 23:13:26.785232 systemd-modules-load[246]: Inserted module 'overlay' Apr 23 23:13:26.809084 systemd-modules-load[246]: Inserted module 'br_netfilter' Apr 23 23:13:26.823010 systemd[1]: Started systemd-journald.service - Journal Service. Apr 23 23:13:26.825753 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 23 23:13:26.829885 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 23 23:13:26.834077 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 23 23:13:26.836901 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Apr 23 23:13:26.839323 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 23 23:13:26.843745 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 23 23:13:26.856989 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 23 23:13:26.860644 systemd-tmpfiles[270]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Apr 23 23:13:26.864791 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 23 23:13:26.868166 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 23 23:13:26.873764 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 23 23:13:26.875891 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Apr 23 23:13:26.906725 dracut-cmdline[286]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=8669c84e6bfac0c003f3ced682d9b5c0fda27fc2948639441be65941607b4c3d Apr 23 23:13:26.921621 systemd-resolved[283]: Positive Trust Anchors: Apr 23 23:13:26.922320 systemd-resolved[283]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 23 23:13:26.922352 systemd-resolved[283]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 23 23:13:26.928062 systemd-resolved[283]: Defaulting to hostname 'linux'. Apr 23 23:13:26.931865 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 23 23:13:26.934083 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 23 23:13:27.007754 kernel: SCSI subsystem initialized Apr 23 23:13:27.011733 kernel: Loading iSCSI transport class v2.0-870. Apr 23 23:13:27.019825 kernel: iscsi: registered transport (tcp) Apr 23 23:13:27.032770 kernel: iscsi: registered transport (qla4xxx) Apr 23 23:13:27.032859 kernel: QLogic iSCSI HBA Driver Apr 23 23:13:27.054522 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Apr 23 23:13:27.075595 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Apr 23 23:13:27.080021 systemd[1]: Reached target network-pre.target - Preparation for Network. Apr 23 23:13:27.129197 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Apr 23 23:13:27.130828 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... 
Apr 23 23:13:27.202762 kernel: raid6: neonx8 gen() 15691 MB/s Apr 23 23:13:27.219788 kernel: raid6: neonx4 gen() 15750 MB/s Apr 23 23:13:27.236755 kernel: raid6: neonx2 gen() 13167 MB/s Apr 23 23:13:27.253776 kernel: raid6: neonx1 gen() 10415 MB/s Apr 23 23:13:27.270792 kernel: raid6: int64x8 gen() 6873 MB/s Apr 23 23:13:27.287767 kernel: raid6: int64x4 gen() 7309 MB/s Apr 23 23:13:27.304787 kernel: raid6: int64x2 gen() 6076 MB/s Apr 23 23:13:27.321762 kernel: raid6: int64x1 gen() 5027 MB/s Apr 23 23:13:27.321844 kernel: raid6: using algorithm neonx4 gen() 15750 MB/s Apr 23 23:13:27.338787 kernel: raid6: .... xor() 12275 MB/s, rmw enabled Apr 23 23:13:27.338874 kernel: raid6: using neon recovery algorithm Apr 23 23:13:27.343934 kernel: xor: measuring software checksum speed Apr 23 23:13:27.343988 kernel: 8regs : 21647 MB/sec Apr 23 23:13:27.344012 kernel: 32regs : 21710 MB/sec Apr 23 23:13:27.344033 kernel: arm64_neon : 28089 MB/sec Apr 23 23:13:27.344759 kernel: xor: using function: arm64_neon (28089 MB/sec) Apr 23 23:13:27.397745 kernel: Btrfs loaded, zoned=no, fsverity=no Apr 23 23:13:27.404525 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Apr 23 23:13:27.407309 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 23 23:13:27.431640 systemd-udevd[494]: Using default interface naming scheme 'v255'. Apr 23 23:13:27.436778 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 23 23:13:27.441958 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Apr 23 23:13:27.472977 dracut-pre-trigger[503]: rd.md=0: removing MD RAID activation Apr 23 23:13:27.504815 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Apr 23 23:13:27.507564 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 23 23:13:27.572512 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. 
Apr 23 23:13:27.575497 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Apr 23 23:13:27.665735 kernel: virtio_scsi virtio5: 2/0/0 default/read/poll queues Apr 23 23:13:27.672014 kernel: scsi host0: Virtio SCSI HBA Apr 23 23:13:27.684273 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5 Apr 23 23:13:27.684357 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Apr 23 23:13:27.694159 kernel: ACPI: bus type USB registered Apr 23 23:13:27.694213 kernel: usbcore: registered new interface driver usbfs Apr 23 23:13:27.694224 kernel: usbcore: registered new interface driver hub Apr 23 23:13:27.699183 kernel: usbcore: registered new device driver usb Apr 23 23:13:27.705915 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 23 23:13:27.706030 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 23 23:13:27.708870 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Apr 23 23:13:27.710864 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 23 23:13:27.721868 kernel: sd 0:0:0:1: Power-on or device reset occurred Apr 23 23:13:27.722075 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB) Apr 23 23:13:27.723346 kernel: sd 0:0:0:1: [sda] Write Protect is off Apr 23 23:13:27.723488 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08 Apr 23 23:13:27.723575 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Apr 23 23:13:27.734775 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Apr 23 23:13:27.734824 kernel: GPT:17805311 != 80003071 Apr 23 23:13:27.734834 kernel: GPT:Alternate GPT header not at the end of the disk. Apr 23 23:13:27.734845 kernel: GPT:17805311 != 80003071 Apr 23 23:13:27.734854 kernel: GPT: Use GNU Parted to correct GPT errors. 
Apr 23 23:13:27.734863 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 23 23:13:27.735727 kernel: sd 0:0:0:1: [sda] Attached SCSI disk Apr 23 23:13:27.745363 kernel: sr 0:0:0:0: Power-on or device reset occurred Apr 23 23:13:27.744034 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 23 23:13:27.750496 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray Apr 23 23:13:27.750667 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Apr 23 23:13:27.750820 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Apr 23 23:13:27.750918 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Apr 23 23:13:27.750938 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Apr 23 23:13:27.752328 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Apr 23 23:13:27.752478 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Apr 23 23:13:27.752568 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0 Apr 23 23:13:27.752661 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Apr 23 23:13:27.755587 kernel: hub 1-0:1.0: USB hub found Apr 23 23:13:27.755814 kernel: hub 1-0:1.0: 4 ports detected Apr 23 23:13:27.758754 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Apr 23 23:13:27.758939 kernel: hub 2-0:1.0: USB hub found Apr 23 23:13:27.759046 kernel: hub 2-0:1.0: 4 ports detected Apr 23 23:13:27.815667 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Apr 23 23:13:27.826133 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Apr 23 23:13:27.834779 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Apr 23 23:13:27.841941 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. 
Apr 23 23:13:27.842611 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Apr 23 23:13:27.848902 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Apr 23 23:13:27.853523 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Apr 23 23:13:27.856283 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Apr 23 23:13:27.857542 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 23 23:13:27.858362 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 23 23:13:27.861548 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Apr 23 23:13:27.865909 disk-uuid[600]: Primary Header is updated. Apr 23 23:13:27.865909 disk-uuid[600]: Secondary Entries is updated. Apr 23 23:13:27.865909 disk-uuid[600]: Secondary Header is updated. Apr 23 23:13:27.876750 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 23 23:13:27.882225 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. 
Apr 23 23:13:27.889722 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 23 23:13:28.003755 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Apr 23 23:13:28.138195 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1 Apr 23 23:13:28.138289 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Apr 23 23:13:28.138560 kernel: usbcore: registered new interface driver usbhid Apr 23 23:13:28.139043 kernel: usbhid: USB HID core driver Apr 23 23:13:28.242779 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd Apr 23 23:13:28.369741 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2 Apr 23 23:13:28.422761 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0 Apr 23 23:13:28.893793 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 23 23:13:28.894623 disk-uuid[601]: The operation has completed successfully. Apr 23 23:13:28.951905 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 23 23:13:28.952767 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Apr 23 23:13:28.976848 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Apr 23 23:13:28.994748 sh[626]: Success Apr 23 23:13:29.010000 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Apr 23 23:13:29.010054 kernel: device-mapper: uevent: version 1.0.3 Apr 23 23:13:29.010066 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Apr 23 23:13:29.019791 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Apr 23 23:13:29.065992 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Apr 23 23:13:29.070438 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Apr 23 23:13:29.087631 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Apr 23 23:13:29.097931 kernel: BTRFS: device fsid 2db32ba8-c7e9-4b6a-ba75-58982c25581e devid 1 transid 32 /dev/mapper/usr (254:0) scanned by mount (638) Apr 23 23:13:29.099774 kernel: BTRFS info (device dm-0): first mount of filesystem 2db32ba8-c7e9-4b6a-ba75-58982c25581e Apr 23 23:13:29.099839 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Apr 23 23:13:29.106973 kernel: BTRFS info (device dm-0 state E): enabling ssd optimizations Apr 23 23:13:29.107031 kernel: BTRFS info (device dm-0 state E): disabling log replay at mount time Apr 23 23:13:29.107055 kernel: BTRFS info (device dm-0 state E): enabling free space tree Apr 23 23:13:29.109633 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Apr 23 23:13:29.111095 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Apr 23 23:13:29.112306 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Apr 23 23:13:29.113840 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Apr 23 23:13:29.115839 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Apr 23 23:13:29.149728 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (671) Apr 23 23:13:29.151088 kernel: BTRFS info (device sda6): first mount of filesystem a3954155-494f-4049-93fc-7ec9255747d0 Apr 23 23:13:29.151136 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Apr 23 23:13:29.157173 kernel: BTRFS info (device sda6): enabling ssd optimizations Apr 23 23:13:29.157220 kernel: BTRFS info (device sda6): turning on async discard Apr 23 23:13:29.158726 kernel: BTRFS info (device sda6): enabling free space tree Apr 23 23:13:29.164767 kernel: BTRFS info (device sda6): last unmount of filesystem a3954155-494f-4049-93fc-7ec9255747d0 Apr 23 23:13:29.167036 systemd[1]: Finished ignition-setup.service - Ignition (setup). Apr 23 23:13:29.168611 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Apr 23 23:13:29.264945 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 23 23:13:29.274829 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 23 23:13:29.312025 systemd-networkd[814]: lo: Link UP Apr 23 23:13:29.312035 systemd-networkd[814]: lo: Gained carrier Apr 23 23:13:29.313583 systemd-networkd[814]: Enumeration completed Apr 23 23:13:29.314012 systemd-networkd[814]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 23 23:13:29.314015 systemd-networkd[814]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 23 23:13:29.314573 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 23 23:13:29.314683 systemd-networkd[814]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Apr 23 23:13:29.314687 systemd-networkd[814]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 23 23:13:29.315210 systemd-networkd[814]: eth0: Link UP Apr 23 23:13:29.315401 systemd-networkd[814]: eth1: Link UP Apr 23 23:13:29.315524 systemd[1]: Reached target network.target - Network. Apr 23 23:13:29.315535 systemd-networkd[814]: eth0: Gained carrier Apr 23 23:13:29.315544 systemd-networkd[814]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 23 23:13:29.320504 ignition[716]: Ignition 2.22.0 Apr 23 23:13:29.320511 ignition[716]: Stage: fetch-offline Apr 23 23:13:29.320545 ignition[716]: no configs at "/usr/lib/ignition/base.d" Apr 23 23:13:29.320553 ignition[716]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Apr 23 23:13:29.320631 ignition[716]: parsed url from cmdline: "" Apr 23 23:13:29.320634 ignition[716]: no config URL provided Apr 23 23:13:29.320638 ignition[716]: reading system config file "/usr/lib/ignition/user.ign" Apr 23 23:13:29.320644 ignition[716]: no config at "/usr/lib/ignition/user.ign" Apr 23 23:13:29.320649 ignition[716]: failed to fetch config: resource requires networking Apr 23 23:13:29.321167 ignition[716]: Ignition finished successfully Apr 23 23:13:29.321929 systemd-networkd[814]: eth1: Gained carrier Apr 23 23:13:29.321939 systemd-networkd[814]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 23 23:13:29.323765 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Apr 23 23:13:29.327896 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Apr 23 23:13:29.354604 ignition[820]: Ignition 2.22.0 Apr 23 23:13:29.354623 ignition[820]: Stage: fetch Apr 23 23:13:29.354788 ignition[820]: no configs at "/usr/lib/ignition/base.d" Apr 23 23:13:29.354797 ignition[820]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Apr 23 23:13:29.354893 ignition[820]: parsed url from cmdline: "" Apr 23 23:13:29.354897 ignition[820]: no config URL provided Apr 23 23:13:29.354901 ignition[820]: reading system config file "/usr/lib/ignition/user.ign" Apr 23 23:13:29.354909 ignition[820]: no config at "/usr/lib/ignition/user.ign" Apr 23 23:13:29.354933 ignition[820]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1 Apr 23 23:13:29.355379 ignition[820]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable Apr 23 23:13:29.366824 systemd-networkd[814]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1 Apr 23 23:13:29.376841 systemd-networkd[814]: eth0: DHCPv4 address 138.199.150.149/32, gateway 172.31.1.1 acquired from 172.31.1.1 Apr 23 23:13:29.556528 ignition[820]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2 Apr 23 23:13:29.563446 ignition[820]: GET result: OK Apr 23 23:13:29.563581 ignition[820]: parsing config with SHA512: efc17418c9ff4cf10b0f735653395003838f25edd16f5ae3f75a2ad6d03de74137f72857211a6eecb471968b093bb667a25ca4fd143c2b85f563148406f51422 Apr 23 23:13:29.571376 unknown[820]: fetched base config from "system" Apr 23 23:13:29.572412 unknown[820]: fetched base config from "system" Apr 23 23:13:29.572423 unknown[820]: fetched user config from "hetzner" Apr 23 23:13:29.572813 ignition[820]: fetch: fetch complete Apr 23 23:13:29.572817 ignition[820]: fetch: fetch passed Apr 23 23:13:29.572875 ignition[820]: Ignition finished successfully Apr 23 23:13:29.575095 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Apr 23 23:13:29.576692 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Apr 23 23:13:29.610532 ignition[828]: Ignition 2.22.0 Apr 23 23:13:29.611245 ignition[828]: Stage: kargs Apr 23 23:13:29.611725 ignition[828]: no configs at "/usr/lib/ignition/base.d" Apr 23 23:13:29.611736 ignition[828]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Apr 23 23:13:29.613944 ignition[828]: kargs: kargs passed Apr 23 23:13:29.614002 ignition[828]: Ignition finished successfully Apr 23 23:13:29.616729 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Apr 23 23:13:29.620007 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Apr 23 23:13:29.651097 ignition[835]: Ignition 2.22.0 Apr 23 23:13:29.651115 ignition[835]: Stage: disks Apr 23 23:13:29.651287 ignition[835]: no configs at "/usr/lib/ignition/base.d" Apr 23 23:13:29.651298 ignition[835]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Apr 23 23:13:29.652176 ignition[835]: disks: disks passed Apr 23 23:13:29.652226 ignition[835]: Ignition finished successfully Apr 23 23:13:29.655875 systemd[1]: Finished ignition-disks.service - Ignition (disks). Apr 23 23:13:29.657368 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Apr 23 23:13:29.658597 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Apr 23 23:13:29.659388 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 23 23:13:29.661323 systemd[1]: Reached target sysinit.target - System Initialization. Apr 23 23:13:29.663150 systemd[1]: Reached target basic.target - Basic System. Apr 23 23:13:29.664835 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Apr 23 23:13:29.692726 systemd-fsck[844]: ROOT: clean, 15/1628000 files, 120826/1617920 blocks Apr 23 23:13:29.697685 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Apr 23 23:13:29.701070 systemd[1]: Mounting sysroot.mount - /sysroot... 
Apr 23 23:13:29.777905 kernel: EXT4-fs (sda9): mounted filesystem 753efcb9-de86-4e47-981f-2dbd4690452d r/w with ordered data mode. Quota mode: none. Apr 23 23:13:29.779248 systemd[1]: Mounted sysroot.mount - /sysroot. Apr 23 23:13:29.780883 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Apr 23 23:13:29.783576 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 23 23:13:29.785470 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Apr 23 23:13:29.793721 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Apr 23 23:13:29.794829 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Apr 23 23:13:29.794859 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Apr 23 23:13:29.800937 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Apr 23 23:13:29.805822 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Apr 23 23:13:29.810928 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (852) Apr 23 23:13:29.815727 kernel: BTRFS info (device sda6): first mount of filesystem a3954155-494f-4049-93fc-7ec9255747d0 Apr 23 23:13:29.815773 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Apr 23 23:13:29.827061 kernel: BTRFS info (device sda6): enabling ssd optimizations Apr 23 23:13:29.827131 kernel: BTRFS info (device sda6): turning on async discard Apr 23 23:13:29.829741 kernel: BTRFS info (device sda6): enabling free space tree Apr 23 23:13:29.834343 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Apr 23 23:13:29.862250 initrd-setup-root[879]: cut: /sysroot/etc/passwd: No such file or directory Apr 23 23:13:29.863983 coreos-metadata[854]: Apr 23 23:13:29.863 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1 Apr 23 23:13:29.866169 coreos-metadata[854]: Apr 23 23:13:29.866 INFO Fetch successful Apr 23 23:13:29.866169 coreos-metadata[854]: Apr 23 23:13:29.866 INFO wrote hostname ci-4459-2-4-n-08a122edc2 to /sysroot/etc/hostname Apr 23 23:13:29.869390 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Apr 23 23:13:29.872607 initrd-setup-root[887]: cut: /sysroot/etc/group: No such file or directory Apr 23 23:13:29.878478 initrd-setup-root[894]: cut: /sysroot/etc/shadow: No such file or directory Apr 23 23:13:29.886525 initrd-setup-root[901]: cut: /sysroot/etc/gshadow: No such file or directory Apr 23 23:13:29.984205 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Apr 23 23:13:29.987421 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Apr 23 23:13:29.988828 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Apr 23 23:13:30.012743 kernel: BTRFS info (device sda6): last unmount of filesystem a3954155-494f-4049-93fc-7ec9255747d0 Apr 23 23:13:30.030338 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Apr 23 23:13:30.047375 ignition[969]: INFO : Ignition 2.22.0 Apr 23 23:13:30.047375 ignition[969]: INFO : Stage: mount Apr 23 23:13:30.050651 ignition[969]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 23 23:13:30.050651 ignition[969]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Apr 23 23:13:30.050651 ignition[969]: INFO : mount: mount passed Apr 23 23:13:30.050651 ignition[969]: INFO : Ignition finished successfully Apr 23 23:13:30.051791 systemd[1]: Finished ignition-mount.service - Ignition (mount). Apr 23 23:13:30.055502 systemd[1]: Starting ignition-files.service - Ignition (files)... 
Apr 23 23:13:30.098408 systemd[1]: sysroot-oem.mount: Deactivated successfully. Apr 23 23:13:30.100832 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 23 23:13:30.122745 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (981) Apr 23 23:13:30.124364 kernel: BTRFS info (device sda6): first mount of filesystem a3954155-494f-4049-93fc-7ec9255747d0 Apr 23 23:13:30.124522 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Apr 23 23:13:30.128127 kernel: BTRFS info (device sda6): enabling ssd optimizations Apr 23 23:13:30.128196 kernel: BTRFS info (device sda6): turning on async discard Apr 23 23:13:30.128223 kernel: BTRFS info (device sda6): enabling free space tree Apr 23 23:13:30.131585 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Apr 23 23:13:30.165602 ignition[999]: INFO : Ignition 2.22.0 Apr 23 23:13:30.165602 ignition[999]: INFO : Stage: files Apr 23 23:13:30.166831 ignition[999]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 23 23:13:30.166831 ignition[999]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Apr 23 23:13:30.166831 ignition[999]: DEBUG : files: compiled without relabeling support, skipping Apr 23 23:13:30.169674 ignition[999]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Apr 23 23:13:30.169674 ignition[999]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Apr 23 23:13:30.171817 ignition[999]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Apr 23 23:13:30.172861 ignition[999]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Apr 23 23:13:30.174096 unknown[999]: wrote ssh authorized keys file for user: core Apr 23 23:13:30.174987 ignition[999]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Apr 23 23:13:30.178064 ignition[999]: INFO : files: createFilesystemsFiles: createFiles: op(3): 
[started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Apr 23 23:13:30.178064 ignition[999]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Apr 23 23:13:30.227054 ignition[999]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Apr 23 23:13:30.311866 ignition[999]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Apr 23 23:13:30.313335 ignition[999]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Apr 23 23:13:30.313335 ignition[999]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Apr 23 23:13:30.420671 ignition[999]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Apr 23 23:13:30.639896 ignition[999]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Apr 23 23:13:30.642183 ignition[999]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Apr 23 23:13:30.642183 ignition[999]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Apr 23 23:13:30.642183 ignition[999]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Apr 23 23:13:30.642183 ignition[999]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Apr 23 23:13:30.642183 ignition[999]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 23 23:13:30.642183 ignition[999]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] 
writing file "/sysroot/home/core/nfs-pod.yaml" Apr 23 23:13:30.642183 ignition[999]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 23 23:13:30.642183 ignition[999]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 23 23:13:30.642183 ignition[999]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Apr 23 23:13:30.642183 ignition[999]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Apr 23 23:13:30.642183 ignition[999]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-arm64.raw" Apr 23 23:13:30.642183 ignition[999]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-arm64.raw" Apr 23 23:13:30.642183 ignition[999]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-arm64.raw" Apr 23 23:13:30.642183 ignition[999]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.4-arm64.raw: attempt #1 Apr 23 23:13:30.701887 systemd-networkd[814]: eth1: Gained IPv6LL Apr 23 23:13:30.746453 ignition[999]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Apr 23 23:13:30.830001 systemd-networkd[814]: eth0: Gained IPv6LL Apr 23 23:13:31.320399 ignition[999]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-arm64.raw" Apr 23 23:13:31.320399 ignition[999]: INFO : files: op(c): [started] processing unit 
"prepare-helm.service" Apr 23 23:13:31.327795 ignition[999]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 23 23:13:31.327795 ignition[999]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 23 23:13:31.327795 ignition[999]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Apr 23 23:13:31.327795 ignition[999]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Apr 23 23:13:31.327795 ignition[999]: INFO : files: op(e): op(f): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Apr 23 23:13:31.327795 ignition[999]: INFO : files: op(e): op(f): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Apr 23 23:13:31.327795 ignition[999]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Apr 23 23:13:31.327795 ignition[999]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Apr 23 23:13:31.327795 ignition[999]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Apr 23 23:13:31.327795 ignition[999]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Apr 23 23:13:31.327795 ignition[999]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Apr 23 23:13:31.327795 ignition[999]: INFO : files: files passed Apr 23 23:13:31.327795 ignition[999]: INFO : Ignition finished successfully Apr 23 23:13:31.325979 systemd[1]: Finished ignition-files.service - Ignition (files). Apr 23 23:13:31.328062 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... 
Apr 23 23:13:31.332928 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Apr 23 23:13:31.350102 systemd[1]: ignition-quench.service: Deactivated successfully. Apr 23 23:13:31.350939 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Apr 23 23:13:31.356723 initrd-setup-root-after-ignition[1028]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 23 23:13:31.356723 initrd-setup-root-after-ignition[1028]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Apr 23 23:13:31.360045 initrd-setup-root-after-ignition[1032]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 23 23:13:31.362460 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 23 23:13:31.363487 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Apr 23 23:13:31.365159 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Apr 23 23:13:31.422467 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Apr 23 23:13:31.423775 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Apr 23 23:13:31.425390 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Apr 23 23:13:31.426662 systemd[1]: Reached target initrd.target - Initrd Default Target. Apr 23 23:13:31.428839 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Apr 23 23:13:31.429577 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Apr 23 23:13:31.454552 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 23 23:13:31.457174 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Apr 23 23:13:31.476391 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. 
Apr 23 23:13:31.477160 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 23 23:13:31.478948 systemd[1]: Stopped target timers.target - Timer Units. Apr 23 23:13:31.480113 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Apr 23 23:13:31.480252 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 23 23:13:31.481863 systemd[1]: Stopped target initrd.target - Initrd Default Target. Apr 23 23:13:31.482527 systemd[1]: Stopped target basic.target - Basic System. Apr 23 23:13:31.483572 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Apr 23 23:13:31.484617 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Apr 23 23:13:31.485657 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Apr 23 23:13:31.486770 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Apr 23 23:13:31.487953 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Apr 23 23:13:31.489035 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Apr 23 23:13:31.490269 systemd[1]: Stopped target sysinit.target - System Initialization. Apr 23 23:13:31.491354 systemd[1]: Stopped target local-fs.target - Local File Systems. Apr 23 23:13:31.492450 systemd[1]: Stopped target swap.target - Swaps. Apr 23 23:13:31.493381 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Apr 23 23:13:31.493498 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Apr 23 23:13:31.494768 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Apr 23 23:13:31.495418 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 23 23:13:31.496435 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Apr 23 23:13:31.496513 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Apr 23 23:13:31.497547 systemd[1]: dracut-initqueue.service: Deactivated successfully. Apr 23 23:13:31.497656 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Apr 23 23:13:31.499298 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Apr 23 23:13:31.499406 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 23 23:13:31.500585 systemd[1]: ignition-files.service: Deactivated successfully. Apr 23 23:13:31.500676 systemd[1]: Stopped ignition-files.service - Ignition (files). Apr 23 23:13:31.501796 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Apr 23 23:13:31.501886 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Apr 23 23:13:31.503690 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Apr 23 23:13:31.507933 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Apr 23 23:13:31.510251 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Apr 23 23:13:31.510392 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Apr 23 23:13:31.511569 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Apr 23 23:13:31.511653 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Apr 23 23:13:31.519644 systemd[1]: initrd-cleanup.service: Deactivated successfully. Apr 23 23:13:31.522739 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Apr 23 23:13:31.532850 systemd[1]: sysroot-boot.mount: Deactivated successfully. Apr 23 23:13:31.535587 systemd[1]: sysroot-boot.service: Deactivated successfully. Apr 23 23:13:31.535674 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. 
Apr 23 23:13:31.543858 ignition[1052]: INFO : Ignition 2.22.0 Apr 23 23:13:31.543858 ignition[1052]: INFO : Stage: umount Apr 23 23:13:31.546840 ignition[1052]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 23 23:13:31.546840 ignition[1052]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Apr 23 23:13:31.546840 ignition[1052]: INFO : umount: umount passed Apr 23 23:13:31.546840 ignition[1052]: INFO : Ignition finished successfully Apr 23 23:13:31.546523 systemd[1]: ignition-mount.service: Deactivated successfully. Apr 23 23:13:31.546740 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Apr 23 23:13:31.549511 systemd[1]: ignition-disks.service: Deactivated successfully. Apr 23 23:13:31.549659 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Apr 23 23:13:31.551768 systemd[1]: ignition-kargs.service: Deactivated successfully. Apr 23 23:13:31.551813 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Apr 23 23:13:31.552689 systemd[1]: ignition-fetch.service: Deactivated successfully. Apr 23 23:13:31.552746 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Apr 23 23:13:31.553697 systemd[1]: Stopped target network.target - Network. Apr 23 23:13:31.554648 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Apr 23 23:13:31.554694 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Apr 23 23:13:31.555715 systemd[1]: Stopped target paths.target - Path Units. Apr 23 23:13:31.556617 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Apr 23 23:13:31.557115 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 23 23:13:31.557818 systemd[1]: Stopped target slices.target - Slice Units. Apr 23 23:13:31.558803 systemd[1]: Stopped target sockets.target - Socket Units. Apr 23 23:13:31.559691 systemd[1]: iscsid.socket: Deactivated successfully. 
Apr 23 23:13:31.559765 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Apr 23 23:13:31.560645 systemd[1]: iscsiuio.socket: Deactivated successfully. Apr 23 23:13:31.560674 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 23 23:13:31.561573 systemd[1]: ignition-setup.service: Deactivated successfully. Apr 23 23:13:31.561626 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Apr 23 23:13:31.562629 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Apr 23 23:13:31.562680 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Apr 23 23:13:31.563798 systemd[1]: initrd-setup-root.service: Deactivated successfully. Apr 23 23:13:31.563842 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Apr 23 23:13:31.564792 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Apr 23 23:13:31.565787 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Apr 23 23:13:31.577058 systemd[1]: systemd-resolved.service: Deactivated successfully. Apr 23 23:13:31.577692 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Apr 23 23:13:31.582141 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Apr 23 23:13:31.582544 systemd[1]: systemd-networkd.service: Deactivated successfully. Apr 23 23:13:31.582674 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Apr 23 23:13:31.586900 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Apr 23 23:13:31.587534 systemd[1]: Stopped target network-pre.target - Preparation for Network. Apr 23 23:13:31.588422 systemd[1]: systemd-networkd.socket: Deactivated successfully. Apr 23 23:13:31.588462 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Apr 23 23:13:31.590339 systemd[1]: Stopping network-cleanup.service - Network Cleanup... 
Apr 23 23:13:31.592235 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Apr 23 23:13:31.592297 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 23 23:13:31.595947 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 23 23:13:31.596011 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 23 23:13:31.597866 systemd[1]: systemd-modules-load.service: Deactivated successfully. Apr 23 23:13:31.597913 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Apr 23 23:13:31.598536 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Apr 23 23:13:31.598575 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 23 23:13:31.600699 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 23 23:13:31.604321 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Apr 23 23:13:31.604397 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Apr 23 23:13:31.617994 systemd[1]: network-cleanup.service: Deactivated successfully. Apr 23 23:13:31.618253 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Apr 23 23:13:31.621422 systemd[1]: systemd-udevd.service: Deactivated successfully. Apr 23 23:13:31.621566 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 23 23:13:31.622957 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Apr 23 23:13:31.622992 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Apr 23 23:13:31.623943 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Apr 23 23:13:31.623968 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Apr 23 23:13:31.624892 systemd[1]: dracut-pre-udev.service: Deactivated successfully. 
Apr 23 23:13:31.624932 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Apr 23 23:13:31.626394 systemd[1]: dracut-cmdline.service: Deactivated successfully. Apr 23 23:13:31.626435 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Apr 23 23:13:31.627988 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 23 23:13:31.628035 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 23 23:13:31.630328 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Apr 23 23:13:31.631812 systemd[1]: systemd-network-generator.service: Deactivated successfully. Apr 23 23:13:31.631870 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Apr 23 23:13:31.635088 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Apr 23 23:13:31.635139 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 23 23:13:31.636613 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Apr 23 23:13:31.636652 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 23 23:13:31.638597 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Apr 23 23:13:31.638633 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Apr 23 23:13:31.640570 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 23 23:13:31.640624 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 23 23:13:31.646372 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Apr 23 23:13:31.646469 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully. 
Apr 23 23:13:31.646549 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Apr 23 23:13:31.646620 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Apr 23 23:13:31.648884 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 23 23:13:31.648976 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 23 23:13:31.650109 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 23 23:13:31.655279 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 23 23:13:31.680267 systemd[1]: Switching root.
Apr 23 23:13:31.711885 systemd-journald[244]: Journal stopped
Apr 23 23:13:32.663812 systemd-journald[244]: Received SIGTERM from PID 1 (systemd).
Apr 23 23:13:32.663883 kernel: SELinux: policy capability network_peer_controls=1
Apr 23 23:13:32.663895 kernel: SELinux: policy capability open_perms=1
Apr 23 23:13:32.663904 kernel: SELinux: policy capability extended_socket_class=1
Apr 23 23:13:32.663989 kernel: SELinux: policy capability always_check_network=0
Apr 23 23:13:32.664002 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 23 23:13:32.664016 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 23 23:13:32.664025 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 23 23:13:32.664035 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 23 23:13:32.664048 kernel: SELinux: policy capability userspace_initial_context=0
Apr 23 23:13:32.664063 kernel: audit: type=1403 audit(1776986011.891:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 23 23:13:32.664073 systemd[1]: Successfully loaded SELinux policy in 50.180ms.
Apr 23 23:13:32.664086 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 6.204ms.
Apr 23 23:13:32.664098 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Apr 23 23:13:32.664111 systemd[1]: Detected virtualization kvm.
Apr 23 23:13:32.664124 systemd[1]: Detected architecture arm64.
Apr 23 23:13:32.664133 systemd[1]: Detected first boot.
Apr 23 23:13:32.664147 systemd[1]: Hostname set to .
Apr 23 23:13:32.664156 systemd[1]: Initializing machine ID from VM UUID.
Apr 23 23:13:32.664166 zram_generator::config[1095]: No configuration found.
Apr 23 23:13:32.664178 kernel: NET: Registered PF_VSOCK protocol family
Apr 23 23:13:32.664199 systemd[1]: Populated /etc with preset unit settings.
Apr 23 23:13:32.664214 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Apr 23 23:13:32.664223 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Apr 23 23:13:32.664233 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Apr 23 23:13:32.664243 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Apr 23 23:13:32.664252 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 23 23:13:32.664262 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 23 23:13:32.664275 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 23 23:13:32.664285 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 23 23:13:32.664295 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 23 23:13:32.664304 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 23 23:13:32.664314 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 23 23:13:32.664328 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 23 23:13:32.664341 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 23 23:13:32.664351 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 23 23:13:32.664362 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 23 23:13:32.664372 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 23 23:13:32.664382 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 23 23:13:32.664392 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 23 23:13:32.664402 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Apr 23 23:13:32.664412 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 23 23:13:32.664423 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 23 23:13:32.664433 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Apr 23 23:13:32.664442 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Apr 23 23:13:32.664452 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Apr 23 23:13:32.664462 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 23 23:13:32.664471 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 23 23:13:32.664481 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 23 23:13:32.664491 systemd[1]: Reached target slices.target - Slice Units.
Apr 23 23:13:32.664500 systemd[1]: Reached target swap.target - Swaps.
Apr 23 23:13:32.664510 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 23 23:13:32.664521 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 23 23:13:32.664530 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Apr 23 23:13:32.664540 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 23 23:13:32.664551 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 23 23:13:32.664564 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 23 23:13:32.664575 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 23 23:13:32.664584 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 23 23:13:32.664594 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 23 23:13:32.664604 systemd[1]: Mounting media.mount - External Media Directory...
Apr 23 23:13:32.664615 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 23 23:13:32.664625 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 23 23:13:32.664634 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 23 23:13:32.664644 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 23 23:13:32.664655 systemd[1]: Reached target machines.target - Containers.
Apr 23 23:13:32.664664 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 23 23:13:32.664674 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 23 23:13:32.664687 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 23 23:13:32.664696 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 23 23:13:32.665771 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 23 23:13:32.665797 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 23 23:13:32.665807 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 23 23:13:32.665818 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 23 23:13:32.665828 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 23 23:13:32.665838 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 23 23:13:32.665856 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Apr 23 23:13:32.665866 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Apr 23 23:13:32.665876 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Apr 23 23:13:32.665886 systemd[1]: Stopped systemd-fsck-usr.service.
Apr 23 23:13:32.665897 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Apr 23 23:13:32.665907 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 23 23:13:32.665917 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 23 23:13:32.665927 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 23 23:13:32.665943 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 23 23:13:32.665954 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Apr 23 23:13:32.665964 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 23 23:13:32.665975 systemd[1]: verity-setup.service: Deactivated successfully.
Apr 23 23:13:32.665985 systemd[1]: Stopped verity-setup.service.
Apr 23 23:13:32.665994 kernel: fuse: init (API version 7.41)
Apr 23 23:13:32.666005 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 23 23:13:32.666015 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 23 23:13:32.666025 systemd[1]: Mounted media.mount - External Media Directory.
Apr 23 23:13:32.666034 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 23 23:13:32.666044 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 23 23:13:32.666055 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 23 23:13:32.666065 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 23 23:13:32.666076 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 23 23:13:32.666086 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 23 23:13:32.666095 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 23 23:13:32.666105 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 23 23:13:32.666114 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 23 23:13:32.666124 kernel: ACPI: bus type drm_connector registered
Apr 23 23:13:32.666133 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 23 23:13:32.666146 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 23 23:13:32.666155 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 23 23:13:32.666165 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 23 23:13:32.666175 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 23 23:13:32.666195 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 23 23:13:32.666208 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 23 23:13:32.666219 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 23 23:13:32.666260 systemd-journald[1159]: Collecting audit messages is disabled.
Apr 23 23:13:32.666284 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 23 23:13:32.666294 kernel: loop: module loaded
Apr 23 23:13:32.666303 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 23 23:13:32.666313 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 23 23:13:32.666323 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Apr 23 23:13:32.666333 systemd-journald[1159]: Journal started
Apr 23 23:13:32.666357 systemd-journald[1159]: Runtime Journal (/run/log/journal/15f9acc6d1a24afc88112ba1149debbe) is 8M, max 76.5M, 68.5M free.
Apr 23 23:13:32.391443 systemd[1]: Queued start job for default target multi-user.target.
Apr 23 23:13:32.401131 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Apr 23 23:13:32.401691 systemd[1]: systemd-journald.service: Deactivated successfully.
Apr 23 23:13:32.675794 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 23 23:13:32.675841 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 23 23:13:32.680291 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 23 23:13:32.680349 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 23 23:13:32.684257 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 23 23:13:32.691734 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 23 23:13:32.698570 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 23 23:13:32.710724 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 23 23:13:32.710798 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 23 23:13:32.715162 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 23 23:13:32.717078 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 23 23:13:32.717285 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 23 23:13:32.718433 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 23 23:13:32.718570 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 23 23:13:32.721103 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Apr 23 23:13:32.722988 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 23 23:13:32.724997 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 23 23:13:32.733966 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 23 23:13:32.756239 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 23 23:13:32.761470 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 23 23:13:32.766813 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Apr 23 23:13:32.767690 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 23 23:13:32.770031 kernel: loop0: detected capacity change from 0 to 100632
Apr 23 23:13:32.772129 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 23 23:13:32.799728 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 23 23:13:32.805507 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Apr 23 23:13:32.806020 systemd-tmpfiles[1196]: ACLs are not supported, ignoring.
Apr 23 23:13:32.806030 systemd-tmpfiles[1196]: ACLs are not supported, ignoring.
Apr 23 23:13:32.809254 systemd-journald[1159]: Time spent on flushing to /var/log/journal/15f9acc6d1a24afc88112ba1149debbe is 55.454ms for 1189 entries.
Apr 23 23:13:32.809254 systemd-journald[1159]: System Journal (/var/log/journal/15f9acc6d1a24afc88112ba1149debbe) is 8M, max 584.8M, 576.8M free.
Apr 23 23:13:32.878964 systemd-journald[1159]: Received client request to flush runtime journal.
Apr 23 23:13:32.879025 kernel: loop1: detected capacity change from 0 to 119840
Apr 23 23:13:32.818089 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 23 23:13:32.822512 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 23 23:13:32.846801 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 23 23:13:32.881353 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 23 23:13:32.888798 kernel: loop2: detected capacity change from 0 to 8
Apr 23 23:13:32.900954 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 23 23:13:32.908305 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 23 23:13:32.911380 kernel: loop3: detected capacity change from 0 to 200864
Apr 23 23:13:32.953973 kernel: loop4: detected capacity change from 0 to 100632
Apr 23 23:13:32.955678 systemd-tmpfiles[1238]: ACLs are not supported, ignoring.
Apr 23 23:13:32.956004 systemd-tmpfiles[1238]: ACLs are not supported, ignoring.
Apr 23 23:13:32.960862 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 23 23:13:32.976784 kernel: loop5: detected capacity change from 0 to 119840
Apr 23 23:13:32.999733 kernel: loop6: detected capacity change from 0 to 8
Apr 23 23:13:33.005789 kernel: loop7: detected capacity change from 0 to 200864
Apr 23 23:13:33.032832 (sd-merge)[1242]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'.
Apr 23 23:13:33.033307 (sd-merge)[1242]: Merged extensions into '/usr'.
Apr 23 23:13:33.038307 systemd[1]: Reload requested from client PID 1195 ('systemd-sysext') (unit systemd-sysext.service)...
Apr 23 23:13:33.038325 systemd[1]: Reloading...
Apr 23 23:13:33.136220 zram_generator::config[1275]: No configuration found.
Apr 23 23:13:33.225146 ldconfig[1191]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Apr 23 23:13:33.299839 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 23 23:13:33.300100 systemd[1]: Reloading finished in 261 ms.
Apr 23 23:13:33.319755 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Apr 23 23:13:33.320824 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Apr 23 23:13:33.332888 systemd[1]: Starting ensure-sysext.service...
Apr 23 23:13:33.338027 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 23 23:13:33.365487 systemd[1]: Reload requested from client PID 1308 ('systemctl') (unit ensure-sysext.service)...
Apr 23 23:13:33.365509 systemd[1]: Reloading...
Apr 23 23:13:33.386319 systemd-tmpfiles[1309]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Apr 23 23:13:33.386666 systemd-tmpfiles[1309]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Apr 23 23:13:33.387003 systemd-tmpfiles[1309]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 23 23:13:33.387323 systemd-tmpfiles[1309]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Apr 23 23:13:33.388023 systemd-tmpfiles[1309]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 23 23:13:33.388371 systemd-tmpfiles[1309]: ACLs are not supported, ignoring.
Apr 23 23:13:33.388481 systemd-tmpfiles[1309]: ACLs are not supported, ignoring.
Apr 23 23:13:33.391758 systemd-tmpfiles[1309]: Detected autofs mount point /boot during canonicalization of boot.
Apr 23 23:13:33.392129 systemd-tmpfiles[1309]: Skipping /boot
Apr 23 23:13:33.399149 systemd-tmpfiles[1309]: Detected autofs mount point /boot during canonicalization of boot.
Apr 23 23:13:33.399272 systemd-tmpfiles[1309]: Skipping /boot
Apr 23 23:13:33.431725 zram_generator::config[1335]: No configuration found.
Apr 23 23:13:33.603240 systemd[1]: Reloading finished in 237 ms.
Apr 23 23:13:33.617422 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Apr 23 23:13:33.637745 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 23 23:13:33.645356 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Apr 23 23:13:33.649943 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Apr 23 23:13:33.656420 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Apr 23 23:13:33.660082 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 23 23:13:33.663472 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 23 23:13:33.666460 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Apr 23 23:13:33.673244 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 23 23:13:33.682020 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 23 23:13:33.685131 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 23 23:13:33.692801 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 23 23:13:33.693923 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 23 23:13:33.694044 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Apr 23 23:13:33.698339 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 23 23:13:33.698507 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 23 23:13:33.698583 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Apr 23 23:13:33.701510 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Apr 23 23:13:33.709262 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 23 23:13:33.713135 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 23 23:13:33.714240 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 23 23:13:33.714404 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Apr 23 23:13:33.725329 systemd[1]: Finished ensure-sysext.service.
Apr 23 23:13:33.727546 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Apr 23 23:13:33.733407 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Apr 23 23:13:33.736567 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Apr 23 23:13:33.744568 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Apr 23 23:13:33.746653 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 23 23:13:33.747951 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 23 23:13:33.750599 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 23 23:13:33.751949 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 23 23:13:33.752996 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 23 23:13:33.763034 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 23 23:13:33.768607 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 23 23:13:33.769881 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 23 23:13:33.773079 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 23 23:13:33.773161 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 23 23:13:33.782972 augenrules[1414]: No rules
Apr 23 23:13:33.784657 systemd-udevd[1379]: Using default interface naming scheme 'v255'.
Apr 23 23:13:33.785316 systemd[1]: audit-rules.service: Deactivated successfully.
Apr 23 23:13:33.785548 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Apr 23 23:13:33.794029 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Apr 23 23:13:33.804824 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Apr 23 23:13:33.808213 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 23 23:13:33.815626 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Apr 23 23:13:33.829312 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 23 23:13:33.833769 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 23 23:13:33.968761 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Apr 23 23:13:33.968870 systemd-networkd[1430]: lo: Link UP
Apr 23 23:13:33.968873 systemd-networkd[1430]: lo: Gained carrier
Apr 23 23:13:33.970502 systemd-networkd[1430]: Enumeration completed
Apr 23 23:13:33.970598 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 23 23:13:33.973934 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Apr 23 23:13:33.976832 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Apr 23 23:13:33.990807 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Apr 23 23:13:33.992927 systemd[1]: Reached target time-set.target - System Time Set.
Apr 23 23:13:34.012740 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Apr 23 23:13:34.017658 systemd-resolved[1378]: Positive Trust Anchors:
Apr 23 23:13:34.017966 systemd-resolved[1378]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 23 23:13:34.018064 systemd-resolved[1378]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 23 23:13:34.022917 systemd-resolved[1378]: Using system hostname 'ci-4459-2-4-n-08a122edc2'.
Apr 23 23:13:34.024417 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 23 23:13:34.026219 systemd[1]: Reached target network.target - Network.
Apr 23 23:13:34.026754 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 23 23:13:34.027798 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 23 23:13:34.028868 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Apr 23 23:13:34.030058 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Apr 23 23:13:34.030887 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Apr 23 23:13:34.031521 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Apr 23 23:13:34.033871 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Apr 23 23:13:34.034571 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Apr 23 23:13:34.034604 systemd[1]: Reached target paths.target - Path Units.
Apr 23 23:13:34.035769 systemd[1]: Reached target timers.target - Timer Units.
Apr 23 23:13:34.041441 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Apr 23 23:13:34.045021 systemd[1]: Starting docker.socket - Docker Socket for the API...
Apr 23 23:13:34.048606 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Apr 23 23:13:34.050007 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Apr 23 23:13:34.051401 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Apr 23 23:13:34.055626 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Apr 23 23:13:34.058220 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Apr 23 23:13:34.060397 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Apr 23 23:13:34.065063 systemd[1]: Reached target sockets.target - Socket Units.
Apr 23 23:13:34.067815 systemd[1]: Reached target basic.target - Basic System.
Apr 23 23:13:34.068555 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Apr 23 23:13:34.068591 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Apr 23 23:13:34.070100 systemd[1]: Starting containerd.service - containerd container runtime...
Apr 23 23:13:34.072973 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Apr 23 23:13:34.075551 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Apr 23 23:13:34.077830 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 23 23:13:34.082904 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Apr 23 23:13:34.085957 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Apr 23 23:13:34.087810 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Apr 23 23:13:34.092953 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Apr 23 23:13:34.098753 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Apr 23 23:13:34.109130 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Apr 23 23:13:34.112010 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Apr 23 23:13:34.117916 systemd[1]: Starting systemd-logind.service - User Login Management...
Apr 23 23:13:34.120649 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Apr 23 23:13:34.121113 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Apr 23 23:13:34.125923 systemd[1]: Starting update-engine.service - Update Engine...
Apr 23 23:13:34.139029 jq[1477]: false
Apr 23 23:13:34.139871 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Apr 23 23:13:34.151732 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 23 23:13:34.152688 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Apr 23 23:13:34.154070 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Apr 23 23:13:34.167666 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Apr 23 23:13:34.167902 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Apr 23 23:13:34.171420 systemd[1]: motdgen.service: Deactivated successfully. Apr 23 23:13:34.173004 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Apr 23 23:13:34.186849 coreos-metadata[1474]: Apr 23 23:13:34.182 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Apr 23 23:13:34.191601 coreos-metadata[1474]: Apr 23 23:13:34.190 INFO Failed to fetch: error sending request for url (http://169.254.169.254/hetzner/v1/metadata) Apr 23 23:13:34.196755 update_engine[1485]: I20260423 23:13:34.195281 1485 main.cc:92] Flatcar Update Engine starting Apr 23 23:13:34.197002 jq[1489]: true Apr 23 23:13:34.211105 (ntainerd)[1511]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Apr 23 23:13:34.213122 tar[1500]: linux-arm64/LICENSE Apr 23 23:13:34.213122 tar[1500]: linux-arm64/helm Apr 23 23:13:34.231104 extend-filesystems[1478]: Found /dev/sda6 Apr 23 23:13:34.251350 extend-filesystems[1478]: Found /dev/sda9 Apr 23 23:13:34.251350 extend-filesystems[1478]: Checking size of /dev/sda9 Apr 23 23:13:34.257045 jq[1512]: true Apr 23 23:13:34.261955 dbus-daemon[1475]: [system] SELinux support is enabled Apr 23 23:13:34.262289 systemd[1]: Started dbus.service - D-Bus System Message Bus. Apr 23 23:13:34.266934 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Apr 23 23:13:34.268546 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Apr 23 23:13:34.269531 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). 
Apr 23 23:13:34.269544 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Apr 23 23:13:34.289375 systemd-networkd[1430]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 23 23:13:34.289389 systemd-networkd[1430]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 23 23:13:34.291129 systemd-networkd[1430]: eth0: Link UP Apr 23 23:13:34.292967 update_engine[1485]: I20260423 23:13:34.292333 1485 update_check_scheduler.cc:74] Next update check in 5m58s Apr 23 23:13:34.291307 systemd-networkd[1430]: eth0: Gained carrier Apr 23 23:13:34.291325 systemd-networkd[1430]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 23 23:13:34.291332 systemd[1]: Started update-engine.service - Update Engine. Apr 23 23:13:34.296695 systemd[1]: Started locksmithd.service - Cluster reboot manager. Apr 23 23:13:34.309375 extend-filesystems[1478]: Resized partition /dev/sda9 Apr 23 23:13:34.311272 extend-filesystems[1528]: resize2fs 1.47.3 (8-Jul-2025) Apr 23 23:13:34.327407 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Apr 23 23:13:34.429031 systemd-networkd[1430]: eth0: DHCPv4 address 138.199.150.149/32, gateway 172.31.1.1 acquired from 172.31.1.1 Apr 23 23:13:34.430725 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Apr 23 23:13:34.431147 systemd-timesyncd[1403]: Network configuration changed, trying to establish connection. 
Apr 23 23:13:34.443648 containerd[1511]: time="2026-04-23T23:13:34Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Apr 23 23:13:34.454754 containerd[1511]: time="2026-04-23T23:13:34.447061720Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Apr 23 23:13:34.454860 bash[1546]: Updated "/home/core/.ssh/authorized_keys" Apr 23 23:13:34.449076 systemd[1]: extend-filesystems.service: Deactivated successfully. Apr 23 23:13:34.455082 extend-filesystems[1528]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Apr 23 23:13:34.455082 extend-filesystems[1528]: old_desc_blocks = 1, new_desc_blocks = 5 Apr 23 23:13:34.455082 extend-filesystems[1528]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. Apr 23 23:13:34.450760 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Apr 23 23:13:34.460517 extend-filesystems[1478]: Resized filesystem in /dev/sda9 Apr 23 23:13:34.461855 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Apr 23 23:13:34.465518 systemd[1]: Starting sshkeys.service... 
Apr 23 23:13:34.473600 containerd[1511]: time="2026-04-23T23:13:34.473562080Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.96µs" Apr 23 23:13:34.473888 containerd[1511]: time="2026-04-23T23:13:34.473866400Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Apr 23 23:13:34.474352 containerd[1511]: time="2026-04-23T23:13:34.474328600Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Apr 23 23:13:34.474557 containerd[1511]: time="2026-04-23T23:13:34.474536840Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Apr 23 23:13:34.474840 containerd[1511]: time="2026-04-23T23:13:34.474820000Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Apr 23 23:13:34.474941 containerd[1511]: time="2026-04-23T23:13:34.474925560Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Apr 23 23:13:34.475144 containerd[1511]: time="2026-04-23T23:13:34.475120120Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Apr 23 23:13:34.475587 containerd[1511]: time="2026-04-23T23:13:34.475564480Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Apr 23 23:13:34.475908 containerd[1511]: time="2026-04-23T23:13:34.475884040Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Apr 23 23:13:34.477401 containerd[1511]: time="2026-04-23T23:13:34.476817120Z" level=info msg="loading plugin" 
id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Apr 23 23:13:34.477401 containerd[1511]: time="2026-04-23T23:13:34.476845720Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Apr 23 23:13:34.477401 containerd[1511]: time="2026-04-23T23:13:34.476856480Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Apr 23 23:13:34.477401 containerd[1511]: time="2026-04-23T23:13:34.476942200Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Apr 23 23:13:34.477401 containerd[1511]: time="2026-04-23T23:13:34.477131760Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Apr 23 23:13:34.477401 containerd[1511]: time="2026-04-23T23:13:34.477170720Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Apr 23 23:13:34.477401 containerd[1511]: time="2026-04-23T23:13:34.477209880Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Apr 23 23:13:34.477401 containerd[1511]: time="2026-04-23T23:13:34.477246640Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Apr 23 23:13:34.477795 containerd[1511]: time="2026-04-23T23:13:34.477775480Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Apr 23 23:13:34.477929 containerd[1511]: time="2026-04-23T23:13:34.477912080Z" level=info msg="metadata content store policy set" policy=shared Apr 23 23:13:34.484792 containerd[1511]: time="2026-04-23T23:13:34.484038640Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler 
type=io.containerd.gc.v1 Apr 23 23:13:34.484792 containerd[1511]: time="2026-04-23T23:13:34.484103800Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Apr 23 23:13:34.484792 containerd[1511]: time="2026-04-23T23:13:34.484120480Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Apr 23 23:13:34.484792 containerd[1511]: time="2026-04-23T23:13:34.484132480Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Apr 23 23:13:34.484792 containerd[1511]: time="2026-04-23T23:13:34.484144080Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Apr 23 23:13:34.484792 containerd[1511]: time="2026-04-23T23:13:34.484202320Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Apr 23 23:13:34.484792 containerd[1511]: time="2026-04-23T23:13:34.484220000Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Apr 23 23:13:34.484792 containerd[1511]: time="2026-04-23T23:13:34.484232240Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Apr 23 23:13:34.484792 containerd[1511]: time="2026-04-23T23:13:34.484243880Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Apr 23 23:13:34.484792 containerd[1511]: time="2026-04-23T23:13:34.484257160Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Apr 23 23:13:34.484792 containerd[1511]: time="2026-04-23T23:13:34.484276840Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Apr 23 23:13:34.484792 containerd[1511]: time="2026-04-23T23:13:34.484296920Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task 
type=io.containerd.runtime.v2 Apr 23 23:13:34.484792 containerd[1511]: time="2026-04-23T23:13:34.484411680Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Apr 23 23:13:34.484792 containerd[1511]: time="2026-04-23T23:13:34.484431520Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Apr 23 23:13:34.485096 containerd[1511]: time="2026-04-23T23:13:34.484446960Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Apr 23 23:13:34.485096 containerd[1511]: time="2026-04-23T23:13:34.484458160Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Apr 23 23:13:34.485096 containerd[1511]: time="2026-04-23T23:13:34.484469720Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Apr 23 23:13:34.485096 containerd[1511]: time="2026-04-23T23:13:34.484482760Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Apr 23 23:13:34.485096 containerd[1511]: time="2026-04-23T23:13:34.484496000Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Apr 23 23:13:34.485096 containerd[1511]: time="2026-04-23T23:13:34.484507200Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Apr 23 23:13:34.485096 containerd[1511]: time="2026-04-23T23:13:34.484518880Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Apr 23 23:13:34.485096 containerd[1511]: time="2026-04-23T23:13:34.484530840Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Apr 23 23:13:34.485096 containerd[1511]: time="2026-04-23T23:13:34.484542080Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Apr 23 23:13:34.488811 containerd[1511]: 
time="2026-04-23T23:13:34.486678240Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Apr 23 23:13:34.488811 containerd[1511]: time="2026-04-23T23:13:34.486729240Z" level=info msg="Start snapshots syncer" Apr 23 23:13:34.488811 containerd[1511]: time="2026-04-23T23:13:34.486762920Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Apr 23 23:13:34.488919 containerd[1511]: time="2026-04-23T23:13:34.487131480Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\
":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Apr 23 23:13:34.488919 containerd[1511]: time="2026-04-23T23:13:34.487198560Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Apr 23 23:13:34.488919 containerd[1511]: time="2026-04-23T23:13:34.488681840Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Apr 23 23:13:34.489385 containerd[1511]: time="2026-04-23T23:13:34.489139880Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Apr 23 23:13:34.489385 containerd[1511]: time="2026-04-23T23:13:34.489214440Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Apr 23 23:13:34.489385 containerd[1511]: time="2026-04-23T23:13:34.489229240Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Apr 23 23:13:34.489385 containerd[1511]: time="2026-04-23T23:13:34.489240120Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Apr 23 23:13:34.489385 containerd[1511]: time="2026-04-23T23:13:34.489251840Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Apr 23 23:13:34.489385 containerd[1511]: time="2026-04-23T23:13:34.489262320Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Apr 23 23:13:34.489385 containerd[1511]: time="2026-04-23T23:13:34.489277360Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Apr 23 23:13:34.489385 containerd[1511]: time="2026-04-23T23:13:34.489311560Z" level=info 
msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Apr 23 23:13:34.489385 containerd[1511]: time="2026-04-23T23:13:34.489326960Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Apr 23 23:13:34.489385 containerd[1511]: time="2026-04-23T23:13:34.489338280Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Apr 23 23:13:34.489629 containerd[1511]: time="2026-04-23T23:13:34.489612240Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Apr 23 23:13:34.489779 containerd[1511]: time="2026-04-23T23:13:34.489760800Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Apr 23 23:13:34.489834 containerd[1511]: time="2026-04-23T23:13:34.489821320Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Apr 23 23:13:34.489882 containerd[1511]: time="2026-04-23T23:13:34.489869040Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Apr 23 23:13:34.489942 containerd[1511]: time="2026-04-23T23:13:34.489930120Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Apr 23 23:13:34.490005 containerd[1511]: time="2026-04-23T23:13:34.489994080Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Apr 23 23:13:34.490053 containerd[1511]: time="2026-04-23T23:13:34.490042400Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Apr 23 23:13:34.490217 containerd[1511]: time="2026-04-23T23:13:34.490202440Z" level=info msg="runtime interface created" Apr 23 23:13:34.490269 containerd[1511]: 
time="2026-04-23T23:13:34.490259840Z" level=info msg="created NRI interface" Apr 23 23:13:34.490316 containerd[1511]: time="2026-04-23T23:13:34.490305600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Apr 23 23:13:34.490362 containerd[1511]: time="2026-04-23T23:13:34.490353240Z" level=info msg="Connect containerd service" Apr 23 23:13:34.491874 containerd[1511]: time="2026-04-23T23:13:34.491303280Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 23 23:13:34.492328 containerd[1511]: time="2026-04-23T23:13:34.492299080Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 23 23:13:34.498138 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Apr 23 23:13:34.501975 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Apr 23 23:13:34.516070 systemd-timesyncd[1403]: Contacted time server 193.203.3.170:123 (0.flatcar.pool.ntp.org). Apr 23 23:13:34.516139 systemd-timesyncd[1403]: Initial clock synchronization to Thu 2026-04-23 23:13:34.381089 UTC. Apr 23 23:13:34.581537 coreos-metadata[1558]: Apr 23 23:13:34.581 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Apr 23 23:13:34.585735 coreos-metadata[1558]: Apr 23 23:13:34.585 INFO Fetch successful Apr 23 23:13:34.588639 unknown[1558]: wrote ssh authorized keys file for user: core Apr 23 23:13:34.605764 systemd-logind[1483]: New seat seat0. Apr 23 23:13:34.610231 systemd[1]: Started systemd-logind.service - User Login Management. Apr 23 23:13:34.646333 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. 
Apr 23 23:13:34.654459 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Apr 23 23:13:34.657148 systemd-networkd[1430]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 23 23:13:34.657290 systemd-networkd[1430]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 23 23:13:34.660519 update-ssh-keys[1568]: Updated "/home/core/.ssh/authorized_keys" Apr 23 23:13:34.659627 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Apr 23 23:13:34.660081 systemd-networkd[1430]: eth1: Link UP Apr 23 23:13:34.660402 systemd-networkd[1430]: eth1: Gained carrier Apr 23 23:13:34.660423 systemd-networkd[1430]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 23 23:13:34.666433 systemd[1]: Finished sshkeys.service. Apr 23 23:13:34.701669 containerd[1511]: time="2026-04-23T23:13:34.700898280Z" level=info msg="Start subscribing containerd event" Apr 23 23:13:34.701669 containerd[1511]: time="2026-04-23T23:13:34.700981600Z" level=info msg="Start recovering state" Apr 23 23:13:34.701669 containerd[1511]: time="2026-04-23T23:13:34.701078120Z" level=info msg="Start event monitor" Apr 23 23:13:34.701669 containerd[1511]: time="2026-04-23T23:13:34.701093920Z" level=info msg="Start cni network conf syncer for default" Apr 23 23:13:34.701669 containerd[1511]: time="2026-04-23T23:13:34.701103720Z" level=info msg="Start streaming server" Apr 23 23:13:34.701669 containerd[1511]: time="2026-04-23T23:13:34.701130200Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Apr 23 23:13:34.701669 containerd[1511]: time="2026-04-23T23:13:34.701139520Z" level=info msg="runtime interface starting up..." 
Apr 23 23:13:34.701669 containerd[1511]: time="2026-04-23T23:13:34.701145640Z" level=info msg="starting plugins..." Apr 23 23:13:34.701669 containerd[1511]: time="2026-04-23T23:13:34.701169640Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Apr 23 23:13:34.701669 containerd[1511]: time="2026-04-23T23:13:34.701212120Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 23 23:13:34.701669 containerd[1511]: time="2026-04-23T23:13:34.701264000Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 23 23:13:34.701669 containerd[1511]: time="2026-04-23T23:13:34.701327560Z" level=info msg="containerd successfully booted in 0.267422s" Apr 23 23:13:34.701414 systemd[1]: Started containerd.service - containerd container runtime. Apr 23 23:13:34.718308 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Apr 23 23:13:34.726889 systemd-networkd[1430]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1 Apr 23 23:13:34.733205 locksmithd[1521]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Apr 23 23:13:34.771723 kernel: mousedev: PS/2 mouse device common for all mice Apr 23 23:13:34.815979 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. Apr 23 23:13:34.820028 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. 
Apr 23 23:13:34.901734 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0 Apr 23 23:13:34.901801 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Apr 23 23:13:34.901813 kernel: [drm] features: -context_init Apr 23 23:13:34.906721 kernel: [drm] number of scanouts: 1 Apr 23 23:13:34.906782 kernel: [drm] number of cap sets: 0 Apr 23 23:13:34.913734 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:01.0 on minor 0 Apr 23 23:13:34.942997 kernel: Console: switching to colour frame buffer device 160x50 Apr 23 23:13:34.998935 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Apr 23 23:13:35.043089 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 23 23:13:35.065975 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 23 23:13:35.066184 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 23 23:13:35.069679 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 23 23:13:35.087206 systemd-logind[1483]: Watching system buttons on /dev/input/event0 (Power Button) Apr 23 23:13:35.089765 systemd-logind[1483]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard) Apr 23 23:13:35.185714 coreos-metadata[1474]: Apr 23 23:13:35.185 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #2 Apr 23 23:13:35.186318 coreos-metadata[1474]: Apr 23 23:13:35.186 INFO Fetch successful Apr 23 23:13:35.186318 coreos-metadata[1474]: Apr 23 23:13:35.186 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Apr 23 23:13:35.186647 coreos-metadata[1474]: Apr 23 23:13:35.186 INFO Fetch successful Apr 23 23:13:35.223782 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 23 23:13:35.277483 tar[1500]: linux-arm64/README.md Apr 23 23:13:35.306831 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Apr 23 23:13:35.319176 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Apr 23 23:13:35.321085 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Apr 23 23:13:35.324609 sshd_keygen[1506]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 23 23:13:35.348239 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Apr 23 23:13:35.351854 systemd[1]: Starting issuegen.service - Generate /run/issue... Apr 23 23:13:35.368360 systemd[1]: issuegen.service: Deactivated successfully. Apr 23 23:13:35.368612 systemd[1]: Finished issuegen.service - Generate /run/issue. Apr 23 23:13:35.371265 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Apr 23 23:13:35.398597 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Apr 23 23:13:35.403549 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 23 23:13:35.406949 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Apr 23 23:13:35.407720 systemd[1]: Reached target getty.target - Login Prompts. Apr 23 23:13:35.501932 systemd-networkd[1430]: eth0: Gained IPv6LL Apr 23 23:13:35.507726 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Apr 23 23:13:35.509932 systemd[1]: Reached target network-online.target - Network is Online. Apr 23 23:13:35.513201 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 23 23:13:35.517963 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Apr 23 23:13:35.548590 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Apr 23 23:13:35.885874 systemd-networkd[1430]: eth1: Gained IPv6LL Apr 23 23:13:36.265563 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 23 23:13:36.267498 systemd[1]: Reached target multi-user.target - Multi-User System. 
Apr 23 23:13:36.269899 systemd[1]: Startup finished in 2.349s (kernel) + 5.285s (initrd) + 4.428s (userspace) = 12.064s. Apr 23 23:13:36.281273 (kubelet)[1660]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 23 23:13:36.734026 kubelet[1660]: E0423 23:13:36.733858 1660 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 23 23:13:36.738741 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 23 23:13:36.739040 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 23 23:13:36.739935 systemd[1]: kubelet.service: Consumed 831ms CPU time, 248.2M memory peak. Apr 23 23:13:46.989761 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 23 23:13:46.993230 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 23 23:13:47.143164 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 23 23:13:47.154572 (kubelet)[1679]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 23 23:13:47.199847 kubelet[1679]: E0423 23:13:47.199744 1679 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 23 23:13:47.204914 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 23 23:13:47.205055 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 23 23:13:47.206916 systemd[1]: kubelet.service: Consumed 165ms CPU time, 105.1M memory peak. Apr 23 23:13:57.456106 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Apr 23 23:13:57.459920 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 23 23:13:57.611763 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 23 23:13:57.626325 (kubelet)[1693]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 23 23:13:57.669311 kubelet[1693]: E0423 23:13:57.669262 1693 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 23 23:13:57.672578 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 23 23:13:57.672756 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 23 23:13:57.673937 systemd[1]: kubelet.service: Consumed 156ms CPU time, 107.2M memory peak. 
Apr 23 23:14:07.795861 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Apr 23 23:14:07.799450 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 23 23:14:07.970880 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 23 23:14:07.985316 (kubelet)[1708]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 23 23:14:08.033404 kubelet[1708]: E0423 23:14:08.033201 1708 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 23 23:14:08.037134 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 23 23:14:08.037446 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 23 23:14:08.037949 systemd[1]: kubelet.service: Consumed 166ms CPU time, 107.4M memory peak. Apr 23 23:14:11.988176 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Apr 23 23:14:11.990059 systemd[1]: Started sshd@0-138.199.150.149:22-50.85.169.122:48566.service - OpenSSH per-connection server daemon (50.85.169.122:48566). Apr 23 23:14:12.135741 sshd[1716]: Accepted publickey for core from 50.85.169.122 port 48566 ssh2: RSA SHA256:Tz0dqMPsdf8xUb4jUaTJqqr7RT+Ihh1eVJlUIJQ/qIM Apr 23 23:14:12.137795 sshd-session[1716]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 23 23:14:12.150406 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 23 23:14:12.151802 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 23 23:14:12.154794 systemd-logind[1483]: New session 1 of user core. 
Apr 23 23:14:12.182611 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 23 23:14:12.188993 systemd[1]: Starting user@500.service - User Manager for UID 500... Apr 23 23:14:12.206834 (systemd)[1721]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 23 23:14:12.210921 systemd-logind[1483]: New session c1 of user core. Apr 23 23:14:12.337626 systemd[1721]: Queued start job for default target default.target. Apr 23 23:14:12.349780 systemd[1721]: Created slice app.slice - User Application Slice. Apr 23 23:14:12.349832 systemd[1721]: Reached target paths.target - Paths. Apr 23 23:14:12.349903 systemd[1721]: Reached target timers.target - Timers. Apr 23 23:14:12.352274 systemd[1721]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 23 23:14:12.366290 systemd[1721]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 23 23:14:12.366437 systemd[1721]: Reached target sockets.target - Sockets. Apr 23 23:14:12.366501 systemd[1721]: Reached target basic.target - Basic System. Apr 23 23:14:12.366559 systemd[1721]: Reached target default.target - Main User Target. Apr 23 23:14:12.366600 systemd[1721]: Startup finished in 149ms. Apr 23 23:14:12.366928 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 23 23:14:12.380057 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 23 23:14:12.446986 systemd[1]: Started sshd@1-138.199.150.149:22-50.85.169.122:48578.service - OpenSSH per-connection server daemon (50.85.169.122:48578). Apr 23 23:14:12.580663 sshd[1732]: Accepted publickey for core from 50.85.169.122 port 48578 ssh2: RSA SHA256:Tz0dqMPsdf8xUb4jUaTJqqr7RT+Ihh1eVJlUIJQ/qIM Apr 23 23:14:12.582925 sshd-session[1732]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 23 23:14:12.589423 systemd-logind[1483]: New session 2 of user core. Apr 23 23:14:12.600148 systemd[1]: Started session-2.scope - Session 2 of User core. 
Apr 23 23:14:12.647023 sshd[1735]: Connection closed by 50.85.169.122 port 48578 Apr 23 23:14:12.648115 sshd-session[1732]: pam_unix(sshd:session): session closed for user core Apr 23 23:14:12.654787 systemd-logind[1483]: Session 2 logged out. Waiting for processes to exit. Apr 23 23:14:12.655516 systemd[1]: sshd@1-138.199.150.149:22-50.85.169.122:48578.service: Deactivated successfully. Apr 23 23:14:12.657580 systemd[1]: session-2.scope: Deactivated successfully. Apr 23 23:14:12.661334 systemd-logind[1483]: Removed session 2. Apr 23 23:14:12.683605 systemd[1]: Started sshd@2-138.199.150.149:22-50.85.169.122:48586.service - OpenSSH per-connection server daemon (50.85.169.122:48586). Apr 23 23:14:12.817770 sshd[1741]: Accepted publickey for core from 50.85.169.122 port 48586 ssh2: RSA SHA256:Tz0dqMPsdf8xUb4jUaTJqqr7RT+Ihh1eVJlUIJQ/qIM Apr 23 23:14:12.819889 sshd-session[1741]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 23 23:14:12.826667 systemd-logind[1483]: New session 3 of user core. Apr 23 23:14:12.838043 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 23 23:14:12.880021 sshd[1744]: Connection closed by 50.85.169.122 port 48586 Apr 23 23:14:12.882977 sshd-session[1741]: pam_unix(sshd:session): session closed for user core Apr 23 23:14:12.888117 systemd-logind[1483]: Session 3 logged out. Waiting for processes to exit. Apr 23 23:14:12.889188 systemd[1]: sshd@2-138.199.150.149:22-50.85.169.122:48586.service: Deactivated successfully. Apr 23 23:14:12.891457 systemd[1]: session-3.scope: Deactivated successfully. Apr 23 23:14:12.893620 systemd-logind[1483]: Removed session 3. Apr 23 23:14:12.912150 systemd[1]: Started sshd@3-138.199.150.149:22-50.85.169.122:48590.service - OpenSSH per-connection server daemon (50.85.169.122:48590). 
Apr 23 23:14:13.041803 sshd[1750]: Accepted publickey for core from 50.85.169.122 port 48590 ssh2: RSA SHA256:Tz0dqMPsdf8xUb4jUaTJqqr7RT+Ihh1eVJlUIJQ/qIM Apr 23 23:14:13.044443 sshd-session[1750]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 23 23:14:13.050675 systemd-logind[1483]: New session 4 of user core. Apr 23 23:14:13.056041 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 23 23:14:13.103736 sshd[1753]: Connection closed by 50.85.169.122 port 48590 Apr 23 23:14:13.104529 sshd-session[1750]: pam_unix(sshd:session): session closed for user core Apr 23 23:14:13.109350 systemd[1]: sshd@3-138.199.150.149:22-50.85.169.122:48590.service: Deactivated successfully. Apr 23 23:14:13.112276 systemd[1]: session-4.scope: Deactivated successfully. Apr 23 23:14:13.114747 systemd-logind[1483]: Session 4 logged out. Waiting for processes to exit. Apr 23 23:14:13.116618 systemd-logind[1483]: Removed session 4. Apr 23 23:14:13.132973 systemd[1]: Started sshd@4-138.199.150.149:22-50.85.169.122:48594.service - OpenSSH per-connection server daemon (50.85.169.122:48594). Apr 23 23:14:13.255325 sshd[1759]: Accepted publickey for core from 50.85.169.122 port 48594 ssh2: RSA SHA256:Tz0dqMPsdf8xUb4jUaTJqqr7RT+Ihh1eVJlUIJQ/qIM Apr 23 23:14:13.257194 sshd-session[1759]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 23 23:14:13.263796 systemd-logind[1483]: New session 5 of user core. Apr 23 23:14:13.268963 systemd[1]: Started session-5.scope - Session 5 of User core. 
Apr 23 23:14:13.306582 sudo[1763]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Apr 23 23:14:13.306886 sudo[1763]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 23 23:14:13.316955 sudo[1763]: pam_unix(sudo:session): session closed for user root Apr 23 23:14:13.332444 sshd[1762]: Connection closed by 50.85.169.122 port 48594 Apr 23 23:14:13.333648 sshd-session[1759]: pam_unix(sshd:session): session closed for user core Apr 23 23:14:13.339933 systemd-logind[1483]: Session 5 logged out. Waiting for processes to exit. Apr 23 23:14:13.340228 systemd[1]: sshd@4-138.199.150.149:22-50.85.169.122:48594.service: Deactivated successfully. Apr 23 23:14:13.342303 systemd[1]: session-5.scope: Deactivated successfully. Apr 23 23:14:13.344267 systemd-logind[1483]: Removed session 5. Apr 23 23:14:13.358982 systemd[1]: Started sshd@5-138.199.150.149:22-50.85.169.122:48596.service - OpenSSH per-connection server daemon (50.85.169.122:48596). Apr 23 23:14:13.491160 sshd[1769]: Accepted publickey for core from 50.85.169.122 port 48596 ssh2: RSA SHA256:Tz0dqMPsdf8xUb4jUaTJqqr7RT+Ihh1eVJlUIJQ/qIM Apr 23 23:14:13.493071 sshd-session[1769]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 23 23:14:13.500319 systemd-logind[1483]: New session 6 of user core. Apr 23 23:14:13.506079 systemd[1]: Started session-6.scope - Session 6 of User core. 
Apr 23 23:14:13.538037 sudo[1774]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Apr 23 23:14:13.538338 sudo[1774]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 23 23:14:13.543844 sudo[1774]: pam_unix(sudo:session): session closed for user root Apr 23 23:14:13.549887 sudo[1773]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Apr 23 23:14:13.550184 sudo[1773]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 23 23:14:13.565513 systemd[1]: Starting audit-rules.service - Load Audit Rules... Apr 23 23:14:13.612893 augenrules[1796]: No rules Apr 23 23:14:13.614532 systemd[1]: audit-rules.service: Deactivated successfully. Apr 23 23:14:13.615871 systemd[1]: Finished audit-rules.service - Load Audit Rules. Apr 23 23:14:13.618436 sudo[1773]: pam_unix(sudo:session): session closed for user root Apr 23 23:14:13.634331 sshd[1772]: Connection closed by 50.85.169.122 port 48596 Apr 23 23:14:13.634832 sshd-session[1769]: pam_unix(sshd:session): session closed for user core Apr 23 23:14:13.641740 systemd-logind[1483]: Session 6 logged out. Waiting for processes to exit. Apr 23 23:14:13.641950 systemd[1]: sshd@5-138.199.150.149:22-50.85.169.122:48596.service: Deactivated successfully. Apr 23 23:14:13.643807 systemd[1]: session-6.scope: Deactivated successfully. Apr 23 23:14:13.645498 systemd-logind[1483]: Removed session 6. Apr 23 23:14:13.665125 systemd[1]: Started sshd@6-138.199.150.149:22-50.85.169.122:48604.service - OpenSSH per-connection server daemon (50.85.169.122:48604). 
Apr 23 23:14:13.794294 sshd[1805]: Accepted publickey for core from 50.85.169.122 port 48604 ssh2: RSA SHA256:Tz0dqMPsdf8xUb4jUaTJqqr7RT+Ihh1eVJlUIJQ/qIM Apr 23 23:14:13.795937 sshd-session[1805]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 23 23:14:13.803334 systemd-logind[1483]: New session 7 of user core. Apr 23 23:14:13.815081 systemd[1]: Started session-7.scope - Session 7 of User core. Apr 23 23:14:13.843983 sudo[1809]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 23 23:14:13.844591 sudo[1809]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 23 23:14:14.170004 systemd[1]: Starting docker.service - Docker Application Container Engine... Apr 23 23:14:14.179382 (dockerd)[1826]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 23 23:14:14.407442 dockerd[1826]: time="2026-04-23T23:14:14.407351573Z" level=info msg="Starting up" Apr 23 23:14:14.410112 dockerd[1826]: time="2026-04-23T23:14:14.410040703Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Apr 23 23:14:14.425134 dockerd[1826]: time="2026-04-23T23:14:14.424997774Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Apr 23 23:14:14.445883 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1985637202-merged.mount: Deactivated successfully. Apr 23 23:14:14.468543 systemd[1]: var-lib-docker-metacopy\x2dcheck323311732-merged.mount: Deactivated successfully. Apr 23 23:14:14.476492 dockerd[1826]: time="2026-04-23T23:14:14.476453596Z" level=info msg="Loading containers: start." 
Apr 23 23:14:14.487728 kernel: Initializing XFRM netlink socket Apr 23 23:14:14.738764 systemd-networkd[1430]: docker0: Link UP Apr 23 23:14:14.744313 dockerd[1826]: time="2026-04-23T23:14:14.744192985Z" level=info msg="Loading containers: done." Apr 23 23:14:14.765189 dockerd[1826]: time="2026-04-23T23:14:14.764794713Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 23 23:14:14.765189 dockerd[1826]: time="2026-04-23T23:14:14.764886232Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Apr 23 23:14:14.765189 dockerd[1826]: time="2026-04-23T23:14:14.764967431Z" level=info msg="Initializing buildkit" Apr 23 23:14:14.791137 dockerd[1826]: time="2026-04-23T23:14:14.791084138Z" level=info msg="Completed buildkit initialization" Apr 23 23:14:14.801341 dockerd[1826]: time="2026-04-23T23:14:14.801297103Z" level=info msg="Daemon has completed initialization" Apr 23 23:14:14.801573 systemd[1]: Started docker.service - Docker Application Container Engine. Apr 23 23:14:14.802591 dockerd[1826]: time="2026-04-23T23:14:14.802429930Z" level=info msg="API listen on /run/docker.sock" Apr 23 23:14:15.271089 containerd[1511]: time="2026-04-23T23:14:15.270996660Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.7\"" Apr 23 23:14:15.441682 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck782801023-merged.mount: Deactivated successfully. Apr 23 23:14:15.848092 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4283134395.mount: Deactivated successfully. 
Apr 23 23:14:16.617354 containerd[1511]: time="2026-04-23T23:14:16.617297971Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 23 23:14:16.618400 containerd[1511]: time="2026-04-23T23:14:16.618347240Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.7: active requests=0, bytes read=24193866" Apr 23 23:14:16.619181 containerd[1511]: time="2026-04-23T23:14:16.619131912Z" level=info msg="ImageCreate event name:\"sha256:bf3fdee5548e267fd53c67a79d712e896d47f48203512415518d59da7f985228\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 23 23:14:16.622309 containerd[1511]: time="2026-04-23T23:14:16.622279761Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b96b8464d152a24c81d7f0435fd2198f8486970cd26a9e0e9c20826c73d1441c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 23 23:14:16.623731 containerd[1511]: time="2026-04-23T23:14:16.623360550Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.7\" with image id \"sha256:bf3fdee5548e267fd53c67a79d712e896d47f48203512415518d59da7f985228\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b96b8464d152a24c81d7f0435fd2198f8486970cd26a9e0e9c20826c73d1441c\", size \"24190367\" in 1.352283491s" Apr 23 23:14:16.623731 containerd[1511]: time="2026-04-23T23:14:16.623397189Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.7\" returns image reference \"sha256:bf3fdee5548e267fd53c67a79d712e896d47f48203512415518d59da7f985228\"" Apr 23 23:14:16.624214 containerd[1511]: time="2026-04-23T23:14:16.624176381Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.7\"" Apr 23 23:14:17.599001 containerd[1511]: time="2026-04-23T23:14:17.598069567Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.7\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 23 23:14:17.599167 containerd[1511]: time="2026-04-23T23:14:17.599147956Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.7: active requests=0, bytes read=18901464" Apr 23 23:14:17.600145 containerd[1511]: time="2026-04-23T23:14:17.600113947Z" level=info msg="ImageCreate event name:\"sha256:161b12aee2701d72b2e8a7d114f5f83122603d8c5d1d3cd7f72aa6fac5d9524c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 23 23:14:17.603198 containerd[1511]: time="2026-04-23T23:14:17.603169318Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7d759bdc4fef10a3fc1ad60ce9439d58e1a4df7ebb22751f7cc0201ce55f280b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 23 23:14:17.604125 containerd[1511]: time="2026-04-23T23:14:17.604087149Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.7\" with image id \"sha256:161b12aee2701d72b2e8a7d114f5f83122603d8c5d1d3cd7f72aa6fac5d9524c\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7d759bdc4fef10a3fc1ad60ce9439d58e1a4df7ebb22751f7cc0201ce55f280b\", size \"20408083\" in 979.863248ms" Apr 23 23:14:17.604125 containerd[1511]: time="2026-04-23T23:14:17.604123909Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.7\" returns image reference \"sha256:161b12aee2701d72b2e8a7d114f5f83122603d8c5d1d3cd7f72aa6fac5d9524c\"" Apr 23 23:14:17.604699 containerd[1511]: time="2026-04-23T23:14:17.604673183Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.7\"" Apr 23 23:14:18.046115 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Apr 23 23:14:18.049940 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 23 23:14:18.209249 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 23 23:14:18.220065 (kubelet)[2112]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 23 23:14:18.272060 kubelet[2112]: E0423 23:14:18.271969 2112 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 23 23:14:18.276905 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 23 23:14:18.277161 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 23 23:14:18.277578 systemd[1]: kubelet.service: Consumed 159ms CPU time, 106.8M memory peak. Apr 23 23:14:18.468804 containerd[1511]: time="2026-04-23T23:14:18.468125168Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 23 23:14:18.469657 containerd[1511]: time="2026-04-23T23:14:18.469631795Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.7: active requests=0, bytes read=14047965" Apr 23 23:14:18.470646 containerd[1511]: time="2026-04-23T23:14:18.470623386Z" level=info msg="ImageCreate event name:\"sha256:85bc0b83d6779f309f0f2d8724ee225e2a061dc60b1b127f8a9b8843bad36e14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 23 23:14:18.473851 containerd[1511]: time="2026-04-23T23:14:18.473389601Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:4ab32f707ff84beaac431797999707757b885196b0b9a52d29cb67f95efce7c1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 23 23:14:18.474594 containerd[1511]: time="2026-04-23T23:14:18.474466311Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.7\" with image id 
\"sha256:85bc0b83d6779f309f0f2d8724ee225e2a061dc60b1b127f8a9b8843bad36e14\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:4ab32f707ff84beaac431797999707757b885196b0b9a52d29cb67f95efce7c1\", size \"15554602\" in 869.762088ms" Apr 23 23:14:18.474594 containerd[1511]: time="2026-04-23T23:14:18.474497710Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.7\" returns image reference \"sha256:85bc0b83d6779f309f0f2d8724ee225e2a061dc60b1b127f8a9b8843bad36e14\"" Apr 23 23:14:18.477727 containerd[1511]: time="2026-04-23T23:14:18.477635002Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.7\"" Apr 23 23:14:19.210438 update_engine[1485]: I20260423 23:14:19.210333 1485 update_attempter.cc:509] Updating boot flags... Apr 23 23:14:19.367959 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1613683397.mount: Deactivated successfully. Apr 23 23:14:19.568195 containerd[1511]: time="2026-04-23T23:14:19.568134449Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 23 23:14:19.570093 containerd[1511]: time="2026-04-23T23:14:19.570028072Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.7: active requests=0, bytes read=22606312" Apr 23 23:14:19.571022 containerd[1511]: time="2026-04-23T23:14:19.570973584Z" level=info msg="ImageCreate event name:\"sha256:c63683691df94ddfb3e7b1449f68fd9df087b1bda7cdecd1e9292214f6adc745\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 23 23:14:19.574032 containerd[1511]: time="2026-04-23T23:14:19.573978198Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:062519bc0a14769e2f98c6bdff7816a17e6252de3f3c9cb102e6be33fe38d9e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 23 23:14:19.574521 containerd[1511]: time="2026-04-23T23:14:19.574365835Z" level=info msg="Pulled image 
\"registry.k8s.io/kube-proxy:v1.34.7\" with image id \"sha256:c63683691df94ddfb3e7b1449f68fd9df087b1bda7cdecd1e9292214f6adc745\", repo tag \"registry.k8s.io/kube-proxy:v1.34.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:062519bc0a14769e2f98c6bdff7816a17e6252de3f3c9cb102e6be33fe38d9e2\", size \"22605305\" in 1.096678834s" Apr 23 23:14:19.574521 containerd[1511]: time="2026-04-23T23:14:19.574395034Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.7\" returns image reference \"sha256:c63683691df94ddfb3e7b1449f68fd9df087b1bda7cdecd1e9292214f6adc745\"" Apr 23 23:14:19.574998 containerd[1511]: time="2026-04-23T23:14:19.574965550Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Apr 23 23:14:20.043853 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount42854401.mount: Deactivated successfully. Apr 23 23:14:20.831743 containerd[1511]: time="2026-04-23T23:14:20.830700195Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 23 23:14:20.833324 containerd[1511]: time="2026-04-23T23:14:20.833297694Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=20395498" Apr 23 23:14:20.834569 containerd[1511]: time="2026-04-23T23:14:20.834509404Z" level=info msg="ImageCreate event name:\"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 23 23:14:20.838315 containerd[1511]: time="2026-04-23T23:14:20.838269733Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 23 23:14:20.839830 containerd[1511]: time="2026-04-23T23:14:20.839777241Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id 
\"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"20392204\" in 1.264777012s" Apr 23 23:14:20.839830 containerd[1511]: time="2026-04-23T23:14:20.839822600Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\"" Apr 23 23:14:20.840515 containerd[1511]: time="2026-04-23T23:14:20.840477755Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Apr 23 23:14:21.279018 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3374276719.mount: Deactivated successfully. Apr 23 23:14:21.284833 containerd[1511]: time="2026-04-23T23:14:21.284756132Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 23 23:14:21.286250 containerd[1511]: time="2026-04-23T23:14:21.286192361Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=268729" Apr 23 23:14:21.287308 containerd[1511]: time="2026-04-23T23:14:21.287248872Z" level=info msg="ImageCreate event name:\"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 23 23:14:21.290583 containerd[1511]: time="2026-04-23T23:14:21.290519447Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 23 23:14:21.291158 containerd[1511]: time="2026-04-23T23:14:21.290993883Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\", repo tag 
\"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"267939\" in 450.363089ms" Apr 23 23:14:21.291158 containerd[1511]: time="2026-04-23T23:14:21.291025643Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\"" Apr 23 23:14:21.291653 containerd[1511]: time="2026-04-23T23:14:21.291622918Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\"" Apr 23 23:14:21.757986 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1750768107.mount: Deactivated successfully. Apr 23 23:14:22.444078 containerd[1511]: time="2026-04-23T23:14:22.444015942Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 23 23:14:22.445506 containerd[1511]: time="2026-04-23T23:14:22.445309533Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.5-0: active requests=0, bytes read=21139756" Apr 23 23:14:22.446495 containerd[1511]: time="2026-04-23T23:14:22.446461044Z" level=info msg="ImageCreate event name:\"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 23 23:14:22.449552 containerd[1511]: time="2026-04-23T23:14:22.449517381Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 23 23:14:22.450776 containerd[1511]: time="2026-04-23T23:14:22.450743612Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.5-0\" with image id \"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\", repo tag \"registry.k8s.io/etcd:3.6.5-0\", repo digest \"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\", size 
\"21136588\" in 1.159087774s" Apr 23 23:14:22.450972 containerd[1511]: time="2026-04-23T23:14:22.450879051Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\" returns image reference \"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\"" Apr 23 23:14:28.212441 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 23 23:14:28.213081 systemd[1]: kubelet.service: Consumed 159ms CPU time, 106.8M memory peak. Apr 23 23:14:28.215765 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 23 23:14:28.251304 systemd[1]: Reload requested from client PID 2290 ('systemctl') (unit session-7.scope)... Apr 23 23:14:28.251321 systemd[1]: Reloading... Apr 23 23:14:28.362797 zram_generator::config[2334]: No configuration found. Apr 23 23:14:28.554165 systemd[1]: Reloading finished in 302 ms. Apr 23 23:14:28.606285 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Apr 23 23:14:28.606512 systemd[1]: kubelet.service: Failed with result 'signal'. Apr 23 23:14:28.606975 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 23 23:14:28.607104 systemd[1]: kubelet.service: Consumed 102ms CPU time, 94.9M memory peak. Apr 23 23:14:28.608906 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 23 23:14:28.757217 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 23 23:14:28.769880 (kubelet)[2382]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 23 23:14:28.811044 kubelet[2382]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 23 23:14:28.811044 kubelet[2382]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 23 23:14:28.811044 kubelet[2382]: I0423 23:14:28.810836 2382 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 23 23:14:29.868474 kubelet[2382]: I0423 23:14:29.868406 2382 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Apr 23 23:14:29.868474 kubelet[2382]: I0423 23:14:29.868458 2382 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 23 23:14:29.868914 kubelet[2382]: I0423 23:14:29.868503 2382 watchdog_linux.go:95] "Systemd watchdog is not enabled" Apr 23 23:14:29.868914 kubelet[2382]: I0423 23:14:29.868516 2382 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 23 23:14:29.869107 kubelet[2382]: I0423 23:14:29.869053 2382 server.go:956] "Client rotation is on, will bootstrap in background" Apr 23 23:14:29.879739 kubelet[2382]: E0423 23:14:29.879656 2382 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://138.199.150.149:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 138.199.150.149:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 23 23:14:29.880907 kubelet[2382]: I0423 23:14:29.880799 2382 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 23 23:14:29.884912 kubelet[2382]: I0423 23:14:29.884897 2382 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Apr 23 23:14:29.887305 kubelet[2382]: I0423 23:14:29.887284 2382 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Apr 23 23:14:29.887647 kubelet[2382]: I0423 23:14:29.887614 2382 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 23 23:14:29.888498 kubelet[2382]: I0423 23:14:29.888038 2382 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459-2-4-n-08a122edc2","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 23 23:14:29.888498 kubelet[2382]: I0423 23:14:29.888203 2382 topology_manager.go:138] "Creating topology manager with none policy" Apr 23 
23:14:29.888498 kubelet[2382]: I0423 23:14:29.888211 2382 container_manager_linux.go:306] "Creating device plugin manager" Apr 23 23:14:29.888498 kubelet[2382]: I0423 23:14:29.888324 2382 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Apr 23 23:14:29.891123 kubelet[2382]: I0423 23:14:29.891083 2382 state_mem.go:36] "Initialized new in-memory state store" Apr 23 23:14:29.893820 kubelet[2382]: I0423 23:14:29.893789 2382 kubelet.go:475] "Attempting to sync node with API server" Apr 23 23:14:29.893960 kubelet[2382]: I0423 23:14:29.893944 2382 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 23 23:14:29.894520 kubelet[2382]: E0423 23:14:29.894474 2382 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://138.199.150.149:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459-2-4-n-08a122edc2&limit=500&resourceVersion=0\": dial tcp 138.199.150.149:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 23 23:14:29.895182 kubelet[2382]: I0423 23:14:29.895158 2382 kubelet.go:387] "Adding apiserver pod source" Apr 23 23:14:29.895309 kubelet[2382]: I0423 23:14:29.895292 2382 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 23 23:14:29.897024 kubelet[2382]: E0423 23:14:29.896988 2382 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://138.199.150.149:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 138.199.150.149:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 23 23:14:29.899770 kubelet[2382]: I0423 23:14:29.897836 2382 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Apr 23 23:14:29.899770 kubelet[2382]: I0423 23:14:29.898862 2382 
kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 23 23:14:29.899770 kubelet[2382]: I0423 23:14:29.898911 2382 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Apr 23 23:14:29.899770 kubelet[2382]: W0423 23:14:29.898962 2382 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Apr 23 23:14:29.902345 kubelet[2382]: I0423 23:14:29.902327 2382 server.go:1262] "Started kubelet" Apr 23 23:14:29.904834 kubelet[2382]: I0423 23:14:29.904800 2382 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 23 23:14:29.905602 kubelet[2382]: I0423 23:14:29.905578 2382 server.go:310] "Adding debug handlers to kubelet server" Apr 23 23:14:29.911239 kubelet[2382]: I0423 23:14:29.911198 2382 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 23 23:14:29.911982 kubelet[2382]: I0423 23:14:29.911924 2382 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 23 23:14:29.912099 kubelet[2382]: I0423 23:14:29.912085 2382 server_v1.go:49] "podresources" method="list" useActivePods=true Apr 23 23:14:29.912303 kubelet[2382]: I0423 23:14:29.912290 2382 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 23 23:14:29.916456 kubelet[2382]: E0423 23:14:29.914888 2382 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://138.199.150.149:6443/api/v1/namespaces/default/events\": dial tcp 138.199.150.149:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4459-2-4-n-08a122edc2.18a91f6a6b5813ef default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459-2-4-n-08a122edc2,UID:ci-4459-2-4-n-08a122edc2,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459-2-4-n-08a122edc2,},FirstTimestamp:2026-04-23 23:14:29.902300143 +0000 UTC m=+1.128371436,LastTimestamp:2026-04-23 23:14:29.902300143 +0000 UTC m=+1.128371436,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459-2-4-n-08a122edc2,}" Apr 23 23:14:29.917936 kubelet[2382]: I0423 23:14:29.917361 2382 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 23 23:14:29.918344 kubelet[2382]: I0423 23:14:29.918326 2382 volume_manager.go:313] "Starting Kubelet Volume Manager" Apr 23 23:14:29.918630 kubelet[2382]: E0423 23:14:29.918606 2382 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459-2-4-n-08a122edc2\" not found" Apr 23 23:14:29.920177 kubelet[2382]: E0423 23:14:29.920122 2382 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://138.199.150.149:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-4-n-08a122edc2?timeout=10s\": dial tcp 138.199.150.149:6443: connect: connection refused" interval="200ms" Apr 23 23:14:29.920571 kubelet[2382]: I0423 23:14:29.920536 2382 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Apr 23 23:14:29.920905 kubelet[2382]: I0423 23:14:29.920874 2382 factory.go:223] Registration of the systemd container factory successfully Apr 23 23:14:29.920992 kubelet[2382]: I0423 23:14:29.920970 2382 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 23 
23:14:29.922788 kubelet[2382]: E0423 23:14:29.922750 2382 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://138.199.150.149:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 138.199.150.149:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 23 23:14:29.923504 kubelet[2382]: I0423 23:14:29.923476 2382 factory.go:223] Registration of the containerd container factory successfully Apr 23 23:14:29.925396 kubelet[2382]: I0423 23:14:29.925260 2382 reconciler.go:29] "Reconciler: start to sync state" Apr 23 23:14:29.938001 kubelet[2382]: I0423 23:14:29.937949 2382 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Apr 23 23:14:29.939161 kubelet[2382]: I0423 23:14:29.939139 2382 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Apr 23 23:14:29.939256 kubelet[2382]: I0423 23:14:29.939244 2382 status_manager.go:244] "Starting to sync pod status with apiserver" Apr 23 23:14:29.939372 kubelet[2382]: I0423 23:14:29.939360 2382 kubelet.go:2428] "Starting kubelet main sync loop" Apr 23 23:14:29.939529 kubelet[2382]: E0423 23:14:29.939498 2382 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 23 23:14:29.945195 kubelet[2382]: E0423 23:14:29.944412 2382 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 23 23:14:29.947600 kubelet[2382]: E0423 23:14:29.947563 2382 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://138.199.150.149:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 138.199.150.149:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 23 23:14:29.950150 kubelet[2382]: I0423 23:14:29.950126 2382 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 23 23:14:29.950150 kubelet[2382]: I0423 23:14:29.950143 2382 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 23 23:14:29.950239 kubelet[2382]: I0423 23:14:29.950160 2382 state_mem.go:36] "Initialized new in-memory state store" Apr 23 23:14:29.952391 kubelet[2382]: I0423 23:14:29.952372 2382 policy_none.go:49] "None policy: Start" Apr 23 23:14:29.952391 kubelet[2382]: I0423 23:14:29.952392 2382 memory_manager.go:187] "Starting memorymanager" policy="None" Apr 23 23:14:29.952487 kubelet[2382]: I0423 23:14:29.952404 2382 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Apr 23 23:14:29.953840 kubelet[2382]: I0423 23:14:29.953821 2382 policy_none.go:47] "Start" Apr 23 23:14:29.958189 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Apr 23 23:14:29.969480 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Apr 23 23:14:29.973627 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Apr 23 23:14:29.983085 kubelet[2382]: E0423 23:14:29.983048 2382 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 23 23:14:29.983252 kubelet[2382]: I0423 23:14:29.983246 2382 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 23 23:14:29.983334 kubelet[2382]: I0423 23:14:29.983256 2382 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 23 23:14:29.984321 kubelet[2382]: I0423 23:14:29.984206 2382 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 23 23:14:29.987156 kubelet[2382]: E0423 23:14:29.987082 2382 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 23 23:14:29.987156 kubelet[2382]: E0423 23:14:29.987128 2382 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4459-2-4-n-08a122edc2\" not found" Apr 23 23:14:30.054792 systemd[1]: Created slice kubepods-burstable-poddca84f47fe669f7a22b3f62fed94d31c.slice - libcontainer container kubepods-burstable-poddca84f47fe669f7a22b3f62fed94d31c.slice. Apr 23 23:14:30.071346 kubelet[2382]: E0423 23:14:30.071069 2382 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-4-n-08a122edc2\" not found" node="ci-4459-2-4-n-08a122edc2" Apr 23 23:14:30.076948 systemd[1]: Created slice kubepods-burstable-pod62a9baf81a68cb45863cc50fb724b239.slice - libcontainer container kubepods-burstable-pod62a9baf81a68cb45863cc50fb724b239.slice. 
Apr 23 23:14:30.087839 kubelet[2382]: E0423 23:14:30.086758 2382 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-4-n-08a122edc2\" not found" node="ci-4459-2-4-n-08a122edc2" Apr 23 23:14:30.088476 kubelet[2382]: I0423 23:14:30.088448 2382 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-4-n-08a122edc2" Apr 23 23:14:30.089045 kubelet[2382]: E0423 23:14:30.089011 2382 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://138.199.150.149:6443/api/v1/nodes\": dial tcp 138.199.150.149:6443: connect: connection refused" node="ci-4459-2-4-n-08a122edc2" Apr 23 23:14:30.091393 systemd[1]: Created slice kubepods-burstable-podf1412923ac2b92ca293cf970e44a529c.slice - libcontainer container kubepods-burstable-podf1412923ac2b92ca293cf970e44a529c.slice. Apr 23 23:14:30.093875 kubelet[2382]: E0423 23:14:30.093850 2382 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-4-n-08a122edc2\" not found" node="ci-4459-2-4-n-08a122edc2" Apr 23 23:14:30.121030 kubelet[2382]: E0423 23:14:30.120880 2382 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://138.199.150.149:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-4-n-08a122edc2?timeout=10s\": dial tcp 138.199.150.149:6443: connect: connection refused" interval="400ms" Apr 23 23:14:30.126497 kubelet[2382]: I0423 23:14:30.126440 2382 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dca84f47fe669f7a22b3f62fed94d31c-k8s-certs\") pod \"kube-apiserver-ci-4459-2-4-n-08a122edc2\" (UID: \"dca84f47fe669f7a22b3f62fed94d31c\") " pod="kube-system/kube-apiserver-ci-4459-2-4-n-08a122edc2" Apr 23 23:14:30.126497 kubelet[2382]: I0423 23:14:30.126493 2382 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/62a9baf81a68cb45863cc50fb724b239-ca-certs\") pod \"kube-controller-manager-ci-4459-2-4-n-08a122edc2\" (UID: \"62a9baf81a68cb45863cc50fb724b239\") " pod="kube-system/kube-controller-manager-ci-4459-2-4-n-08a122edc2" Apr 23 23:14:30.126497 kubelet[2382]: I0423 23:14:30.126514 2382 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/62a9baf81a68cb45863cc50fb724b239-kubeconfig\") pod \"kube-controller-manager-ci-4459-2-4-n-08a122edc2\" (UID: \"62a9baf81a68cb45863cc50fb724b239\") " pod="kube-system/kube-controller-manager-ci-4459-2-4-n-08a122edc2" Apr 23 23:14:30.126784 kubelet[2382]: I0423 23:14:30.126528 2382 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/62a9baf81a68cb45863cc50fb724b239-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459-2-4-n-08a122edc2\" (UID: \"62a9baf81a68cb45863cc50fb724b239\") " pod="kube-system/kube-controller-manager-ci-4459-2-4-n-08a122edc2" Apr 23 23:14:30.126784 kubelet[2382]: I0423 23:14:30.126545 2382 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dca84f47fe669f7a22b3f62fed94d31c-ca-certs\") pod \"kube-apiserver-ci-4459-2-4-n-08a122edc2\" (UID: \"dca84f47fe669f7a22b3f62fed94d31c\") " pod="kube-system/kube-apiserver-ci-4459-2-4-n-08a122edc2" Apr 23 23:14:30.126784 kubelet[2382]: I0423 23:14:30.126569 2382 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dca84f47fe669f7a22b3f62fed94d31c-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459-2-4-n-08a122edc2\" (UID: 
\"dca84f47fe669f7a22b3f62fed94d31c\") " pod="kube-system/kube-apiserver-ci-4459-2-4-n-08a122edc2" Apr 23 23:14:30.126784 kubelet[2382]: I0423 23:14:30.126583 2382 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/62a9baf81a68cb45863cc50fb724b239-flexvolume-dir\") pod \"kube-controller-manager-ci-4459-2-4-n-08a122edc2\" (UID: \"62a9baf81a68cb45863cc50fb724b239\") " pod="kube-system/kube-controller-manager-ci-4459-2-4-n-08a122edc2" Apr 23 23:14:30.126784 kubelet[2382]: I0423 23:14:30.126598 2382 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/62a9baf81a68cb45863cc50fb724b239-k8s-certs\") pod \"kube-controller-manager-ci-4459-2-4-n-08a122edc2\" (UID: \"62a9baf81a68cb45863cc50fb724b239\") " pod="kube-system/kube-controller-manager-ci-4459-2-4-n-08a122edc2" Apr 23 23:14:30.126911 kubelet[2382]: I0423 23:14:30.126629 2382 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f1412923ac2b92ca293cf970e44a529c-kubeconfig\") pod \"kube-scheduler-ci-4459-2-4-n-08a122edc2\" (UID: \"f1412923ac2b92ca293cf970e44a529c\") " pod="kube-system/kube-scheduler-ci-4459-2-4-n-08a122edc2" Apr 23 23:14:30.292612 kubelet[2382]: I0423 23:14:30.292367 2382 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-4-n-08a122edc2" Apr 23 23:14:30.293581 kubelet[2382]: E0423 23:14:30.292942 2382 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://138.199.150.149:6443/api/v1/nodes\": dial tcp 138.199.150.149:6443: connect: connection refused" node="ci-4459-2-4-n-08a122edc2" Apr 23 23:14:30.376428 containerd[1511]: time="2026-04-23T23:14:30.375859938Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-ci-4459-2-4-n-08a122edc2,Uid:dca84f47fe669f7a22b3f62fed94d31c,Namespace:kube-system,Attempt:0,}" Apr 23 23:14:30.391412 containerd[1511]: time="2026-04-23T23:14:30.391312978Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459-2-4-n-08a122edc2,Uid:62a9baf81a68cb45863cc50fb724b239,Namespace:kube-system,Attempt:0,}" Apr 23 23:14:30.396742 containerd[1511]: time="2026-04-23T23:14:30.396474151Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459-2-4-n-08a122edc2,Uid:f1412923ac2b92ca293cf970e44a529c,Namespace:kube-system,Attempt:0,}" Apr 23 23:14:30.522784 kubelet[2382]: E0423 23:14:30.522729 2382 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://138.199.150.149:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-4-n-08a122edc2?timeout=10s\": dial tcp 138.199.150.149:6443: connect: connection refused" interval="800ms" Apr 23 23:14:30.696548 kubelet[2382]: I0423 23:14:30.695819 2382 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-4-n-08a122edc2" Apr 23 23:14:30.696548 kubelet[2382]: E0423 23:14:30.696197 2382 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://138.199.150.149:6443/api/v1/nodes\": dial tcp 138.199.150.149:6443: connect: connection refused" node="ci-4459-2-4-n-08a122edc2" Apr 23 23:14:30.893362 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3865737621.mount: Deactivated successfully. 
Apr 23 23:14:30.900257 containerd[1511]: time="2026-04-23T23:14:30.900205175Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 23 23:14:30.901939 containerd[1511]: time="2026-04-23T23:14:30.901904926Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268723" Apr 23 23:14:30.905039 containerd[1511]: time="2026-04-23T23:14:30.904977790Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 23 23:14:30.907402 containerd[1511]: time="2026-04-23T23:14:30.907292858Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 23 23:14:30.909199 containerd[1511]: time="2026-04-23T23:14:30.909174568Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 23 23:14:30.910197 containerd[1511]: time="2026-04-23T23:14:30.909933684Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Apr 23 23:14:30.910197 containerd[1511]: time="2026-04-23T23:14:30.910100444Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Apr 23 23:14:30.912761 containerd[1511]: time="2026-04-23T23:14:30.912731270Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 23 
23:14:30.914767 containerd[1511]: time="2026-04-23T23:14:30.913537106Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 534.638104ms" Apr 23 23:14:30.916149 containerd[1511]: time="2026-04-23T23:14:30.916007053Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 522.814765ms" Apr 23 23:14:30.920208 containerd[1511]: time="2026-04-23T23:14:30.920177071Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 521.998409ms" Apr 23 23:14:30.955031 containerd[1511]: time="2026-04-23T23:14:30.954469933Z" level=info msg="connecting to shim e61b4625be277cdebf632c9e1ef3920c64874ee339c418f527501d969e3ceefa" address="unix:///run/containerd/s/e1958d69945ef5eaf3b85e0412d65ee827874803d1294ba8218a800617ad536f" namespace=k8s.io protocol=ttrpc version=3 Apr 23 23:14:30.961744 containerd[1511]: time="2026-04-23T23:14:30.961606616Z" level=info msg="connecting to shim ef74ba565af5bb5861e805b20646ef99dcf0648a58dd0d1feba189528e5c4e6c" address="unix:///run/containerd/s/9912e71f37889ab32ec3b0c5c575963e525bed013e64a3aaaa64699cbbeb0e6c" namespace=k8s.io protocol=ttrpc version=3 Apr 23 23:14:30.965794 kubelet[2382]: E0423 23:14:30.965304 2382 reflector.go:205] "Failed to watch" err="failed to 
list *v1.CSIDriver: Get \"https://138.199.150.149:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 138.199.150.149:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 23 23:14:30.970491 containerd[1511]: time="2026-04-23T23:14:30.970423730Z" level=info msg="connecting to shim e4dd57737d6abd81426f90d819a3a61157faf5ff2a18b608c1767833a56f3ee6" address="unix:///run/containerd/s/bcc06808f9850123a58b7809ab636e32f41459bd8c2aaeb01fba7af034c6725b" namespace=k8s.io protocol=ttrpc version=3 Apr 23 23:14:30.989884 systemd[1]: Started cri-containerd-e61b4625be277cdebf632c9e1ef3920c64874ee339c418f527501d969e3ceefa.scope - libcontainer container e61b4625be277cdebf632c9e1ef3920c64874ee339c418f527501d969e3ceefa. Apr 23 23:14:31.000919 systemd[1]: Started cri-containerd-ef74ba565af5bb5861e805b20646ef99dcf0648a58dd0d1feba189528e5c4e6c.scope - libcontainer container ef74ba565af5bb5861e805b20646ef99dcf0648a58dd0d1feba189528e5c4e6c. Apr 23 23:14:31.011432 systemd[1]: Started cri-containerd-e4dd57737d6abd81426f90d819a3a61157faf5ff2a18b608c1767833a56f3ee6.scope - libcontainer container e4dd57737d6abd81426f90d819a3a61157faf5ff2a18b608c1767833a56f3ee6. 
Apr 23 23:14:31.020060 kubelet[2382]: E0423 23:14:31.019831 2382 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://138.199.150.149:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 138.199.150.149:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 23 23:14:31.064073 containerd[1511]: time="2026-04-23T23:14:31.063965818Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459-2-4-n-08a122edc2,Uid:dca84f47fe669f7a22b3f62fed94d31c,Namespace:kube-system,Attempt:0,} returns sandbox id \"e61b4625be277cdebf632c9e1ef3920c64874ee339c418f527501d969e3ceefa\"" Apr 23 23:14:31.073984 containerd[1511]: time="2026-04-23T23:14:31.073226292Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459-2-4-n-08a122edc2,Uid:62a9baf81a68cb45863cc50fb724b239,Namespace:kube-system,Attempt:0,} returns sandbox id \"ef74ba565af5bb5861e805b20646ef99dcf0648a58dd0d1feba189528e5c4e6c\"" Apr 23 23:14:31.077185 containerd[1511]: time="2026-04-23T23:14:31.077148032Z" level=info msg="CreateContainer within sandbox \"e61b4625be277cdebf632c9e1ef3920c64874ee339c418f527501d969e3ceefa\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 23 23:14:31.084746 containerd[1511]: time="2026-04-23T23:14:31.084678795Z" level=info msg="CreateContainer within sandbox \"ef74ba565af5bb5861e805b20646ef99dcf0648a58dd0d1feba189528e5c4e6c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 23 23:14:31.088206 containerd[1511]: time="2026-04-23T23:14:31.088163897Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459-2-4-n-08a122edc2,Uid:f1412923ac2b92ca293cf970e44a529c,Namespace:kube-system,Attempt:0,} returns sandbox id \"e4dd57737d6abd81426f90d819a3a61157faf5ff2a18b608c1767833a56f3ee6\"" Apr 23 23:14:31.093087 containerd[1511]: 
time="2026-04-23T23:14:31.092711355Z" level=info msg="CreateContainer within sandbox \"e4dd57737d6abd81426f90d819a3a61157faf5ff2a18b608c1767833a56f3ee6\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 23 23:14:31.103144 containerd[1511]: time="2026-04-23T23:14:31.103076503Z" level=info msg="Container c1cb7eb991dca483b5006d3d26e201a35d540874040925ab3732aabb4bfdc07d: CDI devices from CRI Config.CDIDevices: []" Apr 23 23:14:31.106162 containerd[1511]: time="2026-04-23T23:14:31.106043848Z" level=info msg="Container 504c9d83d66486ac3250690822ecfddf1c1905d43e1e7c3b8ca7c39bb60c6124: CDI devices from CRI Config.CDIDevices: []" Apr 23 23:14:31.106611 containerd[1511]: time="2026-04-23T23:14:31.106586925Z" level=info msg="Container 3e1a649323326d538de1f7158ff1ac7a0d208065b1b5742dacbc60b00d931659: CDI devices from CRI Config.CDIDevices: []" Apr 23 23:14:31.119158 containerd[1511]: time="2026-04-23T23:14:31.119109583Z" level=info msg="CreateContainer within sandbox \"ef74ba565af5bb5861e805b20646ef99dcf0648a58dd0d1feba189528e5c4e6c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"504c9d83d66486ac3250690822ecfddf1c1905d43e1e7c3b8ca7c39bb60c6124\"" Apr 23 23:14:31.120232 containerd[1511]: time="2026-04-23T23:14:31.120204498Z" level=info msg="StartContainer for \"504c9d83d66486ac3250690822ecfddf1c1905d43e1e7c3b8ca7c39bb60c6124\"" Apr 23 23:14:31.120412 containerd[1511]: time="2026-04-23T23:14:31.120375377Z" level=info msg="CreateContainer within sandbox \"e61b4625be277cdebf632c9e1ef3920c64874ee339c418f527501d969e3ceefa\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"c1cb7eb991dca483b5006d3d26e201a35d540874040925ab3732aabb4bfdc07d\"" Apr 23 23:14:31.121328 containerd[1511]: time="2026-04-23T23:14:31.121295372Z" level=info msg="connecting to shim 504c9d83d66486ac3250690822ecfddf1c1905d43e1e7c3b8ca7c39bb60c6124" 
address="unix:///run/containerd/s/9912e71f37889ab32ec3b0c5c575963e525bed013e64a3aaaa64699cbbeb0e6c" protocol=ttrpc version=3 Apr 23 23:14:31.121591 containerd[1511]: time="2026-04-23T23:14:31.121554251Z" level=info msg="CreateContainer within sandbox \"e4dd57737d6abd81426f90d819a3a61157faf5ff2a18b608c1767833a56f3ee6\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"3e1a649323326d538de1f7158ff1ac7a0d208065b1b5742dacbc60b00d931659\"" Apr 23 23:14:31.122202 containerd[1511]: time="2026-04-23T23:14:31.122174088Z" level=info msg="StartContainer for \"3e1a649323326d538de1f7158ff1ac7a0d208065b1b5742dacbc60b00d931659\"" Apr 23 23:14:31.122358 containerd[1511]: time="2026-04-23T23:14:31.122333927Z" level=info msg="StartContainer for \"c1cb7eb991dca483b5006d3d26e201a35d540874040925ab3732aabb4bfdc07d\"" Apr 23 23:14:31.123932 containerd[1511]: time="2026-04-23T23:14:31.123894439Z" level=info msg="connecting to shim 3e1a649323326d538de1f7158ff1ac7a0d208065b1b5742dacbc60b00d931659" address="unix:///run/containerd/s/bcc06808f9850123a58b7809ab636e32f41459bd8c2aaeb01fba7af034c6725b" protocol=ttrpc version=3 Apr 23 23:14:31.124415 containerd[1511]: time="2026-04-23T23:14:31.124372877Z" level=info msg="connecting to shim c1cb7eb991dca483b5006d3d26e201a35d540874040925ab3732aabb4bfdc07d" address="unix:///run/containerd/s/e1958d69945ef5eaf3b85e0412d65ee827874803d1294ba8218a800617ad536f" protocol=ttrpc version=3 Apr 23 23:14:31.145926 systemd[1]: Started cri-containerd-3e1a649323326d538de1f7158ff1ac7a0d208065b1b5742dacbc60b00d931659.scope - libcontainer container 3e1a649323326d538de1f7158ff1ac7a0d208065b1b5742dacbc60b00d931659. Apr 23 23:14:31.149680 systemd[1]: Started cri-containerd-504c9d83d66486ac3250690822ecfddf1c1905d43e1e7c3b8ca7c39bb60c6124.scope - libcontainer container 504c9d83d66486ac3250690822ecfddf1c1905d43e1e7c3b8ca7c39bb60c6124. 
Apr 23 23:14:31.155223 kubelet[2382]: E0423 23:14:31.155175 2382 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://138.199.150.149:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 138.199.150.149:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 23 23:14:31.163956 systemd[1]: Started cri-containerd-c1cb7eb991dca483b5006d3d26e201a35d540874040925ab3732aabb4bfdc07d.scope - libcontainer container c1cb7eb991dca483b5006d3d26e201a35d540874040925ab3732aabb4bfdc07d. Apr 23 23:14:31.211939 kubelet[2382]: E0423 23:14:31.211031 2382 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://138.199.150.149:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459-2-4-n-08a122edc2&limit=500&resourceVersion=0\": dial tcp 138.199.150.149:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 23 23:14:31.242297 containerd[1511]: time="2026-04-23T23:14:31.242202810Z" level=info msg="StartContainer for \"3e1a649323326d538de1f7158ff1ac7a0d208065b1b5742dacbc60b00d931659\" returns successfully" Apr 23 23:14:31.242516 containerd[1511]: time="2026-04-23T23:14:31.242393729Z" level=info msg="StartContainer for \"c1cb7eb991dca483b5006d3d26e201a35d540874040925ab3732aabb4bfdc07d\" returns successfully" Apr 23 23:14:31.244838 containerd[1511]: time="2026-04-23T23:14:31.244777757Z" level=info msg="StartContainer for \"504c9d83d66486ac3250690822ecfddf1c1905d43e1e7c3b8ca7c39bb60c6124\" returns successfully" Apr 23 23:14:31.323618 kubelet[2382]: E0423 23:14:31.323578 2382 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://138.199.150.149:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-4-n-08a122edc2?timeout=10s\": dial tcp 138.199.150.149:6443: connect: connection refused" 
interval="1.6s" Apr 23 23:14:31.499546 kubelet[2382]: I0423 23:14:31.499133 2382 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-4-n-08a122edc2" Apr 23 23:14:31.958741 kubelet[2382]: E0423 23:14:31.958508 2382 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-4-n-08a122edc2\" not found" node="ci-4459-2-4-n-08a122edc2" Apr 23 23:14:31.963646 kubelet[2382]: E0423 23:14:31.963597 2382 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-4-n-08a122edc2\" not found" node="ci-4459-2-4-n-08a122edc2" Apr 23 23:14:31.967727 kubelet[2382]: E0423 23:14:31.966330 2382 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-4-n-08a122edc2\" not found" node="ci-4459-2-4-n-08a122edc2" Apr 23 23:14:32.969282 kubelet[2382]: E0423 23:14:32.969249 2382 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-4-n-08a122edc2\" not found" node="ci-4459-2-4-n-08a122edc2" Apr 23 23:14:32.970840 kubelet[2382]: E0423 23:14:32.970814 2382 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-4-n-08a122edc2\" not found" node="ci-4459-2-4-n-08a122edc2" Apr 23 23:14:33.189763 kubelet[2382]: E0423 23:14:33.189721 2382 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4459-2-4-n-08a122edc2\" not found" node="ci-4459-2-4-n-08a122edc2" Apr 23 23:14:33.276744 kubelet[2382]: I0423 23:14:33.275905 2382 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459-2-4-n-08a122edc2" Apr 23 23:14:33.276744 kubelet[2382]: E0423 23:14:33.275950 2382 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"ci-4459-2-4-n-08a122edc2\": node 
\"ci-4459-2-4-n-08a122edc2\" not found" Apr 23 23:14:33.397655 kubelet[2382]: E0423 23:14:33.397610 2382 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459-2-4-n-08a122edc2\" not found" Apr 23 23:14:33.498648 kubelet[2382]: E0423 23:14:33.498585 2382 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459-2-4-n-08a122edc2\" not found" Apr 23 23:14:33.599346 kubelet[2382]: E0423 23:14:33.599268 2382 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459-2-4-n-08a122edc2\" not found" Apr 23 23:14:33.700303 kubelet[2382]: E0423 23:14:33.700229 2382 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459-2-4-n-08a122edc2\" not found" Apr 23 23:14:33.801277 kubelet[2382]: E0423 23:14:33.801099 2382 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459-2-4-n-08a122edc2\" not found" Apr 23 23:14:33.902153 kubelet[2382]: E0423 23:14:33.901988 2382 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459-2-4-n-08a122edc2\" not found" Apr 23 23:14:33.973137 kubelet[2382]: E0423 23:14:33.972901 2382 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-4-n-08a122edc2\" not found" node="ci-4459-2-4-n-08a122edc2" Apr 23 23:14:33.973137 kubelet[2382]: E0423 23:14:33.973037 2382 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-4-n-08a122edc2\" not found" node="ci-4459-2-4-n-08a122edc2" Apr 23 23:14:34.002661 kubelet[2382]: E0423 23:14:34.002602 2382 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459-2-4-n-08a122edc2\" not found" Apr 23 23:14:34.120104 kubelet[2382]: I0423 23:14:34.120050 2382 kubelet.go:3220] "Creating a mirror pod for static pod" 
pod="kube-system/kube-apiserver-ci-4459-2-4-n-08a122edc2" Apr 23 23:14:34.132629 kubelet[2382]: I0423 23:14:34.132595 2382 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459-2-4-n-08a122edc2" Apr 23 23:14:34.138326 kubelet[2382]: I0423 23:14:34.138294 2382 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459-2-4-n-08a122edc2" Apr 23 23:14:34.898855 kubelet[2382]: I0423 23:14:34.898789 2382 apiserver.go:52] "Watching apiserver" Apr 23 23:14:34.921275 kubelet[2382]: I0423 23:14:34.921200 2382 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 23 23:14:35.227334 systemd[1]: Reload requested from client PID 2668 ('systemctl') (unit session-7.scope)... Apr 23 23:14:35.227355 systemd[1]: Reloading... Apr 23 23:14:35.320738 zram_generator::config[2712]: No configuration found. Apr 23 23:14:35.527036 systemd[1]: Reloading finished in 299 ms. Apr 23 23:14:35.556230 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 23 23:14:35.568237 systemd[1]: kubelet.service: Deactivated successfully. Apr 23 23:14:35.568681 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 23 23:14:35.568795 systemd[1]: kubelet.service: Consumed 1.531s CPU time, 121.5M memory peak. Apr 23 23:14:35.571826 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 23 23:14:35.736769 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 23 23:14:35.747078 (kubelet)[2756]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 23 23:14:35.796305 kubelet[2756]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Apr 23 23:14:35.796305 kubelet[2756]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 23 23:14:35.796631 kubelet[2756]: I0423 23:14:35.796391 2756 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 23 23:14:35.807064 kubelet[2756]: I0423 23:14:35.807003 2756 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Apr 23 23:14:35.807064 kubelet[2756]: I0423 23:14:35.807032 2756 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 23 23:14:35.807208 kubelet[2756]: I0423 23:14:35.807099 2756 watchdog_linux.go:95] "Systemd watchdog is not enabled" Apr 23 23:14:35.807208 kubelet[2756]: I0423 23:14:35.807108 2756 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 23 23:14:35.807354 kubelet[2756]: I0423 23:14:35.807336 2756 server.go:956] "Client rotation is on, will bootstrap in background" Apr 23 23:14:35.808670 kubelet[2756]: I0423 23:14:35.808641 2756 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 23 23:14:35.812154 kubelet[2756]: I0423 23:14:35.811335 2756 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 23 23:14:35.819922 kubelet[2756]: I0423 23:14:35.819849 2756 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Apr 23 23:14:35.824242 kubelet[2756]: I0423 23:14:35.824208 2756 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Apr 23 23:14:35.824652 kubelet[2756]: I0423 23:14:35.824525 2756 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 23 23:14:35.824915 kubelet[2756]: I0423 23:14:35.824624 2756 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459-2-4-n-08a122edc2","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 23 23:14:35.824915 kubelet[2756]: I0423 23:14:35.824908 2756 topology_manager.go:138] "Creating topology manager with none policy" Apr 23 
23:14:35.825024 kubelet[2756]: I0423 23:14:35.824923 2756 container_manager_linux.go:306] "Creating device plugin manager" Apr 23 23:14:35.825024 kubelet[2756]: I0423 23:14:35.824956 2756 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Apr 23 23:14:35.825219 kubelet[2756]: I0423 23:14:35.825204 2756 state_mem.go:36] "Initialized new in-memory state store" Apr 23 23:14:35.825457 kubelet[2756]: I0423 23:14:35.825422 2756 kubelet.go:475] "Attempting to sync node with API server" Apr 23 23:14:35.825457 kubelet[2756]: I0423 23:14:35.825448 2756 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 23 23:14:35.826102 kubelet[2756]: I0423 23:14:35.826075 2756 kubelet.go:387] "Adding apiserver pod source" Apr 23 23:14:35.826155 kubelet[2756]: I0423 23:14:35.826138 2756 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 23 23:14:35.828744 kubelet[2756]: I0423 23:14:35.828718 2756 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Apr 23 23:14:35.829368 kubelet[2756]: I0423 23:14:35.829337 2756 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 23 23:14:35.829368 kubelet[2756]: I0423 23:14:35.829371 2756 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Apr 23 23:14:35.837910 kubelet[2756]: I0423 23:14:35.837046 2756 server.go:1262] "Started kubelet" Apr 23 23:14:35.841505 kubelet[2756]: I0423 23:14:35.840655 2756 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 23 23:14:35.848723 kubelet[2756]: I0423 23:14:35.848015 2756 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 23 23:14:35.850177 kubelet[2756]: I0423 23:14:35.850154 2756 volume_manager.go:313] "Starting Kubelet 
Volume Manager" Apr 23 23:14:35.850740 kubelet[2756]: I0423 23:14:35.850247 2756 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 23 23:14:35.850740 kubelet[2756]: I0423 23:14:35.850304 2756 server_v1.go:49] "podresources" method="list" useActivePods=true Apr 23 23:14:35.850740 kubelet[2756]: I0423 23:14:35.850498 2756 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 23 23:14:35.850997 kubelet[2756]: I0423 23:14:35.850982 2756 server.go:310] "Adding debug handlers to kubelet server" Apr 23 23:14:35.852877 kubelet[2756]: E0423 23:14:35.852851 2756 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459-2-4-n-08a122edc2\" not found" Apr 23 23:14:35.854714 kubelet[2756]: I0423 23:14:35.853756 2756 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 23 23:14:35.856447 kubelet[2756]: I0423 23:14:35.856422 2756 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Apr 23 23:14:35.856603 kubelet[2756]: I0423 23:14:35.856578 2756 reconciler.go:29] "Reconciler: start to sync state" Apr 23 23:14:35.872085 kubelet[2756]: I0423 23:14:35.872049 2756 factory.go:223] Registration of the systemd container factory successfully Apr 23 23:14:35.872296 kubelet[2756]: I0423 23:14:35.872265 2756 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 23 23:14:35.878370 kubelet[2756]: I0423 23:14:35.878240 2756 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Apr 23 23:14:35.880933 kubelet[2756]: I0423 23:14:35.880893 2756 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Apr 23 23:14:35.881097 kubelet[2756]: I0423 23:14:35.881087 2756 status_manager.go:244] "Starting to sync pod status with apiserver" Apr 23 23:14:35.881247 kubelet[2756]: I0423 23:14:35.881151 2756 kubelet.go:2428] "Starting kubelet main sync loop" Apr 23 23:14:35.881247 kubelet[2756]: E0423 23:14:35.881212 2756 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 23 23:14:35.886116 kubelet[2756]: I0423 23:14:35.886050 2756 factory.go:223] Registration of the containerd container factory successfully Apr 23 23:14:35.909939 kubelet[2756]: E0423 23:14:35.909244 2756 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 23 23:14:35.943281 kubelet[2756]: I0423 23:14:35.943209 2756 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 23 23:14:35.943281 kubelet[2756]: I0423 23:14:35.943234 2756 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 23 23:14:35.943281 kubelet[2756]: I0423 23:14:35.943259 2756 state_mem.go:36] "Initialized new in-memory state store" Apr 23 23:14:35.943725 kubelet[2756]: I0423 23:14:35.943382 2756 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 23 23:14:35.943725 kubelet[2756]: I0423 23:14:35.943391 2756 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 23 23:14:35.943725 kubelet[2756]: I0423 23:14:35.943406 2756 policy_none.go:49] "None policy: Start" Apr 23 23:14:35.943725 kubelet[2756]: I0423 23:14:35.943414 2756 memory_manager.go:187] "Starting memorymanager" policy="None" Apr 23 23:14:35.943725 kubelet[2756]: I0423 23:14:35.943421 2756 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Apr 23 23:14:35.943725 kubelet[2756]: I0423 23:14:35.943510 2756 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state 
checkpoint" Apr 23 23:14:35.943725 kubelet[2756]: I0423 23:14:35.943517 2756 policy_none.go:47] "Start" Apr 23 23:14:35.958509 kubelet[2756]: E0423 23:14:35.958178 2756 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 23 23:14:35.961600 kubelet[2756]: I0423 23:14:35.961300 2756 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 23 23:14:35.961600 kubelet[2756]: I0423 23:14:35.961440 2756 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 23 23:14:35.962304 kubelet[2756]: I0423 23:14:35.962122 2756 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 23 23:14:35.963529 kubelet[2756]: E0423 23:14:35.963267 2756 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 23 23:14:35.982830 kubelet[2756]: I0423 23:14:35.982589 2756 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459-2-4-n-08a122edc2" Apr 23 23:14:35.983243 kubelet[2756]: I0423 23:14:35.982651 2756 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459-2-4-n-08a122edc2" Apr 23 23:14:35.983379 kubelet[2756]: I0423 23:14:35.982865 2756 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459-2-4-n-08a122edc2" Apr 23 23:14:35.991111 kubelet[2756]: E0423 23:14:35.991079 2756 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459-2-4-n-08a122edc2\" already exists" pod="kube-system/kube-apiserver-ci-4459-2-4-n-08a122edc2" Apr 23 23:14:35.992027 kubelet[2756]: E0423 23:14:35.992000 2756 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459-2-4-n-08a122edc2\" already exists" pod="kube-system/kube-controller-manager-ci-4459-2-4-n-08a122edc2" Apr 23 
23:14:35.992452 kubelet[2756]: E0423 23:14:35.992426 2756 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459-2-4-n-08a122edc2\" already exists" pod="kube-system/kube-scheduler-ci-4459-2-4-n-08a122edc2" Apr 23 23:14:36.073803 kubelet[2756]: I0423 23:14:36.073586 2756 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-4-n-08a122edc2" Apr 23 23:14:36.086783 kubelet[2756]: I0423 23:14:36.086746 2756 kubelet_node_status.go:124] "Node was previously registered" node="ci-4459-2-4-n-08a122edc2" Apr 23 23:14:36.087247 kubelet[2756]: I0423 23:14:36.087013 2756 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459-2-4-n-08a122edc2" Apr 23 23:14:36.157555 kubelet[2756]: I0423 23:14:36.157137 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dca84f47fe669f7a22b3f62fed94d31c-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459-2-4-n-08a122edc2\" (UID: \"dca84f47fe669f7a22b3f62fed94d31c\") " pod="kube-system/kube-apiserver-ci-4459-2-4-n-08a122edc2" Apr 23 23:14:36.157555 kubelet[2756]: I0423 23:14:36.157195 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/62a9baf81a68cb45863cc50fb724b239-ca-certs\") pod \"kube-controller-manager-ci-4459-2-4-n-08a122edc2\" (UID: \"62a9baf81a68cb45863cc50fb724b239\") " pod="kube-system/kube-controller-manager-ci-4459-2-4-n-08a122edc2" Apr 23 23:14:36.157555 kubelet[2756]: I0423 23:14:36.157223 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/62a9baf81a68cb45863cc50fb724b239-flexvolume-dir\") pod \"kube-controller-manager-ci-4459-2-4-n-08a122edc2\" (UID: \"62a9baf81a68cb45863cc50fb724b239\") " 
pod="kube-system/kube-controller-manager-ci-4459-2-4-n-08a122edc2" Apr 23 23:14:36.157555 kubelet[2756]: I0423 23:14:36.157251 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/62a9baf81a68cb45863cc50fb724b239-k8s-certs\") pod \"kube-controller-manager-ci-4459-2-4-n-08a122edc2\" (UID: \"62a9baf81a68cb45863cc50fb724b239\") " pod="kube-system/kube-controller-manager-ci-4459-2-4-n-08a122edc2" Apr 23 23:14:36.157555 kubelet[2756]: I0423 23:14:36.157282 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/62a9baf81a68cb45863cc50fb724b239-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459-2-4-n-08a122edc2\" (UID: \"62a9baf81a68cb45863cc50fb724b239\") " pod="kube-system/kube-controller-manager-ci-4459-2-4-n-08a122edc2" Apr 23 23:14:36.157921 kubelet[2756]: I0423 23:14:36.157309 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f1412923ac2b92ca293cf970e44a529c-kubeconfig\") pod \"kube-scheduler-ci-4459-2-4-n-08a122edc2\" (UID: \"f1412923ac2b92ca293cf970e44a529c\") " pod="kube-system/kube-scheduler-ci-4459-2-4-n-08a122edc2" Apr 23 23:14:36.157921 kubelet[2756]: I0423 23:14:36.157332 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dca84f47fe669f7a22b3f62fed94d31c-ca-certs\") pod \"kube-apiserver-ci-4459-2-4-n-08a122edc2\" (UID: \"dca84f47fe669f7a22b3f62fed94d31c\") " pod="kube-system/kube-apiserver-ci-4459-2-4-n-08a122edc2" Apr 23 23:14:36.157921 kubelet[2756]: I0423 23:14:36.157378 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/dca84f47fe669f7a22b3f62fed94d31c-k8s-certs\") pod \"kube-apiserver-ci-4459-2-4-n-08a122edc2\" (UID: \"dca84f47fe669f7a22b3f62fed94d31c\") " pod="kube-system/kube-apiserver-ci-4459-2-4-n-08a122edc2" Apr 23 23:14:36.157921 kubelet[2756]: I0423 23:14:36.157409 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/62a9baf81a68cb45863cc50fb724b239-kubeconfig\") pod \"kube-controller-manager-ci-4459-2-4-n-08a122edc2\" (UID: \"62a9baf81a68cb45863cc50fb724b239\") " pod="kube-system/kube-controller-manager-ci-4459-2-4-n-08a122edc2" Apr 23 23:14:36.229794 sudo[2795]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Apr 23 23:14:36.230074 sudo[2795]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Apr 23 23:14:36.583836 sudo[2795]: pam_unix(sudo:session): session closed for user root Apr 23 23:14:36.840816 kubelet[2756]: I0423 23:14:36.840513 2756 apiserver.go:52] "Watching apiserver" Apr 23 23:14:36.856958 kubelet[2756]: I0423 23:14:36.856910 2756 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 23 23:14:36.923229 kubelet[2756]: I0423 23:14:36.923195 2756 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459-2-4-n-08a122edc2" Apr 23 23:14:36.923655 kubelet[2756]: I0423 23:14:36.923634 2756 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459-2-4-n-08a122edc2" Apr 23 23:14:36.933607 kubelet[2756]: E0423 23:14:36.933557 2756 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459-2-4-n-08a122edc2\" already exists" pod="kube-system/kube-scheduler-ci-4459-2-4-n-08a122edc2" Apr 23 23:14:36.937715 kubelet[2756]: E0423 23:14:36.936587 2756 kubelet.go:3222] "Failed creating a mirror pod" err="pods 
\"kube-apiserver-ci-4459-2-4-n-08a122edc2\" already exists" pod="kube-system/kube-apiserver-ci-4459-2-4-n-08a122edc2" Apr 23 23:14:36.954091 kubelet[2756]: I0423 23:14:36.953422 2756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4459-2-4-n-08a122edc2" podStartSLOduration=2.953404414 podStartE2EDuration="2.953404414s" podCreationTimestamp="2026-04-23 23:14:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-23 23:14:36.951254303 +0000 UTC m=+1.199365544" watchObservedRunningTime="2026-04-23 23:14:36.953404414 +0000 UTC m=+1.201515655" Apr 23 23:14:36.984741 kubelet[2756]: I0423 23:14:36.984669 2756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4459-2-4-n-08a122edc2" podStartSLOduration=2.984638125 podStartE2EDuration="2.984638125s" podCreationTimestamp="2026-04-23 23:14:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-23 23:14:36.983751849 +0000 UTC m=+1.231863090" watchObservedRunningTime="2026-04-23 23:14:36.984638125 +0000 UTC m=+1.232749406" Apr 23 23:14:36.984912 kubelet[2756]: I0423 23:14:36.984769 2756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4459-2-4-n-08a122edc2" podStartSLOduration=2.984764045 podStartE2EDuration="2.984764045s" podCreationTimestamp="2026-04-23 23:14:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-23 23:14:36.968280233 +0000 UTC m=+1.216391474" watchObservedRunningTime="2026-04-23 23:14:36.984764045 +0000 UTC m=+1.232875286" Apr 23 23:14:38.260531 sudo[1809]: pam_unix(sudo:session): session closed for user root Apr 23 23:14:38.276753 sshd[1808]: Connection closed by 
50.85.169.122 port 48604 Apr 23 23:14:38.276975 sshd-session[1805]: pam_unix(sshd:session): session closed for user core Apr 23 23:14:38.283575 systemd[1]: sshd@6-138.199.150.149:22-50.85.169.122:48604.service: Deactivated successfully. Apr 23 23:14:38.287183 systemd[1]: session-7.scope: Deactivated successfully. Apr 23 23:14:38.287489 systemd[1]: session-7.scope: Consumed 7.946s CPU time, 266.1M memory peak. Apr 23 23:14:38.291042 systemd-logind[1483]: Session 7 logged out. Waiting for processes to exit. Apr 23 23:14:38.293126 systemd-logind[1483]: Removed session 7. Apr 23 23:14:40.693904 kubelet[2756]: I0423 23:14:40.693854 2756 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 23 23:14:40.694360 containerd[1511]: time="2026-04-23T23:14:40.694275184Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Apr 23 23:14:40.695164 kubelet[2756]: I0423 23:14:40.694613 2756 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 23 23:14:41.826144 systemd[1]: Created slice kubepods-besteffort-podd9b4bdf5_049f_4213_ab15_8e1f75a1a717.slice - libcontainer container kubepods-besteffort-podd9b4bdf5_049f_4213_ab15_8e1f75a1a717.slice. Apr 23 23:14:41.854645 systemd[1]: Created slice kubepods-burstable-pod27160d2e_7fb2_49bf_9ea8_dd843baea345.slice - libcontainer container kubepods-burstable-pod27160d2e_7fb2_49bf_9ea8_dd843baea345.slice. 
Apr 23 23:14:41.894375 kubelet[2756]: I0423 23:14:41.894321 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d9b4bdf5-049f-4213-ab15-8e1f75a1a717-kube-proxy\") pod \"kube-proxy-glqxj\" (UID: \"d9b4bdf5-049f-4213-ab15-8e1f75a1a717\") " pod="kube-system/kube-proxy-glqxj" Apr 23 23:14:41.894375 kubelet[2756]: I0423 23:14:41.894373 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zj95q\" (UniqueName: \"kubernetes.io/projected/d9b4bdf5-049f-4213-ab15-8e1f75a1a717-kube-api-access-zj95q\") pod \"kube-proxy-glqxj\" (UID: \"d9b4bdf5-049f-4213-ab15-8e1f75a1a717\") " pod="kube-system/kube-proxy-glqxj" Apr 23 23:14:41.895859 kubelet[2756]: I0423 23:14:41.894392 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/27160d2e-7fb2-49bf-9ea8-dd843baea345-bpf-maps\") pod \"cilium-4z52c\" (UID: \"27160d2e-7fb2-49bf-9ea8-dd843baea345\") " pod="kube-system/cilium-4z52c" Apr 23 23:14:41.895859 kubelet[2756]: I0423 23:14:41.894406 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/27160d2e-7fb2-49bf-9ea8-dd843baea345-cilium-cgroup\") pod \"cilium-4z52c\" (UID: \"27160d2e-7fb2-49bf-9ea8-dd843baea345\") " pod="kube-system/cilium-4z52c" Apr 23 23:14:41.895859 kubelet[2756]: I0423 23:14:41.894440 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/27160d2e-7fb2-49bf-9ea8-dd843baea345-lib-modules\") pod \"cilium-4z52c\" (UID: \"27160d2e-7fb2-49bf-9ea8-dd843baea345\") " pod="kube-system/cilium-4z52c" Apr 23 23:14:41.895859 kubelet[2756]: I0423 23:14:41.894457 2756 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/27160d2e-7fb2-49bf-9ea8-dd843baea345-cilium-config-path\") pod \"cilium-4z52c\" (UID: \"27160d2e-7fb2-49bf-9ea8-dd843baea345\") " pod="kube-system/cilium-4z52c" Apr 23 23:14:41.895859 kubelet[2756]: I0423 23:14:41.894472 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/27160d2e-7fb2-49bf-9ea8-dd843baea345-host-proc-sys-net\") pod \"cilium-4z52c\" (UID: \"27160d2e-7fb2-49bf-9ea8-dd843baea345\") " pod="kube-system/cilium-4z52c" Apr 23 23:14:41.895859 kubelet[2756]: I0423 23:14:41.894499 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d9b4bdf5-049f-4213-ab15-8e1f75a1a717-lib-modules\") pod \"kube-proxy-glqxj\" (UID: \"d9b4bdf5-049f-4213-ab15-8e1f75a1a717\") " pod="kube-system/kube-proxy-glqxj" Apr 23 23:14:41.895988 kubelet[2756]: I0423 23:14:41.894514 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/27160d2e-7fb2-49bf-9ea8-dd843baea345-hostproc\") pod \"cilium-4z52c\" (UID: \"27160d2e-7fb2-49bf-9ea8-dd843baea345\") " pod="kube-system/cilium-4z52c" Apr 23 23:14:41.895988 kubelet[2756]: I0423 23:14:41.894530 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/27160d2e-7fb2-49bf-9ea8-dd843baea345-etc-cni-netd\") pod \"cilium-4z52c\" (UID: \"27160d2e-7fb2-49bf-9ea8-dd843baea345\") " pod="kube-system/cilium-4z52c" Apr 23 23:14:41.895988 kubelet[2756]: I0423 23:14:41.894546 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/27160d2e-7fb2-49bf-9ea8-dd843baea345-clustermesh-secrets\") pod \"cilium-4z52c\" (UID: \"27160d2e-7fb2-49bf-9ea8-dd843baea345\") " pod="kube-system/cilium-4z52c" Apr 23 23:14:41.895988 kubelet[2756]: I0423 23:14:41.894560 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/27160d2e-7fb2-49bf-9ea8-dd843baea345-host-proc-sys-kernel\") pod \"cilium-4z52c\" (UID: \"27160d2e-7fb2-49bf-9ea8-dd843baea345\") " pod="kube-system/cilium-4z52c" Apr 23 23:14:41.895988 kubelet[2756]: I0423 23:14:41.894583 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/27160d2e-7fb2-49bf-9ea8-dd843baea345-cilium-run\") pod \"cilium-4z52c\" (UID: \"27160d2e-7fb2-49bf-9ea8-dd843baea345\") " pod="kube-system/cilium-4z52c" Apr 23 23:14:41.895988 kubelet[2756]: I0423 23:14:41.894598 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/27160d2e-7fb2-49bf-9ea8-dd843baea345-xtables-lock\") pod \"cilium-4z52c\" (UID: \"27160d2e-7fb2-49bf-9ea8-dd843baea345\") " pod="kube-system/cilium-4z52c" Apr 23 23:14:41.896118 kubelet[2756]: I0423 23:14:41.894613 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/27160d2e-7fb2-49bf-9ea8-dd843baea345-hubble-tls\") pod \"cilium-4z52c\" (UID: \"27160d2e-7fb2-49bf-9ea8-dd843baea345\") " pod="kube-system/cilium-4z52c" Apr 23 23:14:41.896118 kubelet[2756]: I0423 23:14:41.894626 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zf4r\" (UniqueName: \"kubernetes.io/projected/27160d2e-7fb2-49bf-9ea8-dd843baea345-kube-api-access-8zf4r\") pod \"cilium-4z52c\" (UID: 
\"27160d2e-7fb2-49bf-9ea8-dd843baea345\") " pod="kube-system/cilium-4z52c" Apr 23 23:14:41.896118 kubelet[2756]: I0423 23:14:41.894649 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d9b4bdf5-049f-4213-ab15-8e1f75a1a717-xtables-lock\") pod \"kube-proxy-glqxj\" (UID: \"d9b4bdf5-049f-4213-ab15-8e1f75a1a717\") " pod="kube-system/kube-proxy-glqxj" Apr 23 23:14:41.896118 kubelet[2756]: I0423 23:14:41.894679 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/27160d2e-7fb2-49bf-9ea8-dd843baea345-cni-path\") pod \"cilium-4z52c\" (UID: \"27160d2e-7fb2-49bf-9ea8-dd843baea345\") " pod="kube-system/cilium-4z52c" Apr 23 23:14:41.969450 systemd[1]: Created slice kubepods-besteffort-pode34d5373_becc_4121_908b_e6bf799173fd.slice - libcontainer container kubepods-besteffort-pode34d5373_becc_4121_908b_e6bf799173fd.slice. 
Apr 23 23:14:41.998144 kubelet[2756]: I0423 23:14:41.997942 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5xbl\" (UniqueName: \"kubernetes.io/projected/e34d5373-becc-4121-908b-e6bf799173fd-kube-api-access-j5xbl\") pod \"cilium-operator-6f9c7c5859-9kwwd\" (UID: \"e34d5373-becc-4121-908b-e6bf799173fd\") " pod="kube-system/cilium-operator-6f9c7c5859-9kwwd" Apr 23 23:14:41.998732 kubelet[2756]: I0423 23:14:41.998328 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e34d5373-becc-4121-908b-e6bf799173fd-cilium-config-path\") pod \"cilium-operator-6f9c7c5859-9kwwd\" (UID: \"e34d5373-becc-4121-908b-e6bf799173fd\") " pod="kube-system/cilium-operator-6f9c7c5859-9kwwd" Apr 23 23:14:42.141783 containerd[1511]: time="2026-04-23T23:14:42.141621872Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-glqxj,Uid:d9b4bdf5-049f-4213-ab15-8e1f75a1a717,Namespace:kube-system,Attempt:0,}" Apr 23 23:14:42.160724 containerd[1511]: time="2026-04-23T23:14:42.160663647Z" level=info msg="connecting to shim fc8aa9db90a1c97f84ef1012845cd0bdf9700edc92f8a7127a24a33f80111da9" address="unix:///run/containerd/s/49c2fa7218098f2ca9cca9dcaf845606fe4b27e7523ccddf8feddfdfcf892d08" namespace=k8s.io protocol=ttrpc version=3 Apr 23 23:14:42.162496 containerd[1511]: time="2026-04-23T23:14:42.162281602Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4z52c,Uid:27160d2e-7fb2-49bf-9ea8-dd843baea345,Namespace:kube-system,Attempt:0,}" Apr 23 23:14:42.185970 systemd[1]: Started cri-containerd-fc8aa9db90a1c97f84ef1012845cd0bdf9700edc92f8a7127a24a33f80111da9.scope - libcontainer container fc8aa9db90a1c97f84ef1012845cd0bdf9700edc92f8a7127a24a33f80111da9. 
Apr 23 23:14:42.190645 containerd[1511]: time="2026-04-23T23:14:42.190392627Z" level=info msg="connecting to shim 4d87aba632fe564e490fa6e648b734efe7e65263436d6e2bbacd0a63393a80d1" address="unix:///run/containerd/s/d5ad06593cdc5b309d92f1873c578fce5a9829f739b060965a28722cea9e6960" namespace=k8s.io protocol=ttrpc version=3
Apr 23 23:14:42.218985 systemd[1]: Started cri-containerd-4d87aba632fe564e490fa6e648b734efe7e65263436d6e2bbacd0a63393a80d1.scope - libcontainer container 4d87aba632fe564e490fa6e648b734efe7e65263436d6e2bbacd0a63393a80d1.
Apr 23 23:14:42.225282 containerd[1511]: time="2026-04-23T23:14:42.224720070Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-glqxj,Uid:d9b4bdf5-049f-4213-ab15-8e1f75a1a717,Namespace:kube-system,Attempt:0,} returns sandbox id \"fc8aa9db90a1c97f84ef1012845cd0bdf9700edc92f8a7127a24a33f80111da9\""
Apr 23 23:14:42.234009 containerd[1511]: time="2026-04-23T23:14:42.233807639Z" level=info msg="CreateContainer within sandbox \"fc8aa9db90a1c97f84ef1012845cd0bdf9700edc92f8a7127a24a33f80111da9\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Apr 23 23:14:42.248105 containerd[1511]: time="2026-04-23T23:14:42.248065151Z" level=info msg="Container 5039134035560d047907ea5fc0be1ad21df5ec71ca1a47609164e2cb5e8439fd: CDI devices from CRI Config.CDIDevices: []"
Apr 23 23:14:42.255765 containerd[1511]: time="2026-04-23T23:14:42.255697325Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4z52c,Uid:27160d2e-7fb2-49bf-9ea8-dd843baea345,Namespace:kube-system,Attempt:0,} returns sandbox id \"4d87aba632fe564e490fa6e648b734efe7e65263436d6e2bbacd0a63393a80d1\""
Apr 23 23:14:42.257227 containerd[1511]: time="2026-04-23T23:14:42.257137800Z" level=info msg="CreateContainer within sandbox \"fc8aa9db90a1c97f84ef1012845cd0bdf9700edc92f8a7127a24a33f80111da9\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"5039134035560d047907ea5fc0be1ad21df5ec71ca1a47609164e2cb5e8439fd\""
Apr 23 23:14:42.257602 containerd[1511]: time="2026-04-23T23:14:42.257575759Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Apr 23 23:14:42.258303 containerd[1511]: time="2026-04-23T23:14:42.258253277Z" level=info msg="StartContainer for \"5039134035560d047907ea5fc0be1ad21df5ec71ca1a47609164e2cb5e8439fd\""
Apr 23 23:14:42.261138 containerd[1511]: time="2026-04-23T23:14:42.261103587Z" level=info msg="connecting to shim 5039134035560d047907ea5fc0be1ad21df5ec71ca1a47609164e2cb5e8439fd" address="unix:///run/containerd/s/49c2fa7218098f2ca9cca9dcaf845606fe4b27e7523ccddf8feddfdfcf892d08" protocol=ttrpc version=3
Apr 23 23:14:42.275314 containerd[1511]: time="2026-04-23T23:14:42.275276059Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-9kwwd,Uid:e34d5373-becc-4121-908b-e6bf799173fd,Namespace:kube-system,Attempt:0,}"
Apr 23 23:14:42.281881 systemd[1]: Started cri-containerd-5039134035560d047907ea5fc0be1ad21df5ec71ca1a47609164e2cb5e8439fd.scope - libcontainer container 5039134035560d047907ea5fc0be1ad21df5ec71ca1a47609164e2cb5e8439fd.
Apr 23 23:14:42.294442 containerd[1511]: time="2026-04-23T23:14:42.294382274Z" level=info msg="connecting to shim 23890f37c491a649611c3131fd0b62333d0f1b4e6a6e2c3876b6bcc088fc4c74" address="unix:///run/containerd/s/57b9180a6c008361f9ffbd8457c36030c72ad5940cfcd0b4a2d3195625b85bcf" namespace=k8s.io protocol=ttrpc version=3
Apr 23 23:14:42.328000 systemd[1]: Started cri-containerd-23890f37c491a649611c3131fd0b62333d0f1b4e6a6e2c3876b6bcc088fc4c74.scope - libcontainer container 23890f37c491a649611c3131fd0b62333d0f1b4e6a6e2c3876b6bcc088fc4c74.
Apr 23 23:14:42.370974 containerd[1511]: time="2026-04-23T23:14:42.370905775Z" level=info msg="StartContainer for \"5039134035560d047907ea5fc0be1ad21df5ec71ca1a47609164e2cb5e8439fd\" returns successfully"
Apr 23 23:14:42.388884 containerd[1511]: time="2026-04-23T23:14:42.388823834Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-9kwwd,Uid:e34d5373-becc-4121-908b-e6bf799173fd,Namespace:kube-system,Attempt:0,} returns sandbox id \"23890f37c491a649611c3131fd0b62333d0f1b4e6a6e2c3876b6bcc088fc4c74\""
Apr 23 23:14:46.268587 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1815411060.mount: Deactivated successfully.
Apr 23 23:14:47.656107 containerd[1511]: time="2026-04-23T23:14:47.656019650Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 23 23:14:47.658024 containerd[1511]: time="2026-04-23T23:14:47.657989004Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710"
Apr 23 23:14:47.660750 containerd[1511]: time="2026-04-23T23:14:47.660100158Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 23 23:14:47.663074 containerd[1511]: time="2026-04-23T23:14:47.663040670Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 5.405336432s"
Apr 23 23:14:47.663188 containerd[1511]: time="2026-04-23T23:14:47.663173189Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\""
Apr 23 23:14:47.670561 containerd[1511]: time="2026-04-23T23:14:47.670501727Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Apr 23 23:14:47.678201 containerd[1511]: time="2026-04-23T23:14:47.678148865Z" level=info msg="CreateContainer within sandbox \"4d87aba632fe564e490fa6e648b734efe7e65263436d6e2bbacd0a63393a80d1\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Apr 23 23:14:47.689252 containerd[1511]: time="2026-04-23T23:14:47.689207632Z" level=info msg="Container d0501e0256930d6030924a03380fcc919a0d27df5ab59d4d8e43f12c92977571: CDI devices from CRI Config.CDIDevices: []"
Apr 23 23:14:47.693017 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1241675261.mount: Deactivated successfully.
Apr 23 23:14:47.695839 containerd[1511]: time="2026-04-23T23:14:47.695736693Z" level=info msg="CreateContainer within sandbox \"4d87aba632fe564e490fa6e648b734efe7e65263436d6e2bbacd0a63393a80d1\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d0501e0256930d6030924a03380fcc919a0d27df5ab59d4d8e43f12c92977571\""
Apr 23 23:14:47.696249 containerd[1511]: time="2026-04-23T23:14:47.696223211Z" level=info msg="StartContainer for \"d0501e0256930d6030924a03380fcc919a0d27df5ab59d4d8e43f12c92977571\""
Apr 23 23:14:47.699074 containerd[1511]: time="2026-04-23T23:14:47.699018483Z" level=info msg="connecting to shim d0501e0256930d6030924a03380fcc919a0d27df5ab59d4d8e43f12c92977571" address="unix:///run/containerd/s/d5ad06593cdc5b309d92f1873c578fce5a9829f739b060965a28722cea9e6960" protocol=ttrpc version=3
Apr 23 23:14:47.722030 systemd[1]: Started cri-containerd-d0501e0256930d6030924a03380fcc919a0d27df5ab59d4d8e43f12c92977571.scope - libcontainer container d0501e0256930d6030924a03380fcc919a0d27df5ab59d4d8e43f12c92977571.
Apr 23 23:14:47.756973 containerd[1511]: time="2026-04-23T23:14:47.756937271Z" level=info msg="StartContainer for \"d0501e0256930d6030924a03380fcc919a0d27df5ab59d4d8e43f12c92977571\" returns successfully"
Apr 23 23:14:47.771540 systemd[1]: cri-containerd-d0501e0256930d6030924a03380fcc919a0d27df5ab59d4d8e43f12c92977571.scope: Deactivated successfully.
Apr 23 23:14:47.774190 containerd[1511]: time="2026-04-23T23:14:47.774044981Z" level=info msg="received container exit event container_id:\"d0501e0256930d6030924a03380fcc919a0d27df5ab59d4d8e43f12c92977571\" id:\"d0501e0256930d6030924a03380fcc919a0d27df5ab59d4d8e43f12c92977571\" pid:3179 exited_at:{seconds:1776986087 nanos:773491942}"
Apr 23 23:14:47.796011 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d0501e0256930d6030924a03380fcc919a0d27df5ab59d4d8e43f12c92977571-rootfs.mount: Deactivated successfully.
Apr 23 23:14:47.988082 kubelet[2756]: I0423 23:14:47.987512 2756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-glqxj" podStartSLOduration=6.987497548 podStartE2EDuration="6.987497548s" podCreationTimestamp="2026-04-23 23:14:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-23 23:14:42.960913856 +0000 UTC m=+7.209025097" watchObservedRunningTime="2026-04-23 23:14:47.987497548 +0000 UTC m=+12.235608789"
Apr 23 23:14:48.982950 containerd[1511]: time="2026-04-23T23:14:48.982910667Z" level=info msg="CreateContainer within sandbox \"4d87aba632fe564e490fa6e648b734efe7e65263436d6e2bbacd0a63393a80d1\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Apr 23 23:14:48.992500 containerd[1511]: time="2026-04-23T23:14:48.992460640Z" level=info msg="Container 8eff615b5b15cb6ac861b7fbc612308604acdb7172371af6ec54419c8cbe7a25: CDI devices from CRI Config.CDIDevices: []"
Apr 23 23:14:49.002567 containerd[1511]: time="2026-04-23T23:14:49.002490931Z" level=info msg="CreateContainer within sandbox \"4d87aba632fe564e490fa6e648b734efe7e65263436d6e2bbacd0a63393a80d1\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"8eff615b5b15cb6ac861b7fbc612308604acdb7172371af6ec54419c8cbe7a25\""
Apr 23 23:14:49.003446 containerd[1511]: time="2026-04-23T23:14:49.003395048Z" level=info msg="StartContainer for \"8eff615b5b15cb6ac861b7fbc612308604acdb7172371af6ec54419c8cbe7a25\""
Apr 23 23:14:49.004819 containerd[1511]: time="2026-04-23T23:14:49.004785444Z" level=info msg="connecting to shim 8eff615b5b15cb6ac861b7fbc612308604acdb7172371af6ec54419c8cbe7a25" address="unix:///run/containerd/s/d5ad06593cdc5b309d92f1873c578fce5a9829f739b060965a28722cea9e6960" protocol=ttrpc version=3
Apr 23 23:14:49.029062 systemd[1]: Started cri-containerd-8eff615b5b15cb6ac861b7fbc612308604acdb7172371af6ec54419c8cbe7a25.scope - libcontainer container 8eff615b5b15cb6ac861b7fbc612308604acdb7172371af6ec54419c8cbe7a25.
Apr 23 23:14:49.069826 containerd[1511]: time="2026-04-23T23:14:49.069788780Z" level=info msg="StartContainer for \"8eff615b5b15cb6ac861b7fbc612308604acdb7172371af6ec54419c8cbe7a25\" returns successfully"
Apr 23 23:14:49.086311 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 23 23:14:49.086628 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 23 23:14:49.087070 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Apr 23 23:14:49.089663 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 23 23:14:49.091387 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Apr 23 23:14:49.097340 systemd[1]: cri-containerd-8eff615b5b15cb6ac861b7fbc612308604acdb7172371af6ec54419c8cbe7a25.scope: Deactivated successfully.
Apr 23 23:14:49.102385 containerd[1511]: time="2026-04-23T23:14:49.102339808Z" level=info msg="received container exit event container_id:\"8eff615b5b15cb6ac861b7fbc612308604acdb7172371af6ec54419c8cbe7a25\" id:\"8eff615b5b15cb6ac861b7fbc612308604acdb7172371af6ec54419c8cbe7a25\" pid:3226 exited_at:{seconds:1776986089 nanos:101261291}"
Apr 23 23:14:49.122581 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 23 23:14:49.561830 containerd[1511]: time="2026-04-23T23:14:49.561762349Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 23 23:14:49.563172 containerd[1511]: time="2026-04-23T23:14:49.562988946Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306"
Apr 23 23:14:49.564087 containerd[1511]: time="2026-04-23T23:14:49.564046903Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 23 23:14:49.566191 containerd[1511]: time="2026-04-23T23:14:49.566157737Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.89559117s"
Apr 23 23:14:49.566333 containerd[1511]: time="2026-04-23T23:14:49.566310736Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
Apr 23 23:14:49.571152 containerd[1511]: time="2026-04-23T23:14:49.571126403Z" level=info msg="CreateContainer within sandbox \"23890f37c491a649611c3131fd0b62333d0f1b4e6a6e2c3876b6bcc088fc4c74\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Apr 23 23:14:49.579874 containerd[1511]: time="2026-04-23T23:14:49.579208980Z" level=info msg="Container 2e78a1f9ff90ad8da2af3d5fd9587f96b1a5ca0d77333a6760eb54aae9e5b009: CDI devices from CRI Config.CDIDevices: []"
Apr 23 23:14:49.586174 containerd[1511]: time="2026-04-23T23:14:49.586039600Z" level=info msg="CreateContainer within sandbox \"23890f37c491a649611c3131fd0b62333d0f1b4e6a6e2c3876b6bcc088fc4c74\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"2e78a1f9ff90ad8da2af3d5fd9587f96b1a5ca0d77333a6760eb54aae9e5b009\""
Apr 23 23:14:49.587745 containerd[1511]: time="2026-04-23T23:14:49.587720236Z" level=info msg="StartContainer for \"2e78a1f9ff90ad8da2af3d5fd9587f96b1a5ca0d77333a6760eb54aae9e5b009\""
Apr 23 23:14:49.589957 containerd[1511]: time="2026-04-23T23:14:49.589655430Z" level=info msg="connecting to shim 2e78a1f9ff90ad8da2af3d5fd9587f96b1a5ca0d77333a6760eb54aae9e5b009" address="unix:///run/containerd/s/57b9180a6c008361f9ffbd8457c36030c72ad5940cfcd0b4a2d3195625b85bcf" protocol=ttrpc version=3
Apr 23 23:14:49.607946 systemd[1]: Started cri-containerd-2e78a1f9ff90ad8da2af3d5fd9587f96b1a5ca0d77333a6760eb54aae9e5b009.scope - libcontainer container 2e78a1f9ff90ad8da2af3d5fd9587f96b1a5ca0d77333a6760eb54aae9e5b009.
Apr 23 23:14:49.646672 containerd[1511]: time="2026-04-23T23:14:49.646572749Z" level=info msg="StartContainer for \"2e78a1f9ff90ad8da2af3d5fd9587f96b1a5ca0d77333a6760eb54aae9e5b009\" returns successfully"
Apr 23 23:14:49.988005 containerd[1511]: time="2026-04-23T23:14:49.987884784Z" level=info msg="CreateContainer within sandbox \"4d87aba632fe564e490fa6e648b734efe7e65263436d6e2bbacd0a63393a80d1\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Apr 23 23:14:49.995042 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8eff615b5b15cb6ac861b7fbc612308604acdb7172371af6ec54419c8cbe7a25-rootfs.mount: Deactivated successfully.
Apr 23 23:14:50.005981 kubelet[2756]: I0423 23:14:50.005850 2756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6f9c7c5859-9kwwd" podStartSLOduration=1.8291215090000001 podStartE2EDuration="9.005833974s" podCreationTimestamp="2026-04-23 23:14:41 +0000 UTC" firstStartedPulling="2026-04-23 23:14:42.390474789 +0000 UTC m=+6.638585990" lastFinishedPulling="2026-04-23 23:14:49.567187214 +0000 UTC m=+13.815298455" observedRunningTime="2026-04-23 23:14:50.002697862 +0000 UTC m=+14.250809103" watchObservedRunningTime="2026-04-23 23:14:50.005833974 +0000 UTC m=+14.253945215"
Apr 23 23:14:50.012730 containerd[1511]: time="2026-04-23T23:14:50.011775557Z" level=info msg="Container b94902ca617b14d7cdb0a9667970b21f001deb001ea82168fa3a92fd69ca94c2: CDI devices from CRI Config.CDIDevices: []"
Apr 23 23:14:50.022528 containerd[1511]: time="2026-04-23T23:14:50.022365088Z" level=info msg="CreateContainer within sandbox \"4d87aba632fe564e490fa6e648b734efe7e65263436d6e2bbacd0a63393a80d1\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b94902ca617b14d7cdb0a9667970b21f001deb001ea82168fa3a92fd69ca94c2\""
Apr 23 23:14:50.024835 containerd[1511]: time="2026-04-23T23:14:50.024809321Z" level=info msg="StartContainer for \"b94902ca617b14d7cdb0a9667970b21f001deb001ea82168fa3a92fd69ca94c2\""
Apr 23 23:14:50.028732 containerd[1511]: time="2026-04-23T23:14:50.027394634Z" level=info msg="connecting to shim b94902ca617b14d7cdb0a9667970b21f001deb001ea82168fa3a92fd69ca94c2" address="unix:///run/containerd/s/d5ad06593cdc5b309d92f1873c578fce5a9829f739b060965a28722cea9e6960" protocol=ttrpc version=3
Apr 23 23:14:50.077883 systemd[1]: Started cri-containerd-b94902ca617b14d7cdb0a9667970b21f001deb001ea82168fa3a92fd69ca94c2.scope - libcontainer container b94902ca617b14d7cdb0a9667970b21f001deb001ea82168fa3a92fd69ca94c2.
Apr 23 23:14:50.172996 containerd[1511]: time="2026-04-23T23:14:50.172959391Z" level=info msg="StartContainer for \"b94902ca617b14d7cdb0a9667970b21f001deb001ea82168fa3a92fd69ca94c2\" returns successfully"
Apr 23 23:14:50.195686 systemd[1]: cri-containerd-b94902ca617b14d7cdb0a9667970b21f001deb001ea82168fa3a92fd69ca94c2.scope: Deactivated successfully.
Apr 23 23:14:50.198514 containerd[1511]: time="2026-04-23T23:14:50.198482321Z" level=info msg="received container exit event container_id:\"b94902ca617b14d7cdb0a9667970b21f001deb001ea82168fa3a92fd69ca94c2\" id:\"b94902ca617b14d7cdb0a9667970b21f001deb001ea82168fa3a92fd69ca94c2\" pid:3322 exited_at:{seconds:1776986090 nanos:198018762}"
Apr 23 23:14:50.232974 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b94902ca617b14d7cdb0a9667970b21f001deb001ea82168fa3a92fd69ca94c2-rootfs.mount: Deactivated successfully.
Apr 23 23:14:50.996887 containerd[1511]: time="2026-04-23T23:14:50.996845832Z" level=info msg="CreateContainer within sandbox \"4d87aba632fe564e490fa6e648b734efe7e65263436d6e2bbacd0a63393a80d1\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Apr 23 23:14:51.010989 containerd[1511]: time="2026-04-23T23:14:51.010943354Z" level=info msg="Container 00bc0fab3f6b3a7a94421eccfb745f7a5071b9d98f408a31628805d4a913d097: CDI devices from CRI Config.CDIDevices: []"
Apr 23 23:14:51.022320 containerd[1511]: time="2026-04-23T23:14:51.021734084Z" level=info msg="CreateContainer within sandbox \"4d87aba632fe564e490fa6e648b734efe7e65263436d6e2bbacd0a63393a80d1\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"00bc0fab3f6b3a7a94421eccfb745f7a5071b9d98f408a31628805d4a913d097\""
Apr 23 23:14:51.024734 containerd[1511]: time="2026-04-23T23:14:51.023637439Z" level=info msg="StartContainer for \"00bc0fab3f6b3a7a94421eccfb745f7a5071b9d98f408a31628805d4a913d097\""
Apr 23 23:14:51.024734 containerd[1511]: time="2026-04-23T23:14:51.024691076Z" level=info msg="connecting to shim 00bc0fab3f6b3a7a94421eccfb745f7a5071b9d98f408a31628805d4a913d097" address="unix:///run/containerd/s/d5ad06593cdc5b309d92f1873c578fce5a9829f739b060965a28722cea9e6960" protocol=ttrpc version=3
Apr 23 23:14:51.047897 systemd[1]: Started cri-containerd-00bc0fab3f6b3a7a94421eccfb745f7a5071b9d98f408a31628805d4a913d097.scope - libcontainer container 00bc0fab3f6b3a7a94421eccfb745f7a5071b9d98f408a31628805d4a913d097.
Apr 23 23:14:51.077766 systemd[1]: cri-containerd-00bc0fab3f6b3a7a94421eccfb745f7a5071b9d98f408a31628805d4a913d097.scope: Deactivated successfully.
Apr 23 23:14:51.082065 containerd[1511]: time="2026-04-23T23:14:51.081430403Z" level=info msg="received container exit event container_id:\"00bc0fab3f6b3a7a94421eccfb745f7a5071b9d98f408a31628805d4a913d097\" id:\"00bc0fab3f6b3a7a94421eccfb745f7a5071b9d98f408a31628805d4a913d097\" pid:3361 exited_at:{seconds:1776986091 nanos:79593928}"
Apr 23 23:14:51.089387 containerd[1511]: time="2026-04-23T23:14:51.089347821Z" level=info msg="StartContainer for \"00bc0fab3f6b3a7a94421eccfb745f7a5071b9d98f408a31628805d4a913d097\" returns successfully"
Apr 23 23:14:51.104359 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-00bc0fab3f6b3a7a94421eccfb745f7a5071b9d98f408a31628805d4a913d097-rootfs.mount: Deactivated successfully.
Apr 23 23:14:52.007362 containerd[1511]: time="2026-04-23T23:14:52.007315655Z" level=info msg="CreateContainer within sandbox \"4d87aba632fe564e490fa6e648b734efe7e65263436d6e2bbacd0a63393a80d1\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Apr 23 23:14:52.025206 containerd[1511]: time="2026-04-23T23:14:52.023401932Z" level=info msg="Container a3935000aed77b61c3f9f2c586fc1e9d8b1c36126dc0b4de22cc09d034aabb2c: CDI devices from CRI Config.CDIDevices: []"
Apr 23 23:14:52.034411 containerd[1511]: time="2026-04-23T23:14:52.034362983Z" level=info msg="CreateContainer within sandbox \"4d87aba632fe564e490fa6e648b734efe7e65263436d6e2bbacd0a63393a80d1\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"a3935000aed77b61c3f9f2c586fc1e9d8b1c36126dc0b4de22cc09d034aabb2c\""
Apr 23 23:14:52.035143 containerd[1511]: time="2026-04-23T23:14:52.035109101Z" level=info msg="StartContainer for \"a3935000aed77b61c3f9f2c586fc1e9d8b1c36126dc0b4de22cc09d034aabb2c\""
Apr 23 23:14:52.036454 containerd[1511]: time="2026-04-23T23:14:52.036422538Z" level=info msg="connecting to shim a3935000aed77b61c3f9f2c586fc1e9d8b1c36126dc0b4de22cc09d034aabb2c" address="unix:///run/containerd/s/d5ad06593cdc5b309d92f1873c578fce5a9829f739b060965a28722cea9e6960" protocol=ttrpc version=3
Apr 23 23:14:52.071881 systemd[1]: Started cri-containerd-a3935000aed77b61c3f9f2c586fc1e9d8b1c36126dc0b4de22cc09d034aabb2c.scope - libcontainer container a3935000aed77b61c3f9f2c586fc1e9d8b1c36126dc0b4de22cc09d034aabb2c.
Apr 23 23:14:52.137872 containerd[1511]: time="2026-04-23T23:14:52.137676469Z" level=info msg="StartContainer for \"a3935000aed77b61c3f9f2c586fc1e9d8b1c36126dc0b4de22cc09d034aabb2c\" returns successfully"
Apr 23 23:14:52.283406 kubelet[2756]: I0423 23:14:52.283273 2756 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
Apr 23 23:14:52.330141 systemd[1]: Created slice kubepods-burstable-pod2a485ec8_339e_4f5d_9880_b9036e22a2c4.slice - libcontainer container kubepods-burstable-pod2a485ec8_339e_4f5d_9880_b9036e22a2c4.slice.
Apr 23 23:14:52.339596 systemd[1]: Created slice kubepods-burstable-pod92b2ea04_18b1_4639_bd3c_03bed3cc5b7b.slice - libcontainer container kubepods-burstable-pod92b2ea04_18b1_4639_bd3c_03bed3cc5b7b.slice.
Apr 23 23:14:52.374891 kubelet[2756]: I0423 23:14:52.373982 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vh66\" (UniqueName: \"kubernetes.io/projected/2a485ec8-339e-4f5d-9880-b9036e22a2c4-kube-api-access-8vh66\") pod \"coredns-66bc5c9577-rmlk8\" (UID: \"2a485ec8-339e-4f5d-9880-b9036e22a2c4\") " pod="kube-system/coredns-66bc5c9577-rmlk8"
Apr 23 23:14:52.374891 kubelet[2756]: I0423 23:14:52.374044 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/92b2ea04-18b1-4639-bd3c-03bed3cc5b7b-config-volume\") pod \"coredns-66bc5c9577-zdrjl\" (UID: \"92b2ea04-18b1-4639-bd3c-03bed3cc5b7b\") " pod="kube-system/coredns-66bc5c9577-zdrjl"
Apr 23 23:14:52.374891 kubelet[2756]: I0423 23:14:52.374069 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgr4f\" (UniqueName: \"kubernetes.io/projected/92b2ea04-18b1-4639-bd3c-03bed3cc5b7b-kube-api-access-wgr4f\") pod \"coredns-66bc5c9577-zdrjl\" (UID: \"92b2ea04-18b1-4639-bd3c-03bed3cc5b7b\") " pod="kube-system/coredns-66bc5c9577-zdrjl"
Apr 23 23:14:52.374891 kubelet[2756]: I0423 23:14:52.374100 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2a485ec8-339e-4f5d-9880-b9036e22a2c4-config-volume\") pod \"coredns-66bc5c9577-rmlk8\" (UID: \"2a485ec8-339e-4f5d-9880-b9036e22a2c4\") " pod="kube-system/coredns-66bc5c9577-rmlk8"
Apr 23 23:14:52.639456 containerd[1511]: time="2026-04-23T23:14:52.639411337Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-rmlk8,Uid:2a485ec8-339e-4f5d-9880-b9036e22a2c4,Namespace:kube-system,Attempt:0,}"
Apr 23 23:14:52.646147 containerd[1511]: time="2026-04-23T23:14:52.645791440Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-zdrjl,Uid:92b2ea04-18b1-4639-bd3c-03bed3cc5b7b,Namespace:kube-system,Attempt:0,}"
Apr 23 23:14:53.037684 kubelet[2756]: I0423 23:14:53.036897 2756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-4z52c" podStartSLOduration=6.62541763 podStartE2EDuration="12.036872603s" podCreationTimestamp="2026-04-23 23:14:41 +0000 UTC" firstStartedPulling="2026-04-23 23:14:42.25721672 +0000 UTC m=+6.505327961" lastFinishedPulling="2026-04-23 23:14:47.668671693 +0000 UTC m=+11.916782934" observedRunningTime="2026-04-23 23:14:53.036669764 +0000 UTC m=+17.284781005" watchObservedRunningTime="2026-04-23 23:14:53.036872603 +0000 UTC m=+17.284983884"
Apr 23 23:14:54.306504 systemd-networkd[1430]: cilium_host: Link UP
Apr 23 23:14:54.306617 systemd-networkd[1430]: cilium_net: Link UP
Apr 23 23:14:54.308696 systemd-networkd[1430]: cilium_host: Gained carrier
Apr 23 23:14:54.308940 systemd-networkd[1430]: cilium_net: Gained carrier
Apr 23 23:14:54.389840 systemd-networkd[1430]: cilium_host: Gained IPv6LL
Apr 23 23:14:54.421433 systemd-networkd[1430]: cilium_vxlan: Link UP
Apr 23 23:14:54.421443 systemd-networkd[1430]: cilium_vxlan: Gained carrier
Apr 23 23:14:54.599083 systemd-networkd[1430]: cilium_net: Gained IPv6LL
Apr 23 23:14:54.703768 kernel: NET: Registered PF_ALG protocol family
Apr 23 23:14:55.400586 systemd-networkd[1430]: lxc_health: Link UP
Apr 23 23:14:55.409329 systemd-networkd[1430]: lxc_health: Gained carrier
Apr 23 23:14:55.698681 kernel: eth0: renamed from tmp3a43c
Apr 23 23:14:55.698265 systemd-networkd[1430]: lxcb0c43b532926: Link UP
Apr 23 23:14:55.698445 systemd-networkd[1430]: lxc6c636cfc9262: Link UP
Apr 23 23:14:55.703791 kernel: eth0: renamed from tmp9228a
Apr 23 23:14:55.703789 systemd-networkd[1430]: lxcb0c43b532926: Gained carrier
Apr 23 23:14:55.706317 systemd-networkd[1430]: lxc6c636cfc9262: Gained carrier
Apr 23 23:14:56.397922 systemd-networkd[1430]: cilium_vxlan: Gained IPv6LL
Apr 23 23:14:57.229897 systemd-networkd[1430]: lxcb0c43b532926: Gained IPv6LL
Apr 23 23:14:57.293834 systemd-networkd[1430]: lxc6c636cfc9262: Gained IPv6LL
Apr 23 23:14:57.358922 systemd-networkd[1430]: lxc_health: Gained IPv6LL
Apr 23 23:14:59.603467 containerd[1511]: time="2026-04-23T23:14:59.603331703Z" level=info msg="connecting to shim 3a43c5cd61eb3821972ab20e05232161b2cd960bc70a852ce565ca0297b6c07d" address="unix:///run/containerd/s/f0b527e68e5ce6d6985447e59496170d318d8943089607b00267498d9cf1aa05" namespace=k8s.io protocol=ttrpc version=3
Apr 23 23:14:59.613900 containerd[1511]: time="2026-04-23T23:14:59.613834118Z" level=info msg="connecting to shim 9228adb9300f58df477c956843704b5b5afda5ef7c103b3dd24eaa8ea3551a11" address="unix:///run/containerd/s/6b6c34339d88df0f7e506e2f2cb233ca694ba2070ae4b3979f250ca27a4a7c8f" namespace=k8s.io protocol=ttrpc version=3
Apr 23 23:14:59.655997 systemd[1]: Started cri-containerd-3a43c5cd61eb3821972ab20e05232161b2cd960bc70a852ce565ca0297b6c07d.scope - libcontainer container 3a43c5cd61eb3821972ab20e05232161b2cd960bc70a852ce565ca0297b6c07d.
Apr 23 23:14:59.662850 systemd[1]: Started cri-containerd-9228adb9300f58df477c956843704b5b5afda5ef7c103b3dd24eaa8ea3551a11.scope - libcontainer container 9228adb9300f58df477c956843704b5b5afda5ef7c103b3dd24eaa8ea3551a11.
Apr 23 23:14:59.705852 containerd[1511]: time="2026-04-23T23:14:59.705803061Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-zdrjl,Uid:92b2ea04-18b1-4639-bd3c-03bed3cc5b7b,Namespace:kube-system,Attempt:0,} returns sandbox id \"3a43c5cd61eb3821972ab20e05232161b2cd960bc70a852ce565ca0297b6c07d\""
Apr 23 23:14:59.715616 containerd[1511]: time="2026-04-23T23:14:59.715551638Z" level=info msg="CreateContainer within sandbox \"3a43c5cd61eb3821972ab20e05232161b2cd960bc70a852ce565ca0297b6c07d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Apr 23 23:14:59.740032 containerd[1511]: time="2026-04-23T23:14:59.739947060Z" level=info msg="Container 482a815e7932d7c03ca862cbc3bfb0fc588e7829799b7fbe67c1024ea91cc060: CDI devices from CRI Config.CDIDevices: []"
Apr 23 23:14:59.749807 containerd[1511]: time="2026-04-23T23:14:59.749669797Z" level=info msg="CreateContainer within sandbox \"3a43c5cd61eb3821972ab20e05232161b2cd960bc70a852ce565ca0297b6c07d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"482a815e7932d7c03ca862cbc3bfb0fc588e7829799b7fbe67c1024ea91cc060\""
Apr 23 23:14:59.750404 containerd[1511]: time="2026-04-23T23:14:59.750299276Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-rmlk8,Uid:2a485ec8-339e-4f5d-9880-b9036e22a2c4,Namespace:kube-system,Attempt:0,} returns sandbox id \"9228adb9300f58df477c956843704b5b5afda5ef7c103b3dd24eaa8ea3551a11\""
Apr 23 23:14:59.751173 containerd[1511]: time="2026-04-23T23:14:59.751137074Z" level=info msg="StartContainer for \"482a815e7932d7c03ca862cbc3bfb0fc588e7829799b7fbe67c1024ea91cc060\""
Apr 23 23:14:59.752542 containerd[1511]: time="2026-04-23T23:14:59.752503031Z" level=info msg="connecting to shim 482a815e7932d7c03ca862cbc3bfb0fc588e7829799b7fbe67c1024ea91cc060" address="unix:///run/containerd/s/f0b527e68e5ce6d6985447e59496170d318d8943089607b00267498d9cf1aa05" protocol=ttrpc version=3
Apr 23 23:14:59.759718 containerd[1511]: time="2026-04-23T23:14:59.759427294Z" level=info msg="CreateContainer within sandbox \"9228adb9300f58df477c956843704b5b5afda5ef7c103b3dd24eaa8ea3551a11\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Apr 23 23:14:59.774332 containerd[1511]: time="2026-04-23T23:14:59.774277979Z" level=info msg="Container 7bc5478f816c243753aaf50586963fe3384f92aba8f1b85366d7b9277db32603: CDI devices from CRI Config.CDIDevices: []"
Apr 23 23:14:59.779892 systemd[1]: Started cri-containerd-482a815e7932d7c03ca862cbc3bfb0fc588e7829799b7fbe67c1024ea91cc060.scope - libcontainer container 482a815e7932d7c03ca862cbc3bfb0fc588e7829799b7fbe67c1024ea91cc060.
Apr 23 23:14:59.785226 containerd[1511]: time="2026-04-23T23:14:59.785118274Z" level=info msg="CreateContainer within sandbox \"9228adb9300f58df477c956843704b5b5afda5ef7c103b3dd24eaa8ea3551a11\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7bc5478f816c243753aaf50586963fe3384f92aba8f1b85366d7b9277db32603\""
Apr 23 23:14:59.787228 containerd[1511]: time="2026-04-23T23:14:59.787189069Z" level=info msg="StartContainer for \"7bc5478f816c243753aaf50586963fe3384f92aba8f1b85366d7b9277db32603\""
Apr 23 23:14:59.789405 containerd[1511]: time="2026-04-23T23:14:59.789270184Z" level=info msg="connecting to shim 7bc5478f816c243753aaf50586963fe3384f92aba8f1b85366d7b9277db32603" address="unix:///run/containerd/s/6b6c34339d88df0f7e506e2f2cb233ca694ba2070ae4b3979f250ca27a4a7c8f" protocol=ttrpc version=3
Apr 23 23:14:59.816955 systemd[1]: Started cri-containerd-7bc5478f816c243753aaf50586963fe3384f92aba8f1b85366d7b9277db32603.scope - libcontainer container 7bc5478f816c243753aaf50586963fe3384f92aba8f1b85366d7b9277db32603.
Apr 23 23:14:59.824757 containerd[1511]: time="2026-04-23T23:14:59.824662380Z" level=info msg="StartContainer for \"482a815e7932d7c03ca862cbc3bfb0fc588e7829799b7fbe67c1024ea91cc060\" returns successfully"
Apr 23 23:14:59.860920 containerd[1511]: time="2026-04-23T23:14:59.860794175Z" level=info msg="StartContainer for \"7bc5478f816c243753aaf50586963fe3384f92aba8f1b85366d7b9277db32603\" returns successfully"
Apr 23 23:15:00.056328 kubelet[2756]: I0423 23:15:00.056251 2756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-rmlk8" podStartSLOduration=19.056226235 podStartE2EDuration="19.056226235s" podCreationTimestamp="2026-04-23 23:14:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-23 23:15:00.055338598 +0000 UTC m=+24.303449879" watchObservedRunningTime="2026-04-23 23:15:00.056226235 +0000 UTC m=+24.304337476"
Apr 23 23:15:00.588230 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2816142453.mount: Deactivated successfully.
Apr 23 23:15:06.394535 kubelet[2756]: I0423 23:15:06.394403 2756 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Apr 23 23:15:06.421726 kubelet[2756]: I0423 23:15:06.420437 2756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-zdrjl" podStartSLOduration=25.420421097 podStartE2EDuration="25.420421097s" podCreationTimestamp="2026-04-23 23:14:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-23 23:15:00.097740379 +0000 UTC m=+24.345851620" watchObservedRunningTime="2026-04-23 23:15:06.420421097 +0000 UTC m=+30.668532338"
Apr 23 23:16:47.602052 systemd[1]: Started sshd@7-138.199.150.149:22-50.85.169.122:43222.service - OpenSSH per-connection server daemon (50.85.169.122:43222).
Apr 23 23:16:47.724815 sshd[4091]: Accepted publickey for core from 50.85.169.122 port 43222 ssh2: RSA SHA256:Tz0dqMPsdf8xUb4jUaTJqqr7RT+Ihh1eVJlUIJQ/qIM
Apr 23 23:16:47.726925 sshd-session[4091]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 23 23:16:47.732555 systemd-logind[1483]: New session 8 of user core.
Apr 23 23:16:47.738930 systemd[1]: Started session-8.scope - Session 8 of User core.
Apr 23 23:16:47.868748 sshd[4094]: Connection closed by 50.85.169.122 port 43222
Apr 23 23:16:47.869865 sshd-session[4091]: pam_unix(sshd:session): session closed for user core
Apr 23 23:16:47.875748 systemd[1]: sshd@7-138.199.150.149:22-50.85.169.122:43222.service: Deactivated successfully.
Apr 23 23:16:47.878789 systemd[1]: session-8.scope: Deactivated successfully.
Apr 23 23:16:47.880020 systemd-logind[1483]: Session 8 logged out. Waiting for processes to exit.
Apr 23 23:16:47.882980 systemd-logind[1483]: Removed session 8.
Apr 23 23:16:52.901037 systemd[1]: Started sshd@8-138.199.150.149:22-50.85.169.122:54964.service - OpenSSH per-connection server daemon (50.85.169.122:54964).
Apr 23 23:16:53.031182 sshd[4106]: Accepted publickey for core from 50.85.169.122 port 54964 ssh2: RSA SHA256:Tz0dqMPsdf8xUb4jUaTJqqr7RT+Ihh1eVJlUIJQ/qIM
Apr 23 23:16:53.033811 sshd-session[4106]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 23 23:16:53.039889 systemd-logind[1483]: New session 9 of user core.
Apr 23 23:16:53.049104 systemd[1]: Started session-9.scope - Session 9 of User core.
Apr 23 23:16:53.167310 sshd[4109]: Connection closed by 50.85.169.122 port 54964
Apr 23 23:16:53.168511 sshd-session[4106]: pam_unix(sshd:session): session closed for user core
Apr 23 23:16:53.174782 systemd[1]: sshd@8-138.199.150.149:22-50.85.169.122:54964.service: Deactivated successfully.
Apr 23 23:16:53.178538 systemd[1]: session-9.scope: Deactivated successfully.
Apr 23 23:16:53.180229 systemd-logind[1483]: Session 9 logged out. Waiting for processes to exit.
Apr 23 23:16:53.182240 systemd-logind[1483]: Removed session 9.
Apr 23 23:16:58.196078 systemd[1]: Started sshd@9-138.199.150.149:22-50.85.169.122:54980.service - OpenSSH per-connection server daemon (50.85.169.122:54980).
Apr 23 23:16:58.320107 sshd[4122]: Accepted publickey for core from 50.85.169.122 port 54980 ssh2: RSA SHA256:Tz0dqMPsdf8xUb4jUaTJqqr7RT+Ihh1eVJlUIJQ/qIM
Apr 23 23:16:58.323303 sshd-session[4122]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 23 23:16:58.328312 systemd-logind[1483]: New session 10 of user core.
Apr 23 23:16:58.334922 systemd[1]: Started session-10.scope - Session 10 of User core.
Apr 23 23:16:58.454769 sshd[4125]: Connection closed by 50.85.169.122 port 54980
Apr 23 23:16:58.455805 sshd-session[4122]: pam_unix(sshd:session): session closed for user core
Apr 23 23:16:58.463872 systemd[1]: sshd@9-138.199.150.149:22-50.85.169.122:54980.service: Deactivated successfully.
Apr 23 23:16:58.469595 systemd[1]: session-10.scope: Deactivated successfully.
Apr 23 23:16:58.472680 systemd-logind[1483]: Session 10 logged out. Waiting for processes to exit.
Apr 23 23:16:58.475598 systemd-logind[1483]: Removed session 10.
Apr 23 23:17:03.489378 systemd[1]: Started sshd@10-138.199.150.149:22-50.85.169.122:40600.service - OpenSSH per-connection server daemon (50.85.169.122:40600).
Apr 23 23:17:03.621455 sshd[4138]: Accepted publickey for core from 50.85.169.122 port 40600 ssh2: RSA SHA256:Tz0dqMPsdf8xUb4jUaTJqqr7RT+Ihh1eVJlUIJQ/qIM
Apr 23 23:17:03.625929 sshd-session[4138]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 23 23:17:03.633379 systemd-logind[1483]: New session 11 of user core.
Apr 23 23:17:03.639101 systemd[1]: Started session-11.scope - Session 11 of User core.
Apr 23 23:17:03.762658 sshd[4141]: Connection closed by 50.85.169.122 port 40600
Apr 23 23:17:03.764021 sshd-session[4138]: pam_unix(sshd:session): session closed for user core
Apr 23 23:17:03.769777 systemd-logind[1483]: Session 11 logged out. Waiting for processes to exit.
Apr 23 23:17:03.770891 systemd[1]: sshd@10-138.199.150.149:22-50.85.169.122:40600.service: Deactivated successfully.
Apr 23 23:17:03.774598 systemd[1]: session-11.scope: Deactivated successfully.
Apr 23 23:17:03.776618 systemd-logind[1483]: Removed session 11.
Apr 23 23:17:03.790565 systemd[1]: Started sshd@11-138.199.150.149:22-50.85.169.122:40606.service - OpenSSH per-connection server daemon (50.85.169.122:40606).
Apr 23 23:17:03.927206 sshd[4154]: Accepted publickey for core from 50.85.169.122 port 40606 ssh2: RSA SHA256:Tz0dqMPsdf8xUb4jUaTJqqr7RT+Ihh1eVJlUIJQ/qIM
Apr 23 23:17:03.929688 sshd-session[4154]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 23 23:17:03.937783 systemd-logind[1483]: New session 12 of user core.
Apr 23 23:17:03.942998 systemd[1]: Started session-12.scope - Session 12 of User core.
Apr 23 23:17:04.109917 sshd[4157]: Connection closed by 50.85.169.122 port 40606
Apr 23 23:17:04.111117 sshd-session[4154]: pam_unix(sshd:session): session closed for user core
Apr 23 23:17:04.119531 systemd-logind[1483]: Session 12 logged out. Waiting for processes to exit.
Apr 23 23:17:04.120534 systemd[1]: sshd@11-138.199.150.149:22-50.85.169.122:40606.service: Deactivated successfully.
Apr 23 23:17:04.125955 systemd[1]: session-12.scope: Deactivated successfully.
Apr 23 23:17:04.142927 systemd[1]: Started sshd@12-138.199.150.149:22-50.85.169.122:40610.service - OpenSSH per-connection server daemon (50.85.169.122:40610).
Apr 23 23:17:04.143208 systemd-logind[1483]: Removed session 12.
Apr 23 23:17:04.273515 sshd[4167]: Accepted publickey for core from 50.85.169.122 port 40610 ssh2: RSA SHA256:Tz0dqMPsdf8xUb4jUaTJqqr7RT+Ihh1eVJlUIJQ/qIM
Apr 23 23:17:04.275949 sshd-session[4167]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 23 23:17:04.281279 systemd-logind[1483]: New session 13 of user core.
Apr 23 23:17:04.285907 systemd[1]: Started session-13.scope - Session 13 of User core.
Apr 23 23:17:04.408405 sshd[4170]: Connection closed by 50.85.169.122 port 40610
Apr 23 23:17:04.411044 sshd-session[4167]: pam_unix(sshd:session): session closed for user core
Apr 23 23:17:04.416995 systemd[1]: sshd@12-138.199.150.149:22-50.85.169.122:40610.service: Deactivated successfully.
Apr 23 23:17:04.421383 systemd[1]: session-13.scope: Deactivated successfully.
Apr 23 23:17:04.423254 systemd-logind[1483]: Session 13 logged out. Waiting for processes to exit.
Apr 23 23:17:04.424643 systemd-logind[1483]: Removed session 13.
Apr 23 23:17:09.440838 systemd[1]: Started sshd@13-138.199.150.149:22-50.85.169.122:41290.service - OpenSSH per-connection server daemon (50.85.169.122:41290).
Apr 23 23:17:09.596048 sshd[4182]: Accepted publickey for core from 50.85.169.122 port 41290 ssh2: RSA SHA256:Tz0dqMPsdf8xUb4jUaTJqqr7RT+Ihh1eVJlUIJQ/qIM
Apr 23 23:17:09.598278 sshd-session[4182]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 23 23:17:09.603107 systemd-logind[1483]: New session 14 of user core.
Apr 23 23:17:09.612123 systemd[1]: Started session-14.scope - Session 14 of User core.
Apr 23 23:17:09.736833 sshd[4185]: Connection closed by 50.85.169.122 port 41290
Apr 23 23:17:09.735812 sshd-session[4182]: pam_unix(sshd:session): session closed for user core
Apr 23 23:17:09.741883 systemd[1]: sshd@13-138.199.150.149:22-50.85.169.122:41290.service: Deactivated successfully.
Apr 23 23:17:09.744378 systemd[1]: session-14.scope: Deactivated successfully.
Apr 23 23:17:09.745440 systemd-logind[1483]: Session 14 logged out. Waiting for processes to exit.
Apr 23 23:17:09.747135 systemd-logind[1483]: Removed session 14.
Apr 23 23:17:14.764010 systemd[1]: Started sshd@14-138.199.150.149:22-50.85.169.122:41292.service - OpenSSH per-connection server daemon (50.85.169.122:41292).
Apr 23 23:17:14.896446 sshd[4199]: Accepted publickey for core from 50.85.169.122 port 41292 ssh2: RSA SHA256:Tz0dqMPsdf8xUb4jUaTJqqr7RT+Ihh1eVJlUIJQ/qIM
Apr 23 23:17:14.899859 sshd-session[4199]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 23 23:17:14.907755 systemd-logind[1483]: New session 15 of user core.
Apr 23 23:17:14.912981 systemd[1]: Started session-15.scope - Session 15 of User core.
Apr 23 23:17:15.028725 sshd[4202]: Connection closed by 50.85.169.122 port 41292
Apr 23 23:17:15.029985 sshd-session[4199]: pam_unix(sshd:session): session closed for user core
Apr 23 23:17:15.037058 systemd-logind[1483]: Session 15 logged out. Waiting for processes to exit.
Apr 23 23:17:15.037821 systemd[1]: sshd@14-138.199.150.149:22-50.85.169.122:41292.service: Deactivated successfully.
Apr 23 23:17:15.042065 systemd[1]: session-15.scope: Deactivated successfully.
Apr 23 23:17:15.055932 systemd[1]: Started sshd@15-138.199.150.149:22-50.85.169.122:41298.service - OpenSSH per-connection server daemon (50.85.169.122:41298).
Apr 23 23:17:15.056260 systemd-logind[1483]: Removed session 15.
Apr 23 23:17:15.185270 sshd[4214]: Accepted publickey for core from 50.85.169.122 port 41298 ssh2: RSA SHA256:Tz0dqMPsdf8xUb4jUaTJqqr7RT+Ihh1eVJlUIJQ/qIM
Apr 23 23:17:15.188115 sshd-session[4214]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 23 23:17:15.193975 systemd-logind[1483]: New session 16 of user core.
Apr 23 23:17:15.201109 systemd[1]: Started session-16.scope - Session 16 of User core.
Apr 23 23:17:15.407794 sshd[4217]: Connection closed by 50.85.169.122 port 41298
Apr 23 23:17:15.408220 sshd-session[4214]: pam_unix(sshd:session): session closed for user core
Apr 23 23:17:15.413340 systemd[1]: sshd@15-138.199.150.149:22-50.85.169.122:41298.service: Deactivated successfully.
Apr 23 23:17:15.417994 systemd[1]: session-16.scope: Deactivated successfully.
Apr 23 23:17:15.420970 systemd-logind[1483]: Session 16 logged out. Waiting for processes to exit.
Apr 23 23:17:15.440298 systemd[1]: Started sshd@16-138.199.150.149:22-50.85.169.122:41308.service - OpenSSH per-connection server daemon (50.85.169.122:41308).
Apr 23 23:17:15.441632 systemd-logind[1483]: Removed session 16.
Apr 23 23:17:15.571286 sshd[4226]: Accepted publickey for core from 50.85.169.122 port 41308 ssh2: RSA SHA256:Tz0dqMPsdf8xUb4jUaTJqqr7RT+Ihh1eVJlUIJQ/qIM
Apr 23 23:17:15.573225 sshd-session[4226]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 23 23:17:15.577764 systemd-logind[1483]: New session 17 of user core.
Apr 23 23:17:15.585064 systemd[1]: Started session-17.scope - Session 17 of User core.
Apr 23 23:17:16.157036 sshd[4229]: Connection closed by 50.85.169.122 port 41308
Apr 23 23:17:16.161001 sshd-session[4226]: pam_unix(sshd:session): session closed for user core
Apr 23 23:17:16.166406 systemd[1]: session-17.scope: Deactivated successfully.
Apr 23 23:17:16.169108 systemd[1]: sshd@16-138.199.150.149:22-50.85.169.122:41308.service: Deactivated successfully.
Apr 23 23:17:16.174168 systemd-logind[1483]: Session 17 logged out. Waiting for processes to exit.
Apr 23 23:17:16.186976 systemd[1]: Started sshd@17-138.199.150.149:22-50.85.169.122:41314.service - OpenSSH per-connection server daemon (50.85.169.122:41314).
Apr 23 23:17:16.188215 systemd-logind[1483]: Removed session 17.
Apr 23 23:17:16.311274 sshd[4244]: Accepted publickey for core from 50.85.169.122 port 41314 ssh2: RSA SHA256:Tz0dqMPsdf8xUb4jUaTJqqr7RT+Ihh1eVJlUIJQ/qIM
Apr 23 23:17:16.314514 sshd-session[4244]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 23 23:17:16.322644 systemd-logind[1483]: New session 18 of user core.
Apr 23 23:17:16.325884 systemd[1]: Started session-18.scope - Session 18 of User core.
Apr 23 23:17:16.577449 sshd[4247]: Connection closed by 50.85.169.122 port 41314
Apr 23 23:17:16.577906 sshd-session[4244]: pam_unix(sshd:session): session closed for user core
Apr 23 23:17:16.586133 systemd[1]: sshd@17-138.199.150.149:22-50.85.169.122:41314.service: Deactivated successfully.
Apr 23 23:17:16.586173 systemd-logind[1483]: Session 18 logged out. Waiting for processes to exit.
Apr 23 23:17:16.590767 systemd[1]: session-18.scope: Deactivated successfully.
Apr 23 23:17:16.593986 systemd-logind[1483]: Removed session 18.
Apr 23 23:17:16.605506 systemd[1]: Started sshd@18-138.199.150.149:22-50.85.169.122:41326.service - OpenSSH per-connection server daemon (50.85.169.122:41326).
Apr 23 23:17:16.737311 sshd[4257]: Accepted publickey for core from 50.85.169.122 port 41326 ssh2: RSA SHA256:Tz0dqMPsdf8xUb4jUaTJqqr7RT+Ihh1eVJlUIJQ/qIM
Apr 23 23:17:16.740027 sshd-session[4257]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 23 23:17:16.745421 systemd-logind[1483]: New session 19 of user core.
Apr 23 23:17:16.751029 systemd[1]: Started session-19.scope - Session 19 of User core.
Apr 23 23:17:16.865867 sshd[4260]: Connection closed by 50.85.169.122 port 41326
Apr 23 23:17:16.867885 sshd-session[4257]: pam_unix(sshd:session): session closed for user core
Apr 23 23:17:16.873169 systemd[1]: sshd@18-138.199.150.149:22-50.85.169.122:41326.service: Deactivated successfully.
Apr 23 23:17:16.876323 systemd[1]: session-19.scope: Deactivated successfully.
Apr 23 23:17:16.878007 systemd-logind[1483]: Session 19 logged out. Waiting for processes to exit.
Apr 23 23:17:16.881227 systemd-logind[1483]: Removed session 19.
Apr 23 23:17:21.893479 systemd[1]: Started sshd@19-138.199.150.149:22-50.85.169.122:35614.service - OpenSSH per-connection server daemon (50.85.169.122:35614).
Apr 23 23:17:22.026379 sshd[4276]: Accepted publickey for core from 50.85.169.122 port 35614 ssh2: RSA SHA256:Tz0dqMPsdf8xUb4jUaTJqqr7RT+Ihh1eVJlUIJQ/qIM
Apr 23 23:17:22.028486 sshd-session[4276]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 23 23:17:22.034152 systemd-logind[1483]: New session 20 of user core.
Apr 23 23:17:22.047116 systemd[1]: Started session-20.scope - Session 20 of User core.
Apr 23 23:17:22.158908 sshd[4279]: Connection closed by 50.85.169.122 port 35614
Apr 23 23:17:22.159958 sshd-session[4276]: pam_unix(sshd:session): session closed for user core
Apr 23 23:17:22.165724 systemd[1]: sshd@19-138.199.150.149:22-50.85.169.122:35614.service: Deactivated successfully.
Apr 23 23:17:22.168587 systemd[1]: session-20.scope: Deactivated successfully.
Apr 23 23:17:22.169897 systemd-logind[1483]: Session 20 logged out. Waiting for processes to exit.
Apr 23 23:17:22.171789 systemd-logind[1483]: Removed session 20.
Apr 23 23:17:27.187956 systemd[1]: Started sshd@20-138.199.150.149:22-50.85.169.122:35630.service - OpenSSH per-connection server daemon (50.85.169.122:35630).
Apr 23 23:17:27.307176 sshd[4291]: Accepted publickey for core from 50.85.169.122 port 35630 ssh2: RSA SHA256:Tz0dqMPsdf8xUb4jUaTJqqr7RT+Ihh1eVJlUIJQ/qIM
Apr 23 23:17:27.309155 sshd-session[4291]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 23 23:17:27.315919 systemd-logind[1483]: New session 21 of user core.
Apr 23 23:17:27.321938 systemd[1]: Started session-21.scope - Session 21 of User core.
Apr 23 23:17:27.434152 sshd[4294]: Connection closed by 50.85.169.122 port 35630
Apr 23 23:17:27.434979 sshd-session[4291]: pam_unix(sshd:session): session closed for user core
Apr 23 23:17:27.441640 systemd[1]: sshd@20-138.199.150.149:22-50.85.169.122:35630.service: Deactivated successfully.
Apr 23 23:17:27.442176 systemd-logind[1483]: Session 21 logged out. Waiting for processes to exit.
Apr 23 23:17:27.444222 systemd[1]: session-21.scope: Deactivated successfully.
Apr 23 23:17:27.446418 systemd-logind[1483]: Removed session 21.
Apr 23 23:17:32.463756 systemd[1]: Started sshd@21-138.199.150.149:22-50.85.169.122:58822.service - OpenSSH per-connection server daemon (50.85.169.122:58822).
Apr 23 23:17:32.598474 sshd[4305]: Accepted publickey for core from 50.85.169.122 port 58822 ssh2: RSA SHA256:Tz0dqMPsdf8xUb4jUaTJqqr7RT+Ihh1eVJlUIJQ/qIM
Apr 23 23:17:32.600539 sshd-session[4305]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 23 23:17:32.606541 systemd-logind[1483]: New session 22 of user core.
Apr 23 23:17:32.617106 systemd[1]: Started session-22.scope - Session 22 of User core.
Apr 23 23:17:32.735299 sshd[4308]: Connection closed by 50.85.169.122 port 58822
Apr 23 23:17:32.736119 sshd-session[4305]: pam_unix(sshd:session): session closed for user core
Apr 23 23:17:32.742599 systemd[1]: sshd@21-138.199.150.149:22-50.85.169.122:58822.service: Deactivated successfully.
Apr 23 23:17:32.746362 systemd[1]: session-22.scope: Deactivated successfully.
Apr 23 23:17:32.750171 systemd-logind[1483]: Session 22 logged out. Waiting for processes to exit.
Apr 23 23:17:32.751607 systemd-logind[1483]: Removed session 22.
Apr 23 23:17:32.764176 systemd[1]: Started sshd@22-138.199.150.149:22-50.85.169.122:58834.service - OpenSSH per-connection server daemon (50.85.169.122:58834).
Apr 23 23:17:32.901749 sshd[4320]: Accepted publickey for core from 50.85.169.122 port 58834 ssh2: RSA SHA256:Tz0dqMPsdf8xUb4jUaTJqqr7RT+Ihh1eVJlUIJQ/qIM
Apr 23 23:17:32.904162 sshd-session[4320]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 23 23:17:32.909771 systemd-logind[1483]: New session 23 of user core.
Apr 23 23:17:32.921091 systemd[1]: Started session-23.scope - Session 23 of User core.
Apr 23 23:17:34.682795 containerd[1511]: time="2026-04-23T23:17:34.682740144Z" level=info msg="StopContainer for \"2e78a1f9ff90ad8da2af3d5fd9587f96b1a5ca0d77333a6760eb54aae9e5b009\" with timeout 30 (s)"
Apr 23 23:17:34.685068 containerd[1511]: time="2026-04-23T23:17:34.685033003Z" level=info msg="Stop container \"2e78a1f9ff90ad8da2af3d5fd9587f96b1a5ca0d77333a6760eb54aae9e5b009\" with signal terminated"
Apr 23 23:17:34.713425 containerd[1511]: time="2026-04-23T23:17:34.713366475Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 23 23:17:34.721930 systemd[1]: cri-containerd-2e78a1f9ff90ad8da2af3d5fd9587f96b1a5ca0d77333a6760eb54aae9e5b009.scope: Deactivated successfully.
Apr 23 23:17:34.726404 containerd[1511]: time="2026-04-23T23:17:34.726368821Z" level=info msg="StopContainer for \"a3935000aed77b61c3f9f2c586fc1e9d8b1c36126dc0b4de22cc09d034aabb2c\" with timeout 2 (s)"
Apr 23 23:17:34.726684 containerd[1511]: time="2026-04-23T23:17:34.726508862Z" level=info msg="received container exit event container_id:\"2e78a1f9ff90ad8da2af3d5fd9587f96b1a5ca0d77333a6760eb54aae9e5b009\" id:\"2e78a1f9ff90ad8da2af3d5fd9587f96b1a5ca0d77333a6760eb54aae9e5b009\" pid:3287 exited_at:{seconds:1776986254 nanos:725508014}"
Apr 23 23:17:34.727250 containerd[1511]: time="2026-04-23T23:17:34.727153308Z" level=info msg="Stop container \"a3935000aed77b61c3f9f2c586fc1e9d8b1c36126dc0b4de22cc09d034aabb2c\" with signal terminated"
Apr 23 23:17:34.741462 systemd-networkd[1430]: lxc_health: Link DOWN
Apr 23 23:17:34.741470 systemd-networkd[1430]: lxc_health: Lost carrier
Apr 23 23:17:34.764270 systemd[1]: cri-containerd-a3935000aed77b61c3f9f2c586fc1e9d8b1c36126dc0b4de22cc09d034aabb2c.scope: Deactivated successfully.
Apr 23 23:17:34.765358 systemd[1]: cri-containerd-a3935000aed77b61c3f9f2c586fc1e9d8b1c36126dc0b4de22cc09d034aabb2c.scope: Consumed 7.073s CPU time, 127M memory peak, 128K read from disk, 12.9M written to disk.
Apr 23 23:17:34.771059 containerd[1511]: time="2026-04-23T23:17:34.770526943Z" level=info msg="received container exit event container_id:\"a3935000aed77b61c3f9f2c586fc1e9d8b1c36126dc0b4de22cc09d034aabb2c\" id:\"a3935000aed77b61c3f9f2c586fc1e9d8b1c36126dc0b4de22cc09d034aabb2c\" pid:3397 exited_at:{seconds:1776986254 nanos:769960018}"
Apr 23 23:17:34.787940 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2e78a1f9ff90ad8da2af3d5fd9587f96b1a5ca0d77333a6760eb54aae9e5b009-rootfs.mount: Deactivated successfully.
Apr 23 23:17:34.804613 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a3935000aed77b61c3f9f2c586fc1e9d8b1c36126dc0b4de22cc09d034aabb2c-rootfs.mount: Deactivated successfully.
Apr 23 23:17:34.806437 containerd[1511]: time="2026-04-23T23:17:34.806304476Z" level=info msg="StopContainer for \"2e78a1f9ff90ad8da2af3d5fd9587f96b1a5ca0d77333a6760eb54aae9e5b009\" returns successfully"
Apr 23 23:17:34.807740 containerd[1511]: time="2026-04-23T23:17:34.807267804Z" level=info msg="StopPodSandbox for \"23890f37c491a649611c3131fd0b62333d0f1b4e6a6e2c3876b6bcc088fc4c74\""
Apr 23 23:17:34.807740 containerd[1511]: time="2026-04-23T23:17:34.807364044Z" level=info msg="Container to stop \"2e78a1f9ff90ad8da2af3d5fd9587f96b1a5ca0d77333a6760eb54aae9e5b009\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 23 23:17:34.817417 containerd[1511]: time="2026-04-23T23:17:34.817352006Z" level=info msg="StopContainer for \"a3935000aed77b61c3f9f2c586fc1e9d8b1c36126dc0b4de22cc09d034aabb2c\" returns successfully"
Apr 23 23:17:34.819273 containerd[1511]: time="2026-04-23T23:17:34.818645297Z" level=info msg="StopPodSandbox for \"4d87aba632fe564e490fa6e648b734efe7e65263436d6e2bbacd0a63393a80d1\""
Apr 23 23:17:34.819273 containerd[1511]: time="2026-04-23T23:17:34.818730178Z" level=info msg="Container to stop \"8eff615b5b15cb6ac861b7fbc612308604acdb7172371af6ec54419c8cbe7a25\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 23 23:17:34.819273 containerd[1511]: time="2026-04-23T23:17:34.818744818Z" level=info msg="Container to stop \"00bc0fab3f6b3a7a94421eccfb745f7a5071b9d98f408a31628805d4a913d097\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 23 23:17:34.819273 containerd[1511]: time="2026-04-23T23:17:34.818755738Z" level=info msg="Container to stop \"d0501e0256930d6030924a03380fcc919a0d27df5ab59d4d8e43f12c92977571\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 23 23:17:34.819273 containerd[1511]: time="2026-04-23T23:17:34.818765898Z" level=info msg="Container to stop \"b94902ca617b14d7cdb0a9667970b21f001deb001ea82168fa3a92fd69ca94c2\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 23 23:17:34.819273 containerd[1511]: time="2026-04-23T23:17:34.818773738Z" level=info msg="Container to stop \"a3935000aed77b61c3f9f2c586fc1e9d8b1c36126dc0b4de22cc09d034aabb2c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 23 23:17:34.827109 systemd[1]: cri-containerd-23890f37c491a649611c3131fd0b62333d0f1b4e6a6e2c3876b6bcc088fc4c74.scope: Deactivated successfully.
Apr 23 23:17:34.832057 systemd[1]: cri-containerd-4d87aba632fe564e490fa6e648b734efe7e65263436d6e2bbacd0a63393a80d1.scope: Deactivated successfully.
Apr 23 23:17:34.833620 containerd[1511]: time="2026-04-23T23:17:34.833537979Z" level=info msg="received sandbox exit event container_id:\"23890f37c491a649611c3131fd0b62333d0f1b4e6a6e2c3876b6bcc088fc4c74\" id:\"23890f37c491a649611c3131fd0b62333d0f1b4e6a6e2c3876b6bcc088fc4c74\" exit_status:137 exited_at:{seconds:1776986254 nanos:831730084}" monitor_name=podsandbox
Apr 23 23:17:34.837035 containerd[1511]: time="2026-04-23T23:17:34.836937087Z" level=info msg="received sandbox exit event container_id:\"4d87aba632fe564e490fa6e648b734efe7e65263436d6e2bbacd0a63393a80d1\" id:\"4d87aba632fe564e490fa6e648b734efe7e65263436d6e2bbacd0a63393a80d1\" exit_status:137 exited_at:{seconds:1776986254 nanos:836103000}" monitor_name=podsandbox
Apr 23 23:17:34.861412 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4d87aba632fe564e490fa6e648b734efe7e65263436d6e2bbacd0a63393a80d1-rootfs.mount: Deactivated successfully.
Apr 23 23:17:34.864463 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-23890f37c491a649611c3131fd0b62333d0f1b4e6a6e2c3876b6bcc088fc4c74-rootfs.mount: Deactivated successfully.
Apr 23 23:17:34.868976 containerd[1511]: time="2026-04-23T23:17:34.868748787Z" level=info msg="shim disconnected" id=23890f37c491a649611c3131fd0b62333d0f1b4e6a6e2c3876b6bcc088fc4c74 namespace=k8s.io
Apr 23 23:17:34.868976 containerd[1511]: time="2026-04-23T23:17:34.868784307Z" level=warning msg="cleaning up after shim disconnected" id=23890f37c491a649611c3131fd0b62333d0f1b4e6a6e2c3876b6bcc088fc4c74 namespace=k8s.io
Apr 23 23:17:34.868976 containerd[1511]: time="2026-04-23T23:17:34.868817788Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 23 23:17:34.869698 containerd[1511]: time="2026-04-23T23:17:34.869611434Z" level=info msg="shim disconnected" id=4d87aba632fe564e490fa6e648b734efe7e65263436d6e2bbacd0a63393a80d1 namespace=k8s.io
Apr 23 23:17:34.869698 containerd[1511]: time="2026-04-23T23:17:34.869648555Z" level=warning msg="cleaning up after shim disconnected" id=4d87aba632fe564e490fa6e648b734efe7e65263436d6e2bbacd0a63393a80d1 namespace=k8s.io
Apr 23 23:17:34.869698 containerd[1511]: time="2026-04-23T23:17:34.869674955Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 23 23:17:34.888769 containerd[1511]: time="2026-04-23T23:17:34.888590110Z" level=info msg="received sandbox container exit event sandbox_id:\"4d87aba632fe564e490fa6e648b734efe7e65263436d6e2bbacd0a63393a80d1\" exit_status:137 exited_at:{seconds:1776986254 nanos:836103000}" monitor_name=criService
Apr 23 23:17:34.889335 containerd[1511]: time="2026-04-23T23:17:34.889272195Z" level=info msg="received sandbox container exit event sandbox_id:\"23890f37c491a649611c3131fd0b62333d0f1b4e6a6e2c3876b6bcc088fc4c74\" exit_status:137 exited_at:{seconds:1776986254 nanos:831730084}" monitor_name=criService
Apr 23 23:17:34.892031 containerd[1511]: time="2026-04-23T23:17:34.891824976Z" level=info msg="TearDown network for sandbox \"4d87aba632fe564e490fa6e648b734efe7e65263436d6e2bbacd0a63393a80d1\" successfully"
Apr 23 23:17:34.892031 containerd[1511]: time="2026-04-23T23:17:34.891855976Z" level=info msg="StopPodSandbox for \"4d87aba632fe564e490fa6e648b734efe7e65263436d6e2bbacd0a63393a80d1\" returns successfully"
Apr 23 23:17:34.892031 containerd[1511]: time="2026-04-23T23:17:34.891969217Z" level=info msg="TearDown network for sandbox \"23890f37c491a649611c3131fd0b62333d0f1b4e6a6e2c3876b6bcc088fc4c74\" successfully"
Apr 23 23:17:34.892031 containerd[1511]: time="2026-04-23T23:17:34.891981297Z" level=info msg="StopPodSandbox for \"23890f37c491a649611c3131fd0b62333d0f1b4e6a6e2c3876b6bcc088fc4c74\" returns successfully"
Apr 23 23:17:34.893475 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-23890f37c491a649611c3131fd0b62333d0f1b4e6a6e2c3876b6bcc088fc4c74-shm.mount: Deactivated successfully.
Apr 23 23:17:34.999112 kubelet[2756]: I0423 23:17:34.998939 2756 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/27160d2e-7fb2-49bf-9ea8-dd843baea345-cni-path\") pod \"27160d2e-7fb2-49bf-9ea8-dd843baea345\" (UID: \"27160d2e-7fb2-49bf-9ea8-dd843baea345\") "
Apr 23 23:17:34.999112 kubelet[2756]: I0423 23:17:34.999012 2756 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/27160d2e-7fb2-49bf-9ea8-dd843baea345-lib-modules\") pod \"27160d2e-7fb2-49bf-9ea8-dd843baea345\" (UID: \"27160d2e-7fb2-49bf-9ea8-dd843baea345\") "
Apr 23 23:17:34.999112 kubelet[2756]: I0423 23:17:34.999049 2756 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8zf4r\" (UniqueName: \"kubernetes.io/projected/27160d2e-7fb2-49bf-9ea8-dd843baea345-kube-api-access-8zf4r\") pod \"27160d2e-7fb2-49bf-9ea8-dd843baea345\" (UID: \"27160d2e-7fb2-49bf-9ea8-dd843baea345\") "
Apr 23 23:17:34.999112 kubelet[2756]: I0423 23:17:34.999077 2756 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/27160d2e-7fb2-49bf-9ea8-dd843baea345-hostproc\") pod \"27160d2e-7fb2-49bf-9ea8-dd843baea345\" (UID: \"27160d2e-7fb2-49bf-9ea8-dd843baea345\") "
Apr 23 23:17:35.000750 kubelet[2756]: I0423 23:17:35.000683 2756 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/27160d2e-7fb2-49bf-9ea8-dd843baea345-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "27160d2e-7fb2-49bf-9ea8-dd843baea345" (UID: "27160d2e-7fb2-49bf-9ea8-dd843baea345"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 23 23:17:35.000849 kubelet[2756]: I0423 23:17:35.000787 2756 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/27160d2e-7fb2-49bf-9ea8-dd843baea345-cni-path" (OuterVolumeSpecName: "cni-path") pod "27160d2e-7fb2-49bf-9ea8-dd843baea345" (UID: "27160d2e-7fb2-49bf-9ea8-dd843baea345"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 23 23:17:35.001725 kubelet[2756]: I0423 23:17:35.001599 2756 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/27160d2e-7fb2-49bf-9ea8-dd843baea345-hostproc" (OuterVolumeSpecName: "hostproc") pod "27160d2e-7fb2-49bf-9ea8-dd843baea345" (UID: "27160d2e-7fb2-49bf-9ea8-dd843baea345"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 23 23:17:35.001866 kubelet[2756]: I0423 23:17:35.001822 2756 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/27160d2e-7fb2-49bf-9ea8-dd843baea345-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "27160d2e-7fb2-49bf-9ea8-dd843baea345" (UID: "27160d2e-7fb2-49bf-9ea8-dd843baea345"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 23 23:17:35.001866 kubelet[2756]: I0423 23:17:35.001783 2756 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/27160d2e-7fb2-49bf-9ea8-dd843baea345-bpf-maps\") pod \"27160d2e-7fb2-49bf-9ea8-dd843baea345\" (UID: \"27160d2e-7fb2-49bf-9ea8-dd843baea345\") "
Apr 23 23:17:35.001944 kubelet[2756]: I0423 23:17:35.001892 2756 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/27160d2e-7fb2-49bf-9ea8-dd843baea345-host-proc-sys-net\") pod \"27160d2e-7fb2-49bf-9ea8-dd843baea345\" (UID: \"27160d2e-7fb2-49bf-9ea8-dd843baea345\") "
Apr 23 23:17:35.001944 kubelet[2756]: I0423 23:17:35.001921 2756 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/27160d2e-7fb2-49bf-9ea8-dd843baea345-hubble-tls\") pod \"27160d2e-7fb2-49bf-9ea8-dd843baea345\" (UID: \"27160d2e-7fb2-49bf-9ea8-dd843baea345\") "
Apr 23 23:17:35.002012 kubelet[2756]: I0423 23:17:35.001948 2756 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/27160d2e-7fb2-49bf-9ea8-dd843baea345-cilium-cgroup\") pod \"27160d2e-7fb2-49bf-9ea8-dd843baea345\" (UID: \"27160d2e-7fb2-49bf-9ea8-dd843baea345\") "
Apr 23 23:17:35.002012 kubelet[2756]: I0423 23:17:35.001977 2756 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/27160d2e-7fb2-49bf-9ea8-dd843baea345-cilium-config-path\") pod \"27160d2e-7fb2-49bf-9ea8-dd843baea345\" (UID: \"27160d2e-7fb2-49bf-9ea8-dd843baea345\") "
Apr 23 23:17:35.002012 kubelet[2756]: I0423 23:17:35.002007 2756 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/27160d2e-7fb2-49bf-9ea8-dd843baea345-clustermesh-secrets\") pod \"27160d2e-7fb2-49bf-9ea8-dd843baea345\" (UID: \"27160d2e-7fb2-49bf-9ea8-dd843baea345\") "
Apr 23 23:17:35.002738 kubelet[2756]: I0423 23:17:35.002482 2756 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/27160d2e-7fb2-49bf-9ea8-dd843baea345-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "27160d2e-7fb2-49bf-9ea8-dd843baea345" (UID: "27160d2e-7fb2-49bf-9ea8-dd843baea345"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 23 23:17:35.002738 kubelet[2756]: I0423 23:17:35.002514 2756 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/27160d2e-7fb2-49bf-9ea8-dd843baea345-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "27160d2e-7fb2-49bf-9ea8-dd843baea345" (UID: "27160d2e-7fb2-49bf-9ea8-dd843baea345"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 23 23:17:35.005281 kubelet[2756]: I0423 23:17:35.005256 2756 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/27160d2e-7fb2-49bf-9ea8-dd843baea345-host-proc-sys-kernel\") pod \"27160d2e-7fb2-49bf-9ea8-dd843baea345\" (UID: \"27160d2e-7fb2-49bf-9ea8-dd843baea345\") "
Apr 23 23:17:35.005430 kubelet[2756]: I0423 23:17:35.005417 2756 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/27160d2e-7fb2-49bf-9ea8-dd843baea345-xtables-lock\") pod \"27160d2e-7fb2-49bf-9ea8-dd843baea345\" (UID: \"27160d2e-7fb2-49bf-9ea8-dd843baea345\") "
Apr 23 23:17:35.005511 kubelet[2756]: I0423 23:17:35.005500 2756 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/27160d2e-7fb2-49bf-9ea8-dd843baea345-etc-cni-netd\") pod \"27160d2e-7fb2-49bf-9ea8-dd843baea345\" (UID: \"27160d2e-7fb2-49bf-9ea8-dd843baea345\") "
Apr 23 23:17:35.005588 kubelet[2756]: I0423 23:17:35.005577 2756 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e34d5373-becc-4121-908b-e6bf799173fd-cilium-config-path\") pod \"e34d5373-becc-4121-908b-e6bf799173fd\" (UID: \"e34d5373-becc-4121-908b-e6bf799173fd\") "
Apr 23 23:17:35.005847 kubelet[2756]: I0423 23:17:35.005819 2756 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/27160d2e-7fb2-49bf-9ea8-dd843baea345-cilium-run\") pod \"27160d2e-7fb2-49bf-9ea8-dd843baea345\" (UID: \"27160d2e-7fb2-49bf-9ea8-dd843baea345\") "
Apr 23 23:17:35.005942 kubelet[2756]: I0423 23:17:35.005929 2756 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j5xbl\" (UniqueName: \"kubernetes.io/projected/e34d5373-becc-4121-908b-e6bf799173fd-kube-api-access-j5xbl\") pod \"e34d5373-becc-4121-908b-e6bf799173fd\" (UID: \"e34d5373-becc-4121-908b-e6bf799173fd\") "
Apr 23 23:17:35.006054 kubelet[2756]: I0423 23:17:35.006043 2756 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/27160d2e-7fb2-49bf-9ea8-dd843baea345-bpf-maps\") on node \"ci-4459-2-4-n-08a122edc2\" DevicePath \"\""
Apr 23 23:17:35.006190 kubelet[2756]: I0423 23:17:35.006078 2756 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/27160d2e-7fb2-49bf-9ea8-dd843baea345-host-proc-sys-net\") on node \"ci-4459-2-4-n-08a122edc2\" DevicePath \"\""
Apr 23 23:17:35.006190 kubelet[2756]: I0423 23:17:35.006093 2756 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/27160d2e-7fb2-49bf-9ea8-dd843baea345-cilium-cgroup\") on node \"ci-4459-2-4-n-08a122edc2\" DevicePath \"\""
Apr 23 23:17:35.006190 kubelet[2756]: I0423 23:17:35.006103 2756 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/27160d2e-7fb2-49bf-9ea8-dd843baea345-cni-path\") on node \"ci-4459-2-4-n-08a122edc2\" DevicePath \"\""
Apr 23 23:17:35.006190 kubelet[2756]: I0423 23:17:35.006111 2756 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/27160d2e-7fb2-49bf-9ea8-dd843baea345-lib-modules\") on node \"ci-4459-2-4-n-08a122edc2\" DevicePath \"\""
Apr 23 23:17:35.006190 kubelet[2756]: I0423 23:17:35.006118 2756 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/27160d2e-7fb2-49bf-9ea8-dd843baea345-hostproc\") on node \"ci-4459-2-4-n-08a122edc2\" DevicePath \"\""
Apr 23 23:17:35.009086 kubelet[2756]: I0423 23:17:35.009086 2756 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume
"kubernetes.io/projected/27160d2e-7fb2-49bf-9ea8-dd843baea345-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "27160d2e-7fb2-49bf-9ea8-dd843baea345" (UID: "27160d2e-7fb2-49bf-9ea8-dd843baea345"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 23 23:17:35.009086 kubelet[2756]: I0423 23:17:35.008862 2756 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27160d2e-7fb2-49bf-9ea8-dd843baea345-kube-api-access-8zf4r" (OuterVolumeSpecName: "kube-api-access-8zf4r") pod "27160d2e-7fb2-49bf-9ea8-dd843baea345" (UID: "27160d2e-7fb2-49bf-9ea8-dd843baea345"). InnerVolumeSpecName "kube-api-access-8zf4r". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 23 23:17:35.009086 kubelet[2756]: I0423 23:17:35.008867 2756 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27160d2e-7fb2-49bf-9ea8-dd843baea345-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "27160d2e-7fb2-49bf-9ea8-dd843baea345" (UID: "27160d2e-7fb2-49bf-9ea8-dd843baea345"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 23 23:17:35.009086 kubelet[2756]: I0423 23:17:35.008888 2756 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/27160d2e-7fb2-49bf-9ea8-dd843baea345-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "27160d2e-7fb2-49bf-9ea8-dd843baea345" (UID: "27160d2e-7fb2-49bf-9ea8-dd843baea345"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 23 23:17:35.009086 kubelet[2756]: I0423 23:17:35.008917 2756 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/27160d2e-7fb2-49bf-9ea8-dd843baea345-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "27160d2e-7fb2-49bf-9ea8-dd843baea345" (UID: "27160d2e-7fb2-49bf-9ea8-dd843baea345"). 
InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 23 23:17:35.009857 kubelet[2756]: I0423 23:17:35.009835 2756 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/27160d2e-7fb2-49bf-9ea8-dd843baea345-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "27160d2e-7fb2-49bf-9ea8-dd843baea345" (UID: "27160d2e-7fb2-49bf-9ea8-dd843baea345"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 23 23:17:35.010004 kubelet[2756]: I0423 23:17:35.009972 2756 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/27160d2e-7fb2-49bf-9ea8-dd843baea345-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "27160d2e-7fb2-49bf-9ea8-dd843baea345" (UID: "27160d2e-7fb2-49bf-9ea8-dd843baea345"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 23 23:17:35.010110 kubelet[2756]: I0423 23:17:35.010091 2756 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e34d5373-becc-4121-908b-e6bf799173fd-kube-api-access-j5xbl" (OuterVolumeSpecName: "kube-api-access-j5xbl") pod "e34d5373-becc-4121-908b-e6bf799173fd" (UID: "e34d5373-becc-4121-908b-e6bf799173fd"). InnerVolumeSpecName "kube-api-access-j5xbl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 23 23:17:35.012692 kubelet[2756]: I0423 23:17:35.012654 2756 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e34d5373-becc-4121-908b-e6bf799173fd-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e34d5373-becc-4121-908b-e6bf799173fd" (UID: "e34d5373-becc-4121-908b-e6bf799173fd"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 23 23:17:35.013485 kubelet[2756]: I0423 23:17:35.013453 2756 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27160d2e-7fb2-49bf-9ea8-dd843baea345-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "27160d2e-7fb2-49bf-9ea8-dd843baea345" (UID: "27160d2e-7fb2-49bf-9ea8-dd843baea345"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 23 23:17:35.106487 kubelet[2756]: I0423 23:17:35.106427 2756 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8zf4r\" (UniqueName: \"kubernetes.io/projected/27160d2e-7fb2-49bf-9ea8-dd843baea345-kube-api-access-8zf4r\") on node \"ci-4459-2-4-n-08a122edc2\" DevicePath \"\"" Apr 23 23:17:35.106487 kubelet[2756]: I0423 23:17:35.106485 2756 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/27160d2e-7fb2-49bf-9ea8-dd843baea345-hubble-tls\") on node \"ci-4459-2-4-n-08a122edc2\" DevicePath \"\"" Apr 23 23:17:35.106487 kubelet[2756]: I0423 23:17:35.106504 2756 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/27160d2e-7fb2-49bf-9ea8-dd843baea345-cilium-config-path\") on node \"ci-4459-2-4-n-08a122edc2\" DevicePath \"\"" Apr 23 23:17:35.106847 kubelet[2756]: I0423 23:17:35.106521 2756 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/27160d2e-7fb2-49bf-9ea8-dd843baea345-clustermesh-secrets\") on node \"ci-4459-2-4-n-08a122edc2\" DevicePath \"\"" Apr 23 23:17:35.106847 kubelet[2756]: I0423 23:17:35.106539 2756 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/27160d2e-7fb2-49bf-9ea8-dd843baea345-host-proc-sys-kernel\") on node \"ci-4459-2-4-n-08a122edc2\" DevicePath \"\"" Apr 23 23:17:35.106847 
kubelet[2756]: I0423 23:17:35.106578 2756 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/27160d2e-7fb2-49bf-9ea8-dd843baea345-xtables-lock\") on node \"ci-4459-2-4-n-08a122edc2\" DevicePath \"\"" Apr 23 23:17:35.106847 kubelet[2756]: I0423 23:17:35.106596 2756 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/27160d2e-7fb2-49bf-9ea8-dd843baea345-etc-cni-netd\") on node \"ci-4459-2-4-n-08a122edc2\" DevicePath \"\"" Apr 23 23:17:35.106847 kubelet[2756]: I0423 23:17:35.106612 2756 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e34d5373-becc-4121-908b-e6bf799173fd-cilium-config-path\") on node \"ci-4459-2-4-n-08a122edc2\" DevicePath \"\"" Apr 23 23:17:35.106847 kubelet[2756]: I0423 23:17:35.106632 2756 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/27160d2e-7fb2-49bf-9ea8-dd843baea345-cilium-run\") on node \"ci-4459-2-4-n-08a122edc2\" DevicePath \"\"" Apr 23 23:17:35.106847 kubelet[2756]: I0423 23:17:35.106650 2756 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-j5xbl\" (UniqueName: \"kubernetes.io/projected/e34d5373-becc-4121-908b-e6bf799173fd-kube-api-access-j5xbl\") on node \"ci-4459-2-4-n-08a122edc2\" DevicePath \"\"" Apr 23 23:17:35.501853 kubelet[2756]: I0423 23:17:35.501819 2756 scope.go:117] "RemoveContainer" containerID="2e78a1f9ff90ad8da2af3d5fd9587f96b1a5ca0d77333a6760eb54aae9e5b009" Apr 23 23:17:35.508988 containerd[1511]: time="2026-04-23T23:17:35.508629385Z" level=info msg="RemoveContainer for \"2e78a1f9ff90ad8da2af3d5fd9587f96b1a5ca0d77333a6760eb54aae9e5b009\"" Apr 23 23:17:35.510773 systemd[1]: Removed slice kubepods-besteffort-pode34d5373_becc_4121_908b_e6bf799173fd.slice - libcontainer container kubepods-besteffort-pode34d5373_becc_4121_908b_e6bf799173fd.slice. 
Apr 23 23:17:35.520330 containerd[1511]: time="2026-04-23T23:17:35.520283120Z" level=info msg="RemoveContainer for \"2e78a1f9ff90ad8da2af3d5fd9587f96b1a5ca0d77333a6760eb54aae9e5b009\" returns successfully" Apr 23 23:17:35.520877 kubelet[2756]: I0423 23:17:35.520834 2756 scope.go:117] "RemoveContainer" containerID="2e78a1f9ff90ad8da2af3d5fd9587f96b1a5ca0d77333a6760eb54aae9e5b009" Apr 23 23:17:35.521455 containerd[1511]: time="2026-04-23T23:17:35.521397169Z" level=error msg="ContainerStatus for \"2e78a1f9ff90ad8da2af3d5fd9587f96b1a5ca0d77333a6760eb54aae9e5b009\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2e78a1f9ff90ad8da2af3d5fd9587f96b1a5ca0d77333a6760eb54aae9e5b009\": not found" Apr 23 23:17:35.522052 kubelet[2756]: E0423 23:17:35.521803 2756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2e78a1f9ff90ad8da2af3d5fd9587f96b1a5ca0d77333a6760eb54aae9e5b009\": not found" containerID="2e78a1f9ff90ad8da2af3d5fd9587f96b1a5ca0d77333a6760eb54aae9e5b009" Apr 23 23:17:35.522052 kubelet[2756]: I0423 23:17:35.521925 2756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2e78a1f9ff90ad8da2af3d5fd9587f96b1a5ca0d77333a6760eb54aae9e5b009"} err="failed to get container status \"2e78a1f9ff90ad8da2af3d5fd9587f96b1a5ca0d77333a6760eb54aae9e5b009\": rpc error: code = NotFound desc = an error occurred when try to find container \"2e78a1f9ff90ad8da2af3d5fd9587f96b1a5ca0d77333a6760eb54aae9e5b009\": not found" Apr 23 23:17:35.525272 kubelet[2756]: I0423 23:17:35.525247 2756 scope.go:117] "RemoveContainer" containerID="a3935000aed77b61c3f9f2c586fc1e9d8b1c36126dc0b4de22cc09d034aabb2c" Apr 23 23:17:35.530801 containerd[1511]: time="2026-04-23T23:17:35.530540084Z" level=info msg="RemoveContainer for \"a3935000aed77b61c3f9f2c586fc1e9d8b1c36126dc0b4de22cc09d034aabb2c\"" Apr 23 23:17:35.537432 
systemd[1]: Removed slice kubepods-burstable-pod27160d2e_7fb2_49bf_9ea8_dd843baea345.slice - libcontainer container kubepods-burstable-pod27160d2e_7fb2_49bf_9ea8_dd843baea345.slice. Apr 23 23:17:35.537649 systemd[1]: kubepods-burstable-pod27160d2e_7fb2_49bf_9ea8_dd843baea345.slice: Consumed 7.176s CPU time, 127.4M memory peak, 128K read from disk, 12.9M written to disk. Apr 23 23:17:35.540055 containerd[1511]: time="2026-04-23T23:17:35.540018562Z" level=info msg="RemoveContainer for \"a3935000aed77b61c3f9f2c586fc1e9d8b1c36126dc0b4de22cc09d034aabb2c\" returns successfully" Apr 23 23:17:35.540941 kubelet[2756]: I0423 23:17:35.540920 2756 scope.go:117] "RemoveContainer" containerID="00bc0fab3f6b3a7a94421eccfb745f7a5071b9d98f408a31628805d4a913d097" Apr 23 23:17:35.543602 containerd[1511]: time="2026-04-23T23:17:35.543397789Z" level=info msg="RemoveContainer for \"00bc0fab3f6b3a7a94421eccfb745f7a5071b9d98f408a31628805d4a913d097\"" Apr 23 23:17:35.549527 containerd[1511]: time="2026-04-23T23:17:35.549485199Z" level=info msg="RemoveContainer for \"00bc0fab3f6b3a7a94421eccfb745f7a5071b9d98f408a31628805d4a913d097\" returns successfully" Apr 23 23:17:35.549821 kubelet[2756]: I0423 23:17:35.549790 2756 scope.go:117] "RemoveContainer" containerID="b94902ca617b14d7cdb0a9667970b21f001deb001ea82168fa3a92fd69ca94c2" Apr 23 23:17:35.553973 containerd[1511]: time="2026-04-23T23:17:35.553587753Z" level=info msg="RemoveContainer for \"b94902ca617b14d7cdb0a9667970b21f001deb001ea82168fa3a92fd69ca94c2\"" Apr 23 23:17:35.560452 containerd[1511]: time="2026-04-23T23:17:35.560376608Z" level=info msg="RemoveContainer for \"b94902ca617b14d7cdb0a9667970b21f001deb001ea82168fa3a92fd69ca94c2\" returns successfully" Apr 23 23:17:35.560960 kubelet[2756]: I0423 23:17:35.560846 2756 scope.go:117] "RemoveContainer" containerID="8eff615b5b15cb6ac861b7fbc612308604acdb7172371af6ec54419c8cbe7a25" Apr 23 23:17:35.563999 containerd[1511]: time="2026-04-23T23:17:35.563969158Z" level=info msg="RemoveContainer 
for \"8eff615b5b15cb6ac861b7fbc612308604acdb7172371af6ec54419c8cbe7a25\"" Apr 23 23:17:35.568048 containerd[1511]: time="2026-04-23T23:17:35.567974150Z" level=info msg="RemoveContainer for \"8eff615b5b15cb6ac861b7fbc612308604acdb7172371af6ec54419c8cbe7a25\" returns successfully" Apr 23 23:17:35.568373 kubelet[2756]: I0423 23:17:35.568323 2756 scope.go:117] "RemoveContainer" containerID="d0501e0256930d6030924a03380fcc919a0d27df5ab59d4d8e43f12c92977571" Apr 23 23:17:35.570260 containerd[1511]: time="2026-04-23T23:17:35.570192929Z" level=info msg="RemoveContainer for \"d0501e0256930d6030924a03380fcc919a0d27df5ab59d4d8e43f12c92977571\"" Apr 23 23:17:35.574497 containerd[1511]: time="2026-04-23T23:17:35.573904279Z" level=info msg="RemoveContainer for \"d0501e0256930d6030924a03380fcc919a0d27df5ab59d4d8e43f12c92977571\" returns successfully" Apr 23 23:17:35.574580 kubelet[2756]: I0423 23:17:35.574138 2756 scope.go:117] "RemoveContainer" containerID="a3935000aed77b61c3f9f2c586fc1e9d8b1c36126dc0b4de22cc09d034aabb2c" Apr 23 23:17:35.574616 containerd[1511]: time="2026-04-23T23:17:35.574520724Z" level=error msg="ContainerStatus for \"a3935000aed77b61c3f9f2c586fc1e9d8b1c36126dc0b4de22cc09d034aabb2c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a3935000aed77b61c3f9f2c586fc1e9d8b1c36126dc0b4de22cc09d034aabb2c\": not found" Apr 23 23:17:35.574895 kubelet[2756]: E0423 23:17:35.574788 2756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a3935000aed77b61c3f9f2c586fc1e9d8b1c36126dc0b4de22cc09d034aabb2c\": not found" containerID="a3935000aed77b61c3f9f2c586fc1e9d8b1c36126dc0b4de22cc09d034aabb2c" Apr 23 23:17:35.574895 kubelet[2756]: I0423 23:17:35.574837 2756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a3935000aed77b61c3f9f2c586fc1e9d8b1c36126dc0b4de22cc09d034aabb2c"} err="failed to 
get container status \"a3935000aed77b61c3f9f2c586fc1e9d8b1c36126dc0b4de22cc09d034aabb2c\": rpc error: code = NotFound desc = an error occurred when try to find container \"a3935000aed77b61c3f9f2c586fc1e9d8b1c36126dc0b4de22cc09d034aabb2c\": not found" Apr 23 23:17:35.574895 kubelet[2756]: I0423 23:17:35.574856 2756 scope.go:117] "RemoveContainer" containerID="00bc0fab3f6b3a7a94421eccfb745f7a5071b9d98f408a31628805d4a913d097" Apr 23 23:17:35.575492 containerd[1511]: time="2026-04-23T23:17:35.575452172Z" level=error msg="ContainerStatus for \"00bc0fab3f6b3a7a94421eccfb745f7a5071b9d98f408a31628805d4a913d097\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"00bc0fab3f6b3a7a94421eccfb745f7a5071b9d98f408a31628805d4a913d097\": not found" Apr 23 23:17:35.575813 kubelet[2756]: E0423 23:17:35.575672 2756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"00bc0fab3f6b3a7a94421eccfb745f7a5071b9d98f408a31628805d4a913d097\": not found" containerID="00bc0fab3f6b3a7a94421eccfb745f7a5071b9d98f408a31628805d4a913d097" Apr 23 23:17:35.575813 kubelet[2756]: I0423 23:17:35.575697 2756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"00bc0fab3f6b3a7a94421eccfb745f7a5071b9d98f408a31628805d4a913d097"} err="failed to get container status \"00bc0fab3f6b3a7a94421eccfb745f7a5071b9d98f408a31628805d4a913d097\": rpc error: code = NotFound desc = an error occurred when try to find container \"00bc0fab3f6b3a7a94421eccfb745f7a5071b9d98f408a31628805d4a913d097\": not found" Apr 23 23:17:35.575813 kubelet[2756]: I0423 23:17:35.575733 2756 scope.go:117] "RemoveContainer" containerID="b94902ca617b14d7cdb0a9667970b21f001deb001ea82168fa3a92fd69ca94c2" Apr 23 23:17:35.576115 containerd[1511]: time="2026-04-23T23:17:35.576079657Z" level=error msg="ContainerStatus for 
\"b94902ca617b14d7cdb0a9667970b21f001deb001ea82168fa3a92fd69ca94c2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b94902ca617b14d7cdb0a9667970b21f001deb001ea82168fa3a92fd69ca94c2\": not found" Apr 23 23:17:35.576346 kubelet[2756]: E0423 23:17:35.576316 2756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b94902ca617b14d7cdb0a9667970b21f001deb001ea82168fa3a92fd69ca94c2\": not found" containerID="b94902ca617b14d7cdb0a9667970b21f001deb001ea82168fa3a92fd69ca94c2" Apr 23 23:17:35.576390 kubelet[2756]: I0423 23:17:35.576359 2756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b94902ca617b14d7cdb0a9667970b21f001deb001ea82168fa3a92fd69ca94c2"} err="failed to get container status \"b94902ca617b14d7cdb0a9667970b21f001deb001ea82168fa3a92fd69ca94c2\": rpc error: code = NotFound desc = an error occurred when try to find container \"b94902ca617b14d7cdb0a9667970b21f001deb001ea82168fa3a92fd69ca94c2\": not found" Apr 23 23:17:35.576416 kubelet[2756]: I0423 23:17:35.576387 2756 scope.go:117] "RemoveContainer" containerID="8eff615b5b15cb6ac861b7fbc612308604acdb7172371af6ec54419c8cbe7a25" Apr 23 23:17:35.576723 containerd[1511]: time="2026-04-23T23:17:35.576649621Z" level=error msg="ContainerStatus for \"8eff615b5b15cb6ac861b7fbc612308604acdb7172371af6ec54419c8cbe7a25\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8eff615b5b15cb6ac861b7fbc612308604acdb7172371af6ec54419c8cbe7a25\": not found" Apr 23 23:17:35.576871 kubelet[2756]: E0423 23:17:35.576851 2756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8eff615b5b15cb6ac861b7fbc612308604acdb7172371af6ec54419c8cbe7a25\": not found" 
containerID="8eff615b5b15cb6ac861b7fbc612308604acdb7172371af6ec54419c8cbe7a25" Apr 23 23:17:35.577075 kubelet[2756]: I0423 23:17:35.576976 2756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8eff615b5b15cb6ac861b7fbc612308604acdb7172371af6ec54419c8cbe7a25"} err="failed to get container status \"8eff615b5b15cb6ac861b7fbc612308604acdb7172371af6ec54419c8cbe7a25\": rpc error: code = NotFound desc = an error occurred when try to find container \"8eff615b5b15cb6ac861b7fbc612308604acdb7172371af6ec54419c8cbe7a25\": not found" Apr 23 23:17:35.577075 kubelet[2756]: I0423 23:17:35.576994 2756 scope.go:117] "RemoveContainer" containerID="d0501e0256930d6030924a03380fcc919a0d27df5ab59d4d8e43f12c92977571" Apr 23 23:17:35.577358 containerd[1511]: time="2026-04-23T23:17:35.577328627Z" level=error msg="ContainerStatus for \"d0501e0256930d6030924a03380fcc919a0d27df5ab59d4d8e43f12c92977571\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d0501e0256930d6030924a03380fcc919a0d27df5ab59d4d8e43f12c92977571\": not found" Apr 23 23:17:35.577522 kubelet[2756]: E0423 23:17:35.577501 2756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d0501e0256930d6030924a03380fcc919a0d27df5ab59d4d8e43f12c92977571\": not found" containerID="d0501e0256930d6030924a03380fcc919a0d27df5ab59d4d8e43f12c92977571" Apr 23 23:17:35.577557 kubelet[2756]: I0423 23:17:35.577527 2756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d0501e0256930d6030924a03380fcc919a0d27df5ab59d4d8e43f12c92977571"} err="failed to get container status \"d0501e0256930d6030924a03380fcc919a0d27df5ab59d4d8e43f12c92977571\": rpc error: code = NotFound desc = an error occurred when try to find container \"d0501e0256930d6030924a03380fcc919a0d27df5ab59d4d8e43f12c92977571\": not found" Apr 23 
23:17:35.788402 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4d87aba632fe564e490fa6e648b734efe7e65263436d6e2bbacd0a63393a80d1-shm.mount: Deactivated successfully. Apr 23 23:17:35.788589 systemd[1]: var-lib-kubelet-pods-e34d5373\x2dbecc\x2d4121\x2d908b\x2de6bf799173fd-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dj5xbl.mount: Deactivated successfully. Apr 23 23:17:35.789741 systemd[1]: var-lib-kubelet-pods-27160d2e\x2d7fb2\x2d49bf\x2d9ea8\x2ddd843baea345-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8zf4r.mount: Deactivated successfully. Apr 23 23:17:35.789911 systemd[1]: var-lib-kubelet-pods-27160d2e\x2d7fb2\x2d49bf\x2d9ea8\x2ddd843baea345-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Apr 23 23:17:35.790040 systemd[1]: var-lib-kubelet-pods-27160d2e\x2d7fb2\x2d49bf\x2d9ea8\x2ddd843baea345-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Apr 23 23:17:35.888744 kubelet[2756]: I0423 23:17:35.887691 2756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27160d2e-7fb2-49bf-9ea8-dd843baea345" path="/var/lib/kubelet/pods/27160d2e-7fb2-49bf-9ea8-dd843baea345/volumes" Apr 23 23:17:35.889046 kubelet[2756]: I0423 23:17:35.889014 2756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e34d5373-becc-4121-908b-e6bf799173fd" path="/var/lib/kubelet/pods/e34d5373-becc-4121-908b-e6bf799173fd/volumes" Apr 23 23:17:35.921353 containerd[1511]: time="2026-04-23T23:17:35.921296042Z" level=info msg="StopPodSandbox for \"4d87aba632fe564e490fa6e648b734efe7e65263436d6e2bbacd0a63393a80d1\"" Apr 23 23:17:35.921769 containerd[1511]: time="2026-04-23T23:17:35.921479164Z" level=info msg="TearDown network for sandbox \"4d87aba632fe564e490fa6e648b734efe7e65263436d6e2bbacd0a63393a80d1\" successfully" Apr 23 23:17:35.921769 containerd[1511]: time="2026-04-23T23:17:35.921501364Z" level=info msg="StopPodSandbox for 
\"4d87aba632fe564e490fa6e648b734efe7e65263436d6e2bbacd0a63393a80d1\" returns successfully" Apr 23 23:17:35.922587 containerd[1511]: time="2026-04-23T23:17:35.922465212Z" level=info msg="RemovePodSandbox for \"4d87aba632fe564e490fa6e648b734efe7e65263436d6e2bbacd0a63393a80d1\"" Apr 23 23:17:35.922752 containerd[1511]: time="2026-04-23T23:17:35.922698014Z" level=info msg="Forcibly stopping sandbox \"4d87aba632fe564e490fa6e648b734efe7e65263436d6e2bbacd0a63393a80d1\"" Apr 23 23:17:35.922973 containerd[1511]: time="2026-04-23T23:17:35.922911255Z" level=info msg="TearDown network for sandbox \"4d87aba632fe564e490fa6e648b734efe7e65263436d6e2bbacd0a63393a80d1\" successfully" Apr 23 23:17:35.924179 containerd[1511]: time="2026-04-23T23:17:35.924123585Z" level=info msg="Ensure that sandbox 4d87aba632fe564e490fa6e648b734efe7e65263436d6e2bbacd0a63393a80d1 in task-service has been cleanup successfully" Apr 23 23:17:35.928160 containerd[1511]: time="2026-04-23T23:17:35.927779015Z" level=info msg="RemovePodSandbox \"4d87aba632fe564e490fa6e648b734efe7e65263436d6e2bbacd0a63393a80d1\" returns successfully" Apr 23 23:17:35.931100 containerd[1511]: time="2026-04-23T23:17:35.931065842Z" level=info msg="StopPodSandbox for \"23890f37c491a649611c3131fd0b62333d0f1b4e6a6e2c3876b6bcc088fc4c74\"" Apr 23 23:17:35.931243 containerd[1511]: time="2026-04-23T23:17:35.931187643Z" level=info msg="TearDown network for sandbox \"23890f37c491a649611c3131fd0b62333d0f1b4e6a6e2c3876b6bcc088fc4c74\" successfully" Apr 23 23:17:35.931243 containerd[1511]: time="2026-04-23T23:17:35.931202603Z" level=info msg="StopPodSandbox for \"23890f37c491a649611c3131fd0b62333d0f1b4e6a6e2c3876b6bcc088fc4c74\" returns successfully" Apr 23 23:17:35.931969 containerd[1511]: time="2026-04-23T23:17:35.931925169Z" level=info msg="RemovePodSandbox for \"23890f37c491a649611c3131fd0b62333d0f1b4e6a6e2c3876b6bcc088fc4c74\"" Apr 23 23:17:35.932362 containerd[1511]: time="2026-04-23T23:17:35.932304932Z" level=info msg="Forcibly stopping 
sandbox \"23890f37c491a649611c3131fd0b62333d0f1b4e6a6e2c3876b6bcc088fc4c74\"" Apr 23 23:17:35.932812 containerd[1511]: time="2026-04-23T23:17:35.932685375Z" level=info msg="TearDown network for sandbox \"23890f37c491a649611c3131fd0b62333d0f1b4e6a6e2c3876b6bcc088fc4c74\" successfully" Apr 23 23:17:35.935915 containerd[1511]: time="2026-04-23T23:17:35.935793401Z" level=info msg="Ensure that sandbox 23890f37c491a649611c3131fd0b62333d0f1b4e6a6e2c3876b6bcc088fc4c74 in task-service has been cleanup successfully" Apr 23 23:17:35.939837 containerd[1511]: time="2026-04-23T23:17:35.939735953Z" level=info msg="RemovePodSandbox \"23890f37c491a649611c3131fd0b62333d0f1b4e6a6e2c3876b6bcc088fc4c74\" returns successfully" Apr 23 23:17:36.021627 kubelet[2756]: E0423 23:17:36.021551 2756 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 23 23:17:36.626631 sshd[4323]: Connection closed by 50.85.169.122 port 58834 Apr 23 23:17:36.627845 sshd-session[4320]: pam_unix(sshd:session): session closed for user core Apr 23 23:17:36.634723 systemd-logind[1483]: Session 23 logged out. Waiting for processes to exit. Apr 23 23:17:36.634858 systemd[1]: sshd@22-138.199.150.149:22-50.85.169.122:58834.service: Deactivated successfully. Apr 23 23:17:36.637474 systemd[1]: session-23.scope: Deactivated successfully. Apr 23 23:17:36.638195 systemd[1]: session-23.scope: Consumed 1.101s CPU time, 23.6M memory peak. Apr 23 23:17:36.640919 systemd-logind[1483]: Removed session 23. Apr 23 23:17:36.652038 systemd[1]: Started sshd@23-138.199.150.149:22-50.85.169.122:58850.service - OpenSSH per-connection server daemon (50.85.169.122:58850). 
Apr 23 23:17:36.781436 sshd[4468]: Accepted publickey for core from 50.85.169.122 port 58850 ssh2: RSA SHA256:Tz0dqMPsdf8xUb4jUaTJqqr7RT+Ihh1eVJlUIJQ/qIM Apr 23 23:17:36.783539 sshd-session[4468]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 23 23:17:36.788643 systemd-logind[1483]: New session 24 of user core. Apr 23 23:17:36.793920 systemd[1]: Started session-24.scope - Session 24 of User core. Apr 23 23:17:37.856006 sshd[4471]: Connection closed by 50.85.169.122 port 58850 Apr 23 23:17:37.859086 sshd-session[4468]: pam_unix(sshd:session): session closed for user core Apr 23 23:17:37.864811 systemd-logind[1483]: Session 24 logged out. Waiting for processes to exit. Apr 23 23:17:37.865444 systemd[1]: sshd@23-138.199.150.149:22-50.85.169.122:58850.service: Deactivated successfully. Apr 23 23:17:37.869534 systemd[1]: session-24.scope: Deactivated successfully. Apr 23 23:17:37.883185 systemd-logind[1483]: Removed session 24. Apr 23 23:17:37.885956 systemd[1]: Started sshd@24-138.199.150.149:22-50.85.169.122:58858.service - OpenSSH per-connection server daemon (50.85.169.122:58858). Apr 23 23:17:37.911929 systemd[1]: Created slice kubepods-burstable-pod9155f185_00e1_494e_b853_90f27944f689.slice - libcontainer container kubepods-burstable-pod9155f185_00e1_494e_b853_90f27944f689.slice. 
Apr 23 23:17:38.024126 kubelet[2756]: I0423 23:17:38.024081 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9155f185-00e1-494e-b853-90f27944f689-cni-path\") pod \"cilium-tk5w9\" (UID: \"9155f185-00e1-494e-b853-90f27944f689\") " pod="kube-system/cilium-tk5w9"
Apr 23 23:17:38.024665 kubelet[2756]: I0423 23:17:38.024608 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9155f185-00e1-494e-b853-90f27944f689-clustermesh-secrets\") pod \"cilium-tk5w9\" (UID: \"9155f185-00e1-494e-b853-90f27944f689\") " pod="kube-system/cilium-tk5w9"
Apr 23 23:17:38.024858 kubelet[2756]: I0423 23:17:38.024843 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9155f185-00e1-494e-b853-90f27944f689-cilium-config-path\") pod \"cilium-tk5w9\" (UID: \"9155f185-00e1-494e-b853-90f27944f689\") " pod="kube-system/cilium-tk5w9"
Apr 23 23:17:38.026814 kubelet[2756]: I0423 23:17:38.026791 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9155f185-00e1-494e-b853-90f27944f689-host-proc-sys-kernel\") pod \"cilium-tk5w9\" (UID: \"9155f185-00e1-494e-b853-90f27944f689\") " pod="kube-system/cilium-tk5w9"
Apr 23 23:17:38.026966 kubelet[2756]: I0423 23:17:38.026954 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9155f185-00e1-494e-b853-90f27944f689-etc-cni-netd\") pod \"cilium-tk5w9\" (UID: \"9155f185-00e1-494e-b853-90f27944f689\") " pod="kube-system/cilium-tk5w9"
Apr 23 23:17:38.027070 kubelet[2756]: I0423 23:17:38.027058 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9155f185-00e1-494e-b853-90f27944f689-lib-modules\") pod \"cilium-tk5w9\" (UID: \"9155f185-00e1-494e-b853-90f27944f689\") " pod="kube-system/cilium-tk5w9"
Apr 23 23:17:38.027175 kubelet[2756]: I0423 23:17:38.027163 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9155f185-00e1-494e-b853-90f27944f689-host-proc-sys-net\") pod \"cilium-tk5w9\" (UID: \"9155f185-00e1-494e-b853-90f27944f689\") " pod="kube-system/cilium-tk5w9"
Apr 23 23:17:38.027299 kubelet[2756]: I0423 23:17:38.027285 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2s8r\" (UniqueName: \"kubernetes.io/projected/9155f185-00e1-494e-b853-90f27944f689-kube-api-access-d2s8r\") pod \"cilium-tk5w9\" (UID: \"9155f185-00e1-494e-b853-90f27944f689\") " pod="kube-system/cilium-tk5w9"
Apr 23 23:17:38.027415 kubelet[2756]: I0423 23:17:38.027403 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9155f185-00e1-494e-b853-90f27944f689-cilium-run\") pod \"cilium-tk5w9\" (UID: \"9155f185-00e1-494e-b853-90f27944f689\") " pod="kube-system/cilium-tk5w9"
Apr 23 23:17:38.027506 kubelet[2756]: I0423 23:17:38.027496 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9155f185-00e1-494e-b853-90f27944f689-bpf-maps\") pod \"cilium-tk5w9\" (UID: \"9155f185-00e1-494e-b853-90f27944f689\") " pod="kube-system/cilium-tk5w9"
Apr 23 23:17:38.027634 kubelet[2756]: I0423 23:17:38.027581 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9155f185-00e1-494e-b853-90f27944f689-hostproc\") pod \"cilium-tk5w9\" (UID: \"9155f185-00e1-494e-b853-90f27944f689\") " pod="kube-system/cilium-tk5w9"
Apr 23 23:17:38.027634 kubelet[2756]: I0423 23:17:38.027599 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9155f185-00e1-494e-b853-90f27944f689-cilium-cgroup\") pod \"cilium-tk5w9\" (UID: \"9155f185-00e1-494e-b853-90f27944f689\") " pod="kube-system/cilium-tk5w9"
Apr 23 23:17:38.027738 kubelet[2756]: I0423 23:17:38.027720 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9155f185-00e1-494e-b853-90f27944f689-xtables-lock\") pod \"cilium-tk5w9\" (UID: \"9155f185-00e1-494e-b853-90f27944f689\") " pod="kube-system/cilium-tk5w9"
Apr 23 23:17:38.027842 kubelet[2756]: I0423 23:17:38.027829 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/9155f185-00e1-494e-b853-90f27944f689-cilium-ipsec-secrets\") pod \"cilium-tk5w9\" (UID: \"9155f185-00e1-494e-b853-90f27944f689\") " pod="kube-system/cilium-tk5w9"
Apr 23 23:17:38.027968 kubelet[2756]: I0423 23:17:38.027956 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9155f185-00e1-494e-b853-90f27944f689-hubble-tls\") pod \"cilium-tk5w9\" (UID: \"9155f185-00e1-494e-b853-90f27944f689\") " pod="kube-system/cilium-tk5w9"
Apr 23 23:17:38.038053 sshd[4482]: Accepted publickey for core from 50.85.169.122 port 58858 ssh2: RSA SHA256:Tz0dqMPsdf8xUb4jUaTJqqr7RT+Ihh1eVJlUIJQ/qIM
Apr 23 23:17:38.040319 sshd-session[4482]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 23 23:17:38.046930 systemd-logind[1483]: New session 25 of user core.
Apr 23 23:17:38.050941 systemd[1]: Started session-25.scope - Session 25 of User core.
Apr 23 23:17:38.089751 sshd[4485]: Connection closed by 50.85.169.122 port 58858
Apr 23 23:17:38.090539 sshd-session[4482]: pam_unix(sshd:session): session closed for user core
Apr 23 23:17:38.096262 systemd[1]: sshd@24-138.199.150.149:22-50.85.169.122:58858.service: Deactivated successfully.
Apr 23 23:17:38.100594 systemd[1]: session-25.scope: Deactivated successfully.
Apr 23 23:17:38.103330 systemd-logind[1483]: Session 25 logged out. Waiting for processes to exit.
Apr 23 23:17:38.105590 systemd-logind[1483]: Removed session 25.
Apr 23 23:17:38.117998 systemd[1]: Started sshd@25-138.199.150.149:22-50.85.169.122:58866.service - OpenSSH per-connection server daemon (50.85.169.122:58866).
Apr 23 23:17:38.221392 containerd[1511]: time="2026-04-23T23:17:38.221314207Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-tk5w9,Uid:9155f185-00e1-494e-b853-90f27944f689,Namespace:kube-system,Attempt:0,}"
Apr 23 23:17:38.239335 containerd[1511]: time="2026-04-23T23:17:38.239288554Z" level=info msg="connecting to shim 0c7b79c69ade6ea9518686a5341247ff819bcb234805ee5f0663597e389a24a9" address="unix:///run/containerd/s/5faaff7a1c7bbdd9db8078e8ac2d99ceaeb7fc32c9b3ba1423841aae85e9a4d1" namespace=k8s.io protocol=ttrpc version=3
Apr 23 23:17:38.265737 sshd[4492]: Accepted publickey for core from 50.85.169.122 port 58866 ssh2: RSA SHA256:Tz0dqMPsdf8xUb4jUaTJqqr7RT+Ihh1eVJlUIJQ/qIM
Apr 23 23:17:38.267654 sshd-session[4492]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 23 23:17:38.267967 systemd[1]: Started cri-containerd-0c7b79c69ade6ea9518686a5341247ff819bcb234805ee5f0663597e389a24a9.scope - libcontainer container 0c7b79c69ade6ea9518686a5341247ff819bcb234805ee5f0663597e389a24a9.
Apr 23 23:17:38.277675 systemd-logind[1483]: New session 26 of user core.
Apr 23 23:17:38.280867 systemd[1]: Started session-26.scope - Session 26 of User core.
Apr 23 23:17:38.309976 containerd[1511]: time="2026-04-23T23:17:38.309866930Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-tk5w9,Uid:9155f185-00e1-494e-b853-90f27944f689,Namespace:kube-system,Attempt:0,} returns sandbox id \"0c7b79c69ade6ea9518686a5341247ff819bcb234805ee5f0663597e389a24a9\""
Apr 23 23:17:38.319830 containerd[1511]: time="2026-04-23T23:17:38.318880684Z" level=info msg="CreateContainer within sandbox \"0c7b79c69ade6ea9518686a5341247ff819bcb234805ee5f0663597e389a24a9\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Apr 23 23:17:38.335120 containerd[1511]: time="2026-04-23T23:17:38.335048096Z" level=info msg="Container 3780a2f404407a11dd70cba17a2c605ee102fbfd48567dfdcf23d4329a9f1f11: CDI devices from CRI Config.CDIDevices: []"
Apr 23 23:17:38.350219 containerd[1511]: time="2026-04-23T23:17:38.350145859Z" level=info msg="CreateContainer within sandbox \"0c7b79c69ade6ea9518686a5341247ff819bcb234805ee5f0663597e389a24a9\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"3780a2f404407a11dd70cba17a2c605ee102fbfd48567dfdcf23d4329a9f1f11\""
Apr 23 23:17:38.353834 containerd[1511]: time="2026-04-23T23:17:38.351867554Z" level=info msg="StartContainer for \"3780a2f404407a11dd70cba17a2c605ee102fbfd48567dfdcf23d4329a9f1f11\""
Apr 23 23:17:38.354186 containerd[1511]: time="2026-04-23T23:17:38.353983451Z" level=info msg="connecting to shim 3780a2f404407a11dd70cba17a2c605ee102fbfd48567dfdcf23d4329a9f1f11" address="unix:///run/containerd/s/5faaff7a1c7bbdd9db8078e8ac2d99ceaeb7fc32c9b3ba1423841aae85e9a4d1" protocol=ttrpc version=3
Apr 23 23:17:38.383920 systemd[1]: Started cri-containerd-3780a2f404407a11dd70cba17a2c605ee102fbfd48567dfdcf23d4329a9f1f11.scope - libcontainer container 3780a2f404407a11dd70cba17a2c605ee102fbfd48567dfdcf23d4329a9f1f11.
Apr 23 23:17:38.433023 containerd[1511]: time="2026-04-23T23:17:38.432891935Z" level=info msg="StartContainer for \"3780a2f404407a11dd70cba17a2c605ee102fbfd48567dfdcf23d4329a9f1f11\" returns successfully"
Apr 23 23:17:38.445910 systemd[1]: cri-containerd-3780a2f404407a11dd70cba17a2c605ee102fbfd48567dfdcf23d4329a9f1f11.scope: Deactivated successfully.
Apr 23 23:17:38.451053 containerd[1511]: time="2026-04-23T23:17:38.451003803Z" level=info msg="received container exit event container_id:\"3780a2f404407a11dd70cba17a2c605ee102fbfd48567dfdcf23d4329a9f1f11\" id:\"3780a2f404407a11dd70cba17a2c605ee102fbfd48567dfdcf23d4329a9f1f11\" pid:4562 exited_at:{seconds:1776986258 nanos:450655321}"
Apr 23 23:17:38.544281 containerd[1511]: time="2026-04-23T23:17:38.544142484Z" level=info msg="CreateContainer within sandbox \"0c7b79c69ade6ea9518686a5341247ff819bcb234805ee5f0663597e389a24a9\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Apr 23 23:17:38.566426 containerd[1511]: time="2026-04-23T23:17:38.566375546Z" level=info msg="Container 84c5f986c2007ac1285f7f08263172c21350d5966247fa72d746d45a9258a804: CDI devices from CRI Config.CDIDevices: []"
Apr 23 23:17:38.575839 containerd[1511]: time="2026-04-23T23:17:38.575675622Z" level=info msg="CreateContainer within sandbox \"0c7b79c69ade6ea9518686a5341247ff819bcb234805ee5f0663597e389a24a9\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"84c5f986c2007ac1285f7f08263172c21350d5966247fa72d746d45a9258a804\""
Apr 23 23:17:38.577519 containerd[1511]: time="2026-04-23T23:17:38.577438076Z" level=info msg="StartContainer for \"84c5f986c2007ac1285f7f08263172c21350d5966247fa72d746d45a9258a804\""
Apr 23 23:17:38.579684 containerd[1511]: time="2026-04-23T23:17:38.579643094Z" level=info msg="connecting to shim 84c5f986c2007ac1285f7f08263172c21350d5966247fa72d746d45a9258a804" address="unix:///run/containerd/s/5faaff7a1c7bbdd9db8078e8ac2d99ceaeb7fc32c9b3ba1423841aae85e9a4d1" protocol=ttrpc version=3
Apr 23 23:17:38.602230 systemd[1]: Started cri-containerd-84c5f986c2007ac1285f7f08263172c21350d5966247fa72d746d45a9258a804.scope - libcontainer container 84c5f986c2007ac1285f7f08263172c21350d5966247fa72d746d45a9258a804.
Apr 23 23:17:38.642024 containerd[1511]: time="2026-04-23T23:17:38.641269838Z" level=info msg="StartContainer for \"84c5f986c2007ac1285f7f08263172c21350d5966247fa72d746d45a9258a804\" returns successfully"
Apr 23 23:17:38.644260 systemd[1]: cri-containerd-84c5f986c2007ac1285f7f08263172c21350d5966247fa72d746d45a9258a804.scope: Deactivated successfully.
Apr 23 23:17:38.647553 containerd[1511]: time="2026-04-23T23:17:38.647502849Z" level=info msg="received container exit event container_id:\"84c5f986c2007ac1285f7f08263172c21350d5966247fa72d746d45a9258a804\" id:\"84c5f986c2007ac1285f7f08263172c21350d5966247fa72d746d45a9258a804\" pid:4610 exited_at:{seconds:1776986258 nanos:647060765}"
Apr 23 23:17:39.550525 containerd[1511]: time="2026-04-23T23:17:39.550087179Z" level=info msg="CreateContainer within sandbox \"0c7b79c69ade6ea9518686a5341247ff819bcb234805ee5f0663597e389a24a9\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Apr 23 23:17:39.563464 containerd[1511]: time="2026-04-23T23:17:39.563419608Z" level=info msg="Container 15717dcbabff89802b0751ac0fe0c1b20048db09cff1883a621822e02e11bc3d: CDI devices from CRI Config.CDIDevices: []"
Apr 23 23:17:39.567357 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2250976376.mount: Deactivated successfully.
Apr 23 23:17:39.576910 containerd[1511]: time="2026-04-23T23:17:39.576866918Z" level=info msg="CreateContainer within sandbox \"0c7b79c69ade6ea9518686a5341247ff819bcb234805ee5f0663597e389a24a9\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"15717dcbabff89802b0751ac0fe0c1b20048db09cff1883a621822e02e11bc3d\""
Apr 23 23:17:39.578864 containerd[1511]: time="2026-04-23T23:17:39.578830534Z" level=info msg="StartContainer for \"15717dcbabff89802b0751ac0fe0c1b20048db09cff1883a621822e02e11bc3d\""
Apr 23 23:17:39.580779 containerd[1511]: time="2026-04-23T23:17:39.580749229Z" level=info msg="connecting to shim 15717dcbabff89802b0751ac0fe0c1b20048db09cff1883a621822e02e11bc3d" address="unix:///run/containerd/s/5faaff7a1c7bbdd9db8078e8ac2d99ceaeb7fc32c9b3ba1423841aae85e9a4d1" protocol=ttrpc version=3
Apr 23 23:17:39.605984 systemd[1]: Started cri-containerd-15717dcbabff89802b0751ac0fe0c1b20048db09cff1883a621822e02e11bc3d.scope - libcontainer container 15717dcbabff89802b0751ac0fe0c1b20048db09cff1883a621822e02e11bc3d.
Apr 23 23:17:39.678268 systemd[1]: cri-containerd-15717dcbabff89802b0751ac0fe0c1b20048db09cff1883a621822e02e11bc3d.scope: Deactivated successfully.
Apr 23 23:17:39.680787 containerd[1511]: time="2026-04-23T23:17:39.680742726Z" level=info msg="StartContainer for \"15717dcbabff89802b0751ac0fe0c1b20048db09cff1883a621822e02e11bc3d\" returns successfully"
Apr 23 23:17:39.682332 containerd[1511]: time="2026-04-23T23:17:39.682298738Z" level=info msg="received container exit event container_id:\"15717dcbabff89802b0751ac0fe0c1b20048db09cff1883a621822e02e11bc3d\" id:\"15717dcbabff89802b0751ac0fe0c1b20048db09cff1883a621822e02e11bc3d\" pid:4656 exited_at:{seconds:1776986259 nanos:682142897}"
Apr 23 23:17:39.703930 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-15717dcbabff89802b0751ac0fe0c1b20048db09cff1883a621822e02e11bc3d-rootfs.mount: Deactivated successfully.
Apr 23 23:17:39.810295 kubelet[2756]: I0423 23:17:39.810108 2756 setters.go:543] "Node became not ready" node="ci-4459-2-4-n-08a122edc2" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-23T23:17:39Z","lastTransitionTime":"2026-04-23T23:17:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Apr 23 23:17:40.556467 containerd[1511]: time="2026-04-23T23:17:40.556404632Z" level=info msg="CreateContainer within sandbox \"0c7b79c69ade6ea9518686a5341247ff819bcb234805ee5f0663597e389a24a9\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Apr 23 23:17:40.571425 containerd[1511]: time="2026-04-23T23:17:40.570947831Z" level=info msg="Container bc1da1bc64129d3cf12912613f5d91d568429de6e519e3b7e9b2dace83b1be16: CDI devices from CRI Config.CDIDevices: []"
Apr 23 23:17:40.582027 containerd[1511]: time="2026-04-23T23:17:40.581965240Z" level=info msg="CreateContainer within sandbox \"0c7b79c69ade6ea9518686a5341247ff819bcb234805ee5f0663597e389a24a9\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"bc1da1bc64129d3cf12912613f5d91d568429de6e519e3b7e9b2dace83b1be16\""
Apr 23 23:17:40.584848 containerd[1511]: time="2026-04-23T23:17:40.584817544Z" level=info msg="StartContainer for \"bc1da1bc64129d3cf12912613f5d91d568429de6e519e3b7e9b2dace83b1be16\""
Apr 23 23:17:40.587035 containerd[1511]: time="2026-04-23T23:17:40.586766400Z" level=info msg="connecting to shim bc1da1bc64129d3cf12912613f5d91d568429de6e519e3b7e9b2dace83b1be16" address="unix:///run/containerd/s/5faaff7a1c7bbdd9db8078e8ac2d99ceaeb7fc32c9b3ba1423841aae85e9a4d1" protocol=ttrpc version=3
Apr 23 23:17:40.610111 systemd[1]: Started cri-containerd-bc1da1bc64129d3cf12912613f5d91d568429de6e519e3b7e9b2dace83b1be16.scope - libcontainer container bc1da1bc64129d3cf12912613f5d91d568429de6e519e3b7e9b2dace83b1be16.
Apr 23 23:17:40.638866 systemd[1]: cri-containerd-bc1da1bc64129d3cf12912613f5d91d568429de6e519e3b7e9b2dace83b1be16.scope: Deactivated successfully.
Apr 23 23:17:40.643057 containerd[1511]: time="2026-04-23T23:17:40.642977858Z" level=info msg="received container exit event container_id:\"bc1da1bc64129d3cf12912613f5d91d568429de6e519e3b7e9b2dace83b1be16\" id:\"bc1da1bc64129d3cf12912613f5d91d568429de6e519e3b7e9b2dace83b1be16\" pid:4696 exited_at:{seconds:1776986260 nanos:640209676}"
Apr 23 23:17:40.645747 containerd[1511]: time="2026-04-23T23:17:40.645006355Z" level=info msg="StartContainer for \"bc1da1bc64129d3cf12912613f5d91d568429de6e519e3b7e9b2dace83b1be16\" returns successfully"
Apr 23 23:17:40.672588 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bc1da1bc64129d3cf12912613f5d91d568429de6e519e3b7e9b2dace83b1be16-rootfs.mount: Deactivated successfully.
Apr 23 23:17:41.023957 kubelet[2756]: E0423 23:17:41.023787 2756 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 23 23:17:41.567639 containerd[1511]: time="2026-04-23T23:17:41.567433718Z" level=info msg="CreateContainer within sandbox \"0c7b79c69ade6ea9518686a5341247ff819bcb234805ee5f0663597e389a24a9\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Apr 23 23:17:41.580991 containerd[1511]: time="2026-04-23T23:17:41.580126262Z" level=info msg="Container fb8194db7023ed943aa160ba06ebb7d1bca6f631b7dcd145c026b88a221346dc: CDI devices from CRI Config.CDIDevices: []"
Apr 23 23:17:41.582420 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3437585944.mount: Deactivated successfully.
Apr 23 23:17:41.592581 containerd[1511]: time="2026-04-23T23:17:41.592535003Z" level=info msg="CreateContainer within sandbox \"0c7b79c69ade6ea9518686a5341247ff819bcb234805ee5f0663597e389a24a9\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"fb8194db7023ed943aa160ba06ebb7d1bca6f631b7dcd145c026b88a221346dc\""
Apr 23 23:17:41.593431 containerd[1511]: time="2026-04-23T23:17:41.593357130Z" level=info msg="StartContainer for \"fb8194db7023ed943aa160ba06ebb7d1bca6f631b7dcd145c026b88a221346dc\""
Apr 23 23:17:41.594398 containerd[1511]: time="2026-04-23T23:17:41.594355538Z" level=info msg="connecting to shim fb8194db7023ed943aa160ba06ebb7d1bca6f631b7dcd145c026b88a221346dc" address="unix:///run/containerd/s/5faaff7a1c7bbdd9db8078e8ac2d99ceaeb7fc32c9b3ba1423841aae85e9a4d1" protocol=ttrpc version=3
Apr 23 23:17:41.614906 systemd[1]: Started cri-containerd-fb8194db7023ed943aa160ba06ebb7d1bca6f631b7dcd145c026b88a221346dc.scope - libcontainer container fb8194db7023ed943aa160ba06ebb7d1bca6f631b7dcd145c026b88a221346dc.
Apr 23 23:17:41.663446 containerd[1511]: time="2026-04-23T23:17:41.663405661Z" level=info msg="StartContainer for \"fb8194db7023ed943aa160ba06ebb7d1bca6f631b7dcd145c026b88a221346dc\" returns successfully"
Apr 23 23:17:41.995751 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Apr 23 23:17:42.590305 kubelet[2756]: I0423 23:17:42.590133 2756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-tk5w9" podStartSLOduration=5.5901168949999995 podStartE2EDuration="5.590116895s" podCreationTimestamp="2026-04-23 23:17:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-23 23:17:42.589760012 +0000 UTC m=+186.837871253" watchObservedRunningTime="2026-04-23 23:17:42.590116895 +0000 UTC m=+186.838228096"
Apr 23 23:17:44.839488 systemd-networkd[1430]: lxc_health: Link UP
Apr 23 23:17:44.841671 systemd-networkd[1430]: lxc_health: Gained carrier
Apr 23 23:17:46.766981 systemd-networkd[1430]: lxc_health: Gained IPv6LL
Apr 23 23:17:51.130274 kubelet[2756]: E0423 23:17:51.129838 2756 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:57268->127.0.0.1:44133: write tcp 127.0.0.1:57268->127.0.0.1:44133: write: broken pipe
Apr 23 23:17:51.146739 sshd[4537]: Connection closed by 50.85.169.122 port 58866
Apr 23 23:17:51.147684 sshd-session[4492]: pam_unix(sshd:session): session closed for user core
Apr 23 23:17:51.153992 systemd[1]: sshd@25-138.199.150.149:22-50.85.169.122:58866.service: Deactivated successfully.
Apr 23 23:17:51.157380 systemd[1]: session-26.scope: Deactivated successfully.
Apr 23 23:17:51.159015 systemd-logind[1483]: Session 26 logged out. Waiting for processes to exit.
Apr 23 23:17:51.161437 systemd-logind[1483]: Removed session 26.