Jan 23 00:06:53.808049 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jan 23 00:06:53.808072 kernel: Linux version 6.12.66-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Thu Jan 22 22:21:53 -00 2026
Jan 23 00:06:53.810672 kernel: KASLR enabled
Jan 23 00:06:53.810682 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II
Jan 23 00:06:53.810688 kernel: efi: SMBIOS 3.0=0x139ed0000 MEMATTR=0x1390bb018 ACPI 2.0=0x136760018 RNG=0x13676e918 MEMRESERVE=0x136b41218
Jan 23 00:06:53.810694 kernel: random: crng init done
Jan 23 00:06:53.810700 kernel: secureboot: Secure boot disabled
Jan 23 00:06:53.810706 kernel: ACPI: Early table checksum verification disabled
Jan 23 00:06:53.810712 kernel: ACPI: RSDP 0x0000000136760018 000024 (v02 BOCHS )
Jan 23 00:06:53.810718 kernel: ACPI: XSDT 0x000000013676FE98 00006C (v01 BOCHS BXPC 00000001 01000013)
Jan 23 00:06:53.810725 kernel: ACPI: FACP 0x000000013676FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 00:06:53.810732 kernel: ACPI: DSDT 0x0000000136767518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 00:06:53.810737 kernel: ACPI: APIC 0x000000013676FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 00:06:53.810743 kernel: ACPI: PPTT 0x000000013676FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 00:06:53.810750 kernel: ACPI: GTDT 0x000000013676D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 00:06:53.810758 kernel: ACPI: MCFG 0x000000013676FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 00:06:53.810764 kernel: ACPI: SPCR 0x000000013676E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 00:06:53.810770 kernel: ACPI: DBG2 0x000000013676E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 00:06:53.810776 kernel: ACPI: IORT 0x000000013676E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 00:06:53.810782 kernel: ACPI: BGRT 0x000000013676E798 000038 (v01 INTEL EDK2 00000002 01000013)
Jan 23 00:06:53.810788 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600
Jan 23 00:06:53.810794 kernel: ACPI: Use ACPI SPCR as default console: Yes
Jan 23 00:06:53.810801 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff]
Jan 23 00:06:53.810807 kernel: NODE_DATA(0) allocated [mem 0x13967da00-0x139684fff]
Jan 23 00:06:53.810813 kernel: Zone ranges:
Jan 23 00:06:53.810819 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Jan 23 00:06:53.810826 kernel: DMA32 empty
Jan 23 00:06:53.810832 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff]
Jan 23 00:06:53.810838 kernel: Device empty
Jan 23 00:06:53.810844 kernel: Movable zone start for each node
Jan 23 00:06:53.810850 kernel: Early memory node ranges
Jan 23 00:06:53.810857 kernel: node 0: [mem 0x0000000040000000-0x000000013666ffff]
Jan 23 00:06:53.810863 kernel: node 0: [mem 0x0000000136670000-0x000000013667ffff]
Jan 23 00:06:53.810869 kernel: node 0: [mem 0x0000000136680000-0x000000013676ffff]
Jan 23 00:06:53.810875 kernel: node 0: [mem 0x0000000136770000-0x0000000136b3ffff]
Jan 23 00:06:53.810881 kernel: node 0: [mem 0x0000000136b40000-0x0000000139e1ffff]
Jan 23 00:06:53.810887 kernel: node 0: [mem 0x0000000139e20000-0x0000000139eaffff]
Jan 23 00:06:53.810892 kernel: node 0: [mem 0x0000000139eb0000-0x0000000139ebffff]
Jan 23 00:06:53.810900 kernel: node 0: [mem 0x0000000139ec0000-0x0000000139fdffff]
Jan 23 00:06:53.810906 kernel: node 0: [mem 0x0000000139fe0000-0x0000000139ffffff]
Jan 23 00:06:53.810914 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff]
Jan 23 00:06:53.810921 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
Jan 23 00:06:53.810928 kernel: cma: Reserved 16 MiB at 0x00000000ff000000 on node -1
Jan 23 00:06:53.810935 kernel: psci: probing for conduit method from ACPI.
Jan 23 00:06:53.810942 kernel: psci: PSCIv1.1 detected in firmware.
Jan 23 00:06:53.810948 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 23 00:06:53.810955 kernel: psci: Trusted OS migration not required
Jan 23 00:06:53.810961 kernel: psci: SMC Calling Convention v1.1
Jan 23 00:06:53.810968 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jan 23 00:06:53.810975 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Jan 23 00:06:53.810981 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Jan 23 00:06:53.810987 kernel: pcpu-alloc: [0] 0 [0] 1
Jan 23 00:06:53.810994 kernel: Detected PIPT I-cache on CPU0
Jan 23 00:06:53.811000 kernel: CPU features: detected: GIC system register CPU interface
Jan 23 00:06:53.811008 kernel: CPU features: detected: Spectre-v4
Jan 23 00:06:53.811015 kernel: CPU features: detected: Spectre-BHB
Jan 23 00:06:53.811021 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jan 23 00:06:53.811028 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jan 23 00:06:53.811034 kernel: CPU features: detected: ARM erratum 1418040
Jan 23 00:06:53.811040 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jan 23 00:06:53.811047 kernel: alternatives: applying boot alternatives
Jan 23 00:06:53.811054 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=38aa0560e146398cb8c3378a56d449784f1c7652139d7b61279d764fcc4c793a
Jan 23 00:06:53.811061 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 23 00:06:53.811068 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 23 00:06:53.811074 kernel: Fallback order for Node 0: 0
Jan 23 00:06:53.812045 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1024000
Jan 23 00:06:53.812054 kernel: Policy zone: Normal
Jan 23 00:06:53.812060 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 23 00:06:53.812067 kernel: software IO TLB: area num 2.
Jan 23 00:06:53.812073 kernel: software IO TLB: mapped [mem 0x00000000fb000000-0x00000000ff000000] (64MB)
Jan 23 00:06:53.812123 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 23 00:06:53.812130 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 23 00:06:53.812137 kernel: rcu: RCU event tracing is enabled.
Jan 23 00:06:53.812144 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 23 00:06:53.812150 kernel: Trampoline variant of Tasks RCU enabled.
Jan 23 00:06:53.812157 kernel: Tracing variant of Tasks RCU enabled.
Jan 23 00:06:53.812163 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 23 00:06:53.812173 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 23 00:06:53.812179 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 23 00:06:53.812186 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 23 00:06:53.812193 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 23 00:06:53.812199 kernel: GICv3: 256 SPIs implemented
Jan 23 00:06:53.812205 kernel: GICv3: 0 Extended SPIs implemented
Jan 23 00:06:53.812212 kernel: Root IRQ handler: gic_handle_irq
Jan 23 00:06:53.812218 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jan 23 00:06:53.812225 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Jan 23 00:06:53.812231 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jan 23 00:06:53.812237 kernel: ITS [mem 0x08080000-0x0809ffff]
Jan 23 00:06:53.812246 kernel: ITS@0x0000000008080000: allocated 8192 Devices @100100000 (indirect, esz 8, psz 64K, shr 1)
Jan 23 00:06:53.812259 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @100110000 (flat, esz 8, psz 64K, shr 1)
Jan 23 00:06:53.812266 kernel: GICv3: using LPI property table @0x0000000100120000
Jan 23 00:06:53.812272 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000100130000
Jan 23 00:06:53.812279 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 23 00:06:53.812285 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 23 00:06:53.812292 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jan 23 00:06:53.812298 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jan 23 00:06:53.812305 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jan 23 00:06:53.812311 kernel: Console: colour dummy device 80x25
Jan 23 00:06:53.812318 kernel: ACPI: Core revision 20240827
Jan 23 00:06:53.812328 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jan 23 00:06:53.812335 kernel: pid_max: default: 32768 minimum: 301
Jan 23 00:06:53.812341 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jan 23 00:06:53.812348 kernel: landlock: Up and running.
Jan 23 00:06:53.812355 kernel: SELinux: Initializing.
Jan 23 00:06:53.812362 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 23 00:06:53.812369 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 23 00:06:53.812375 kernel: rcu: Hierarchical SRCU implementation.
Jan 23 00:06:53.812383 kernel: rcu: Max phase no-delay instances is 400.
Jan 23 00:06:53.812391 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jan 23 00:06:53.812397 kernel: Remapping and enabling EFI services.
Jan 23 00:06:53.812404 kernel: smp: Bringing up secondary CPUs ...
Jan 23 00:06:53.812411 kernel: Detected PIPT I-cache on CPU1
Jan 23 00:06:53.812417 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jan 23 00:06:53.812424 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100140000
Jan 23 00:06:53.812431 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 23 00:06:53.812437 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jan 23 00:06:53.812444 kernel: smp: Brought up 1 node, 2 CPUs
Jan 23 00:06:53.812451 kernel: SMP: Total of 2 processors activated.
Jan 23 00:06:53.812464 kernel: CPU: All CPU(s) started at EL1
Jan 23 00:06:53.812471 kernel: CPU features: detected: 32-bit EL0 Support
Jan 23 00:06:53.812480 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jan 23 00:06:53.812487 kernel: CPU features: detected: Common not Private translations
Jan 23 00:06:53.812494 kernel: CPU features: detected: CRC32 instructions
Jan 23 00:06:53.812501 kernel: CPU features: detected: Enhanced Virtualization Traps
Jan 23 00:06:53.812508 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jan 23 00:06:53.812517 kernel: CPU features: detected: LSE atomic instructions
Jan 23 00:06:53.812524 kernel: CPU features: detected: Privileged Access Never
Jan 23 00:06:53.812540 kernel: CPU features: detected: RAS Extension Support
Jan 23 00:06:53.812547 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jan 23 00:06:53.812554 kernel: alternatives: applying system-wide alternatives
Jan 23 00:06:53.812561 kernel: CPU features: detected: Hardware dirty bit management on CPU0-1
Jan 23 00:06:53.812569 kernel: Memory: 3858852K/4096000K available (11200K kernel code, 2458K rwdata, 9088K rodata, 39552K init, 1038K bss, 215668K reserved, 16384K cma-reserved)
Jan 23 00:06:53.812577 kernel: devtmpfs: initialized
Jan 23 00:06:53.812584 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 23 00:06:53.812593 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 23 00:06:53.812601 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jan 23 00:06:53.812608 kernel: 0 pages in range for non-PLT usage
Jan 23 00:06:53.812615 kernel: 508400 pages in range for PLT usage
Jan 23 00:06:53.812622 kernel: pinctrl core: initialized pinctrl subsystem
Jan 23 00:06:53.812629 kernel: SMBIOS 3.0.0 present.
Jan 23 00:06:53.812636 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017
Jan 23 00:06:53.812643 kernel: DMI: Memory slots populated: 1/1
Jan 23 00:06:53.812650 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 23 00:06:53.812659 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 23 00:06:53.812666 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 23 00:06:53.812673 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 23 00:06:53.812680 kernel: audit: initializing netlink subsys (disabled)
Jan 23 00:06:53.812687 kernel: audit: type=2000 audit(0.014:1): state=initialized audit_enabled=0 res=1
Jan 23 00:06:53.812694 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 23 00:06:53.812701 kernel: cpuidle: using governor menu
Jan 23 00:06:53.812709 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 23 00:06:53.812716 kernel: ASID allocator initialised with 32768 entries
Jan 23 00:06:53.812724 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 23 00:06:53.812731 kernel: Serial: AMBA PL011 UART driver
Jan 23 00:06:53.812738 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 23 00:06:53.812745 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 23 00:06:53.812752 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 23 00:06:53.812759 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 23 00:06:53.812766 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 23 00:06:53.812773 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 23 00:06:53.812780 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 23 00:06:53.812789 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 23 00:06:53.812796 kernel: ACPI: Added _OSI(Module Device)
Jan 23 00:06:53.812803 kernel: ACPI: Added _OSI(Processor Device)
Jan 23 00:06:53.812810 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 23 00:06:53.812817 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 23 00:06:53.812824 kernel: ACPI: Interpreter enabled
Jan 23 00:06:53.812831 kernel: ACPI: Using GIC for interrupt routing
Jan 23 00:06:53.812838 kernel: ACPI: MCFG table detected, 1 entries
Jan 23 00:06:53.814135 kernel: ACPI: CPU0 has been hot-added
Jan 23 00:06:53.814166 kernel: ACPI: CPU1 has been hot-added
Jan 23 00:06:53.814174 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jan 23 00:06:53.814181 kernel: printk: legacy console [ttyAMA0] enabled
Jan 23 00:06:53.814188 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 23 00:06:53.814331 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 23 00:06:53.814421 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 23 00:06:53.814487 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 23 00:06:53.814564 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jan 23 00:06:53.814632 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jan 23 00:06:53.814642 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jan 23 00:06:53.814652 kernel: PCI host bridge to bus 0000:00
Jan 23 00:06:53.814720 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jan 23 00:06:53.814775 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jan 23 00:06:53.814827 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jan 23 00:06:53.814878 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 23 00:06:53.814959 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
Jan 23 00:06:53.815029 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000 conventional PCI endpoint
Jan 23 00:06:53.815923 kernel: pci 0000:00:01.0: BAR 1 [mem 0x11289000-0x11289fff]
Jan 23 00:06:53.816004 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000600000-0x8000603fff 64bit pref]
Jan 23 00:06:53.816100 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 00:06:53.816168 kernel: pci 0000:00:02.0: BAR 0 [mem 0x11288000-0x11288fff]
Jan 23 00:06:53.816237 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Jan 23 00:06:53.816297 kernel: pci 0000:00:02.0: bridge window [mem 0x11000000-0x111fffff]
Jan 23 00:06:53.816357 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80000fffff 64bit pref]
Jan 23 00:06:53.816431 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 00:06:53.816501 kernel: pci 0000:00:02.1: BAR 0 [mem 0x11287000-0x11287fff]
Jan 23 00:06:53.816605 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Jan 23 00:06:53.816672 kernel: pci 0000:00:02.1: bridge window [mem 0x10e00000-0x10ffffff]
Jan 23 00:06:53.816746 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 00:06:53.816807 kernel: pci 0000:00:02.2: BAR 0 [mem 0x11286000-0x11286fff]
Jan 23 00:06:53.816865 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Jan 23 00:06:53.816923 kernel: pci 0000:00:02.2: bridge window [mem 0x10c00000-0x10dfffff]
Jan 23 00:06:53.816981 kernel: pci 0000:00:02.2: bridge window [mem 0x8000100000-0x80001fffff 64bit pref]
Jan 23 00:06:53.817045 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 00:06:53.817119 kernel: pci 0000:00:02.3: BAR 0 [mem 0x11285000-0x11285fff]
Jan 23 00:06:53.817183 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Jan 23 00:06:53.817241 kernel: pci 0000:00:02.3: bridge window [mem 0x10a00000-0x10bfffff]
Jan 23 00:06:53.817298 kernel: pci 0000:00:02.3: bridge window [mem 0x8000200000-0x80002fffff 64bit pref]
Jan 23 00:06:53.817363 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 00:06:53.817422 kernel: pci 0000:00:02.4: BAR 0 [mem 0x11284000-0x11284fff]
Jan 23 00:06:53.817480 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Jan 23 00:06:53.817550 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]
Jan 23 00:06:53.817616 kernel: pci 0000:00:02.4: bridge window [mem 0x8000300000-0x80003fffff 64bit pref]
Jan 23 00:06:53.817686 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 00:06:53.817748 kernel: pci 0000:00:02.5: BAR 0 [mem 0x11283000-0x11283fff]
Jan 23 00:06:53.817807 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Jan 23 00:06:53.817867 kernel: pci 0000:00:02.5: bridge window [mem 0x10600000-0x107fffff]
Jan 23 00:06:53.817926 kernel: pci 0000:00:02.5: bridge window [mem 0x8000400000-0x80004fffff 64bit pref]
Jan 23 00:06:53.817995 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 00:06:53.818056 kernel: pci 0000:00:02.6: BAR 0 [mem 0x11282000-0x11282fff]
Jan 23 00:06:53.819491 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Jan 23 00:06:53.819614 kernel: pci 0000:00:02.6: bridge window [mem 0x10400000-0x105fffff]
Jan 23 00:06:53.819679 kernel: pci 0000:00:02.6: bridge window [mem 0x8000500000-0x80005fffff 64bit pref]
Jan 23 00:06:53.819749 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 00:06:53.819809 kernel: pci 0000:00:02.7: BAR 0 [mem 0x11281000-0x11281fff]
Jan 23 00:06:53.819875 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Jan 23 00:06:53.819933 kernel: pci 0000:00:02.7: bridge window [mem 0x10200000-0x103fffff]
Jan 23 00:06:53.819999 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 00:06:53.820068 kernel: pci 0000:00:03.0: BAR 0 [mem 0x11280000-0x11280fff]
Jan 23 00:06:53.820429 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Jan 23 00:06:53.820493 kernel: pci 0000:00:03.0: bridge window [mem 0x10000000-0x101fffff]
Jan 23 00:06:53.820586 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002 conventional PCI endpoint
Jan 23 00:06:53.820656 kernel: pci 0000:00:04.0: BAR 0 [io 0x0000-0x0007]
Jan 23 00:06:53.820733 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000 PCIe Endpoint
Jan 23 00:06:53.820796 kernel: pci 0000:01:00.0: BAR 1 [mem 0x11000000-0x11000fff]
Jan 23 00:06:53.820857 kernel: pci 0000:01:00.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 23 00:06:53.820917 kernel: pci 0000:01:00.0: ROM [mem 0xfff80000-0xffffffff pref]
Jan 23 00:06:53.820984 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330 PCIe Endpoint
Jan 23 00:06:53.821044 kernel: pci 0000:02:00.0: BAR 0 [mem 0x10e00000-0x10e03fff 64bit]
Jan 23 00:06:53.821175 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000 PCIe Endpoint
Jan 23 00:06:53.821242 kernel: pci 0000:03:00.0: BAR 1 [mem 0x10c00000-0x10c00fff]
Jan 23 00:06:53.821309 kernel: pci 0000:03:00.0: BAR 4 [mem 0x8000100000-0x8000103fff 64bit pref]
Jan 23 00:06:53.821379 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00 PCIe Endpoint
Jan 23 00:06:53.821441 kernel: pci 0000:04:00.0: BAR 4 [mem 0x8000200000-0x8000203fff 64bit pref]
Jan 23 00:06:53.821508 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00 PCIe Endpoint
Jan 23 00:06:53.821590 kernel: pci 0000:05:00.0: BAR 1 [mem 0x10800000-0x10800fff]
Jan 23 00:06:53.821655 kernel: pci 0000:05:00.0: BAR 4 [mem 0x8000300000-0x8000303fff 64bit pref]
Jan 23 00:06:53.821724 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000 PCIe Endpoint
Jan 23 00:06:53.821784 kernel: pci 0000:06:00.0: BAR 1 [mem 0x10600000-0x10600fff]
Jan 23 00:06:53.821845 kernel: pci 0000:06:00.0: BAR 4 [mem 0x8000400000-0x8000403fff 64bit pref]
Jan 23 00:06:53.821914 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000 PCIe Endpoint
Jan 23 00:06:53.821976 kernel: pci 0000:07:00.0: BAR 1 [mem 0x10400000-0x10400fff]
Jan 23 00:06:53.822038 kernel: pci 0000:07:00.0: BAR 4 [mem 0x8000500000-0x8000503fff 64bit pref]
Jan 23 00:06:53.824132 kernel: pci 0000:07:00.0: ROM [mem 0xfff80000-0xffffffff pref]
Jan 23 00:06:53.824251 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
Jan 23 00:06:53.824315 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000
Jan 23 00:06:53.824375 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000
Jan 23 00:06:53.824438 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
Jan 23 00:06:53.824497 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
Jan 23 00:06:53.824613 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000
Jan 23 00:06:53.824683 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Jan 23 00:06:53.824743 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000
Jan 23 00:06:53.824801 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000
Jan 23 00:06:53.824863 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Jan 23 00:06:53.824922 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000
Jan 23 00:06:53.824980 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
Jan 23 00:06:53.825046 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Jan 23 00:06:53.825155 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000
Jan 23 00:06:53.825220 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff] to [bus 05] add_size 100000 add_align 100000
Jan 23 00:06:53.825283 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Jan 23 00:06:53.825343 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000
Jan 23 00:06:53.825400 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000
Jan 23 00:06:53.825464 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Jan 23 00:06:53.825524 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000
Jan 23 00:06:53.825605 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000
Jan 23 00:06:53.825670 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Jan 23 00:06:53.825729 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000
Jan 23 00:06:53.825788 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000
Jan 23 00:06:53.825849 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Jan 23 00:06:53.825910 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000
Jan 23 00:06:53.825968 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000
Jan 23 00:06:53.826028 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff]: assigned
Jan 23 00:06:53.826187 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]: assigned
Jan 23 00:06:53.827918 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff]: assigned
Jan 23 00:06:53.827998 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]: assigned
Jan 23 00:06:53.828062 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff]: assigned
Jan 23 00:06:53.828148 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]: assigned
Jan 23 00:06:53.828219 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff]: assigned
Jan 23 00:06:53.828278 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]: assigned
Jan 23 00:06:53.828339 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]: assigned
Jan 23 00:06:53.828398 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]: assigned
Jan 23 00:06:53.828460 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff]: assigned
Jan 23 00:06:53.828519 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]: assigned
Jan 23 00:06:53.828598 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff]: assigned
Jan 23 00:06:53.828660 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]: assigned
Jan 23 00:06:53.828722 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff]: assigned
Jan 23 00:06:53.828779 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]: assigned
Jan 23 00:06:53.828839 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff]: assigned
Jan 23 00:06:53.828897 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref]: assigned
Jan 23 00:06:53.828958 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8001200000-0x8001203fff 64bit pref]: assigned
Jan 23 00:06:53.829016 kernel: pci 0000:00:01.0: BAR 1 [mem 0x11200000-0x11200fff]: assigned
Jan 23 00:06:53.829098 kernel: pci 0000:00:02.0: BAR 0 [mem 0x11201000-0x11201fff]: assigned
Jan 23 00:06:53.829167 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]: assigned
Jan 23 00:06:53.829227 kernel: pci 0000:00:02.1: BAR 0 [mem 0x11202000-0x11202fff]: assigned
Jan 23 00:06:53.829285 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]: assigned
Jan 23 00:06:53.829346 kernel: pci 0000:00:02.2: BAR 0 [mem 0x11203000-0x11203fff]: assigned
Jan 23 00:06:53.829407 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]: assigned
Jan 23 00:06:53.829466 kernel: pci 0000:00:02.3: BAR 0 [mem 0x11204000-0x11204fff]: assigned
Jan 23 00:06:53.829524 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]: assigned
Jan 23 00:06:53.829599 kernel: pci 0000:00:02.4: BAR 0 [mem 0x11205000-0x11205fff]: assigned
Jan 23 00:06:53.829659 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]: assigned
Jan 23 00:06:53.829718 kernel: pci 0000:00:02.5: BAR 0 [mem 0x11206000-0x11206fff]: assigned
Jan 23 00:06:53.829776 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]: assigned
Jan 23 00:06:53.829835 kernel: pci 0000:00:02.6: BAR 0 [mem 0x11207000-0x11207fff]: assigned
Jan 23 00:06:53.829902 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]: assigned
Jan 23 00:06:53.829961 kernel: pci 0000:00:02.7: BAR 0 [mem 0x11208000-0x11208fff]: assigned
Jan 23 00:06:53.830019 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]: assigned
Jan 23 00:06:53.830120 kernel: pci 0000:00:03.0: BAR 0 [mem 0x11209000-0x11209fff]: assigned
Jan 23 00:06:53.830203 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff]: assigned
Jan 23 00:06:53.830272 kernel: pci 0000:00:04.0: BAR 0 [io 0xa000-0xa007]: assigned
Jan 23 00:06:53.830341 kernel: pci 0000:01:00.0: ROM [mem 0x10000000-0x1007ffff pref]: assigned
Jan 23 00:06:53.830403 kernel: pci 0000:01:00.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
Jan 23 00:06:53.830467 kernel: pci 0000:01:00.0: BAR 1 [mem 0x10080000-0x10080fff]: assigned
Jan 23 00:06:53.830527 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Jan 23 00:06:53.830630 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Jan 23 00:06:53.830691 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff]
Jan 23 00:06:53.830750 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]
Jan 23 00:06:53.830816 kernel: pci 0000:02:00.0: BAR 0 [mem 0x10200000-0x10203fff 64bit]: assigned
Jan 23 00:06:53.830875 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Jan 23 00:06:53.830938 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Jan 23 00:06:53.830997 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff]
Jan 23 00:06:53.831056 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]
Jan 23 00:06:53.831588 kernel: pci 0000:03:00.0: BAR 4 [mem 0x8000400000-0x8000403fff 64bit pref]: assigned
Jan 23 00:06:53.831668 kernel: pci 0000:03:00.0: BAR 1 [mem 0x10400000-0x10400fff]: assigned
Jan 23 00:06:53.831730 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Jan 23 00:06:53.831790 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Jan 23 00:06:53.831854 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff]
Jan 23 00:06:53.831913 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]
Jan 23 00:06:53.831982 kernel: pci 0000:04:00.0: BAR 4 [mem 0x8000600000-0x8000603fff 64bit pref]: assigned
Jan 23 00:06:53.832044 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Jan 23 00:06:53.832120 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Jan 23 00:06:53.832181 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff]
Jan 23 00:06:53.832240 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]
Jan 23 00:06:53.832310 kernel: pci 0000:05:00.0: BAR 4 [mem 0x8000800000-0x8000803fff 64bit pref]: assigned
Jan 23 00:06:53.832371 kernel: pci 0000:05:00.0: BAR 1 [mem 0x10800000-0x10800fff]: assigned
Jan 23 00:06:53.832431 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Jan 23 00:06:53.832490 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Jan 23 00:06:53.832578 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]
Jan 23 00:06:53.832640 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]
Jan 23 00:06:53.832709 kernel: pci 0000:06:00.0: BAR 4 [mem 0x8000a00000-0x8000a03fff 64bit pref]: assigned
Jan 23 00:06:53.832775 kernel: pci 0000:06:00.0: BAR 1 [mem 0x10a00000-0x10a00fff]: assigned
Jan 23 00:06:53.832838 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Jan 23 00:06:53.832912 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Jan 23 00:06:53.832979 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff]
Jan 23 00:06:53.833041 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]
Jan 23 00:06:53.834433 kernel: pci 0000:07:00.0: ROM [mem 0x10c00000-0x10c7ffff pref]: assigned
Jan 23 00:06:53.834511 kernel: pci 0000:07:00.0: BAR 4 [mem 0x8000c00000-0x8000c03fff 64bit pref]: assigned
Jan 23 00:06:53.834596 kernel: pci 0000:07:00.0: BAR 1 [mem 0x10c80000-0x10c80fff]: assigned
Jan 23 00:06:53.834661 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Jan 23 00:06:53.834725 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Jan 23 00:06:53.834784 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff]
Jan 23 00:06:53.834844 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]
Jan 23 00:06:53.834905 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Jan 23 00:06:53.834968 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Jan 23 00:06:53.835028 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff]
Jan 23 00:06:53.835108 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]
Jan 23 00:06:53.835176 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Jan 23 00:06:53.835236 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff]
Jan 23
00:06:53.835295 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff] Jan 23 00:06:53.835357 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref] Jan 23 00:06:53.835419 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Jan 23 00:06:53.835473 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Jan 23 00:06:53.835525 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Jan 23 00:06:53.835609 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff] Jan 23 00:06:53.835674 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff] Jan 23 00:06:53.835732 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref] Jan 23 00:06:53.835800 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff] Jan 23 00:06:53.835856 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff] Jan 23 00:06:53.835911 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref] Jan 23 00:06:53.835973 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff] Jan 23 00:06:53.836029 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff] Jan 23 00:06:53.836462 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref] Jan 23 00:06:53.836599 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff] Jan 23 00:06:53.836664 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff] Jan 23 00:06:53.836720 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref] Jan 23 00:06:53.836783 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff] Jan 23 00:06:53.836837 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff] Jan 23 00:06:53.836891 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref] Jan 23 00:06:53.837419 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff] Jan 23 00:06:53.837515 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff] Jan 23 00:06:53.837594 kernel: 
pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref] Jan 23 00:06:53.837666 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff] Jan 23 00:06:53.837722 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff] Jan 23 00:06:53.837782 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref] Jan 23 00:06:53.837846 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff] Jan 23 00:06:53.837906 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff] Jan 23 00:06:53.837961 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref] Jan 23 00:06:53.838021 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff] Jan 23 00:06:53.838120 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff] Jan 23 00:06:53.838188 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref] Jan 23 00:06:53.838198 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Jan 23 00:06:53.838206 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Jan 23 00:06:53.838217 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Jan 23 00:06:53.838225 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Jan 23 00:06:53.838233 kernel: iommu: Default domain type: Translated Jan 23 00:06:53.838241 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jan 23 00:06:53.838248 kernel: efivars: Registered efivars operations Jan 23 00:06:53.838257 kernel: vgaarb: loaded Jan 23 00:06:53.838265 kernel: clocksource: Switched to clocksource arch_sys_counter Jan 23 00:06:53.838272 kernel: VFS: Disk quotas dquot_6.6.0 Jan 23 00:06:53.838280 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 23 00:06:53.838289 kernel: pnp: PnP ACPI init Jan 23 00:06:53.838364 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Jan 23 00:06:53.838375 kernel: pnp: PnP ACPI: found 1 devices Jan 23 00:06:53.838383 kernel: NET: Registered PF_INET 
protocol family Jan 23 00:06:53.838390 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 23 00:06:53.838398 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 23 00:06:53.838405 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 23 00:06:53.838413 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 23 00:06:53.838422 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 23 00:06:53.838430 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 23 00:06:53.838437 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 23 00:06:53.838445 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 23 00:06:53.838453 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 23 00:06:53.838520 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002) Jan 23 00:06:53.838542 kernel: PCI: CLS 0 bytes, default 64 Jan 23 00:06:53.838550 kernel: kvm [1]: HYP mode not available Jan 23 00:06:53.838557 kernel: Initialise system trusted keyrings Jan 23 00:06:53.838567 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 23 00:06:53.838575 kernel: Key type asymmetric registered Jan 23 00:06:53.838582 kernel: Asymmetric key parser 'x509' registered Jan 23 00:06:53.838590 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Jan 23 00:06:53.838597 kernel: io scheduler mq-deadline registered Jan 23 00:06:53.838605 kernel: io scheduler kyber registered Jan 23 00:06:53.838612 kernel: io scheduler bfq registered Jan 23 00:06:53.838620 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Jan 23 00:06:53.838688 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50 Jan 23 00:06:53.838751 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50 Jan 23 00:06:53.838811 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ 
MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 23 00:06:53.838871 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51 Jan 23 00:06:53.838930 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51 Jan 23 00:06:53.838989 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 23 00:06:53.839050 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 52 Jan 23 00:06:53.839147 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52 Jan 23 00:06:53.839210 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 23 00:06:53.839277 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53 Jan 23 00:06:53.839336 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53 Jan 23 00:06:53.839395 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 23 00:06:53.839460 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54 Jan 23 00:06:53.839521 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54 Jan 23 00:06:53.839627 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 23 00:06:53.839692 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55 Jan 23 00:06:53.839752 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55 Jan 23 00:06:53.840296 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 23 00:06:53.840375 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56 Jan 23 00:06:53.840436 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56 Jan 23 00:06:53.840495 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ 
PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 23 00:06:53.840585 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57 Jan 23 00:06:53.840648 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57 Jan 23 00:06:53.840746 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 23 00:06:53.840764 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38 Jan 23 00:06:53.840827 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58 Jan 23 00:06:53.841227 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58 Jan 23 00:06:53.841292 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 23 00:06:53.841302 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Jan 23 00:06:53.841310 kernel: ACPI: button: Power Button [PWRB] Jan 23 00:06:53.841318 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Jan 23 00:06:53.841382 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002) Jan 23 00:06:53.841448 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002) Jan 23 00:06:53.841463 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 23 00:06:53.841471 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Jan 23 00:06:53.841574 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001) Jan 23 00:06:53.841589 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A Jan 23 00:06:53.841597 kernel: thunder_xcv, ver 1.0 Jan 23 00:06:53.841608 kernel: thunder_bgx, ver 1.0 Jan 23 00:06:53.841615 kernel: nicpf, ver 1.0 Jan 23 00:06:53.841622 kernel: nicvf, ver 1.0 Jan 23 00:06:53.841707 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jan 23 00:06:53.841768 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-01-23T00:06:53 UTC (1769126813) Jan 23 00:06:53.841778 kernel: hid: raw HID events 
driver (C) Jiri Kosina Jan 23 00:06:53.841786 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available Jan 23 00:06:53.841793 kernel: NET: Registered PF_INET6 protocol family Jan 23 00:06:53.841800 kernel: watchdog: NMI not fully supported Jan 23 00:06:53.841808 kernel: watchdog: Hard watchdog permanently disabled Jan 23 00:06:53.841815 kernel: Segment Routing with IPv6 Jan 23 00:06:53.841822 kernel: In-situ OAM (IOAM) with IPv6 Jan 23 00:06:53.841832 kernel: NET: Registered PF_PACKET protocol family Jan 23 00:06:53.841839 kernel: Key type dns_resolver registered Jan 23 00:06:53.841846 kernel: registered taskstats version 1 Jan 23 00:06:53.841854 kernel: Loading compiled-in X.509 certificates Jan 23 00:06:53.841861 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.66-flatcar: 380753d9165686712e58c1d21e00c0268e70f18f' Jan 23 00:06:53.841869 kernel: Demotion targets for Node 0: null Jan 23 00:06:53.841876 kernel: Key type .fscrypt registered Jan 23 00:06:53.841883 kernel: Key type fscrypt-provisioning registered Jan 23 00:06:53.841891 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 23 00:06:53.841900 kernel: ima: Allocated hash algorithm: sha1 Jan 23 00:06:53.841907 kernel: ima: No architecture policies found Jan 23 00:06:53.841914 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jan 23 00:06:53.841922 kernel: clk: Disabling unused clocks Jan 23 00:06:53.841929 kernel: PM: genpd: Disabling unused power domains Jan 23 00:06:53.841936 kernel: Warning: unable to open an initial console. Jan 23 00:06:53.841944 kernel: Freeing unused kernel memory: 39552K Jan 23 00:06:53.841951 kernel: Run /init as init process Jan 23 00:06:53.841959 kernel: with arguments: Jan 23 00:06:53.841968 kernel: /init Jan 23 00:06:53.841975 kernel: with environment: Jan 23 00:06:53.841982 kernel: HOME=/ Jan 23 00:06:53.841989 kernel: TERM=linux Jan 23 00:06:53.841998 systemd[1]: Successfully made /usr/ read-only. 
Jan 23 00:06:53.842009 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 23 00:06:53.842017 systemd[1]: Detected virtualization kvm. Jan 23 00:06:53.842026 systemd[1]: Detected architecture arm64. Jan 23 00:06:53.842034 systemd[1]: Running in initrd. Jan 23 00:06:53.842041 systemd[1]: No hostname configured, using default hostname. Jan 23 00:06:53.842049 systemd[1]: Hostname set to . Jan 23 00:06:53.842057 systemd[1]: Initializing machine ID from VM UUID. Jan 23 00:06:53.842064 systemd[1]: Queued start job for default target initrd.target. Jan 23 00:06:53.842072 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 00:06:53.842123 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 00:06:53.842142 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 23 00:06:53.842153 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 23 00:06:53.842161 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 23 00:06:53.842170 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 23 00:06:53.842179 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 23 00:06:53.842188 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 23 00:06:53.842196 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Jan 23 00:06:53.842206 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 23 00:06:53.842214 systemd[1]: Reached target paths.target - Path Units. Jan 23 00:06:53.842222 systemd[1]: Reached target slices.target - Slice Units. Jan 23 00:06:53.842230 systemd[1]: Reached target swap.target - Swaps. Jan 23 00:06:53.842238 systemd[1]: Reached target timers.target - Timer Units. Jan 23 00:06:53.842246 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 23 00:06:53.842254 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 23 00:06:53.842262 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 23 00:06:53.842270 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jan 23 00:06:53.842279 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 23 00:06:53.842287 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 23 00:06:53.842295 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 00:06:53.842303 systemd[1]: Reached target sockets.target - Socket Units. Jan 23 00:06:53.842311 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 23 00:06:53.842319 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 23 00:06:53.842328 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 23 00:06:53.842337 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jan 23 00:06:53.842346 systemd[1]: Starting systemd-fsck-usr.service... Jan 23 00:06:53.842354 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 23 00:06:53.842362 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... 
Jan 23 00:06:53.842370 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 00:06:53.842378 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 23 00:06:53.842387 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 00:06:53.842396 systemd[1]: Finished systemd-fsck-usr.service. Jan 23 00:06:53.842405 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 23 00:06:53.842439 systemd-journald[245]: Collecting audit messages is disabled. Jan 23 00:06:53.842462 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 00:06:53.842471 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 23 00:06:53.842480 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 23 00:06:53.842488 kernel: Bridge firewalling registered Jan 23 00:06:53.842496 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 23 00:06:53.842504 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 23 00:06:53.842513 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 23 00:06:53.842521 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 23 00:06:53.842541 systemd-journald[245]: Journal started Jan 23 00:06:53.842560 systemd-journald[245]: Runtime Journal (/run/log/journal/07a7becbb3824cc2918302662203c6f5) is 8M, max 76.5M, 68.5M free. Jan 23 00:06:53.794726 systemd-modules-load[247]: Inserted module 'overlay' Jan 23 00:06:53.845307 systemd[1]: Started systemd-journald.service - Journal Service. 
Jan 23 00:06:53.814591 systemd-modules-load[247]: Inserted module 'br_netfilter' Jan 23 00:06:53.849807 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 23 00:06:53.852810 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 23 00:06:53.857203 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 23 00:06:53.859189 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 23 00:06:53.863313 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 23 00:06:53.870172 systemd-tmpfiles[281]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jan 23 00:06:53.874021 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 00:06:53.876204 dracut-cmdline[279]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=38aa0560e146398cb8c3378a56d449784f1c7652139d7b61279d764fcc4c793a Jan 23 00:06:53.880243 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 23 00:06:53.918341 systemd-resolved[306]: Positive Trust Anchors: Jan 23 00:06:53.918911 systemd-resolved[306]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 23 00:06:53.918944 systemd-resolved[306]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 23 00:06:53.928854 systemd-resolved[306]: Defaulting to hostname 'linux'. Jan 23 00:06:53.930432 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 23 00:06:53.931880 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 23 00:06:53.945109 kernel: SCSI subsystem initialized Jan 23 00:06:53.949213 kernel: Loading iSCSI transport class v2.0-870. Jan 23 00:06:53.961158 kernel: iscsi: registered transport (tcp) Jan 23 00:06:53.974130 kernel: iscsi: registered transport (qla4xxx) Jan 23 00:06:53.974179 kernel: QLogic iSCSI HBA Driver Jan 23 00:06:53.997035 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 23 00:06:54.026503 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 23 00:06:54.030631 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 23 00:06:54.087389 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 23 00:06:54.089429 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... 
Jan 23 00:06:54.167146 kernel: raid6: neonx8 gen() 15679 MB/s Jan 23 00:06:54.184157 kernel: raid6: neonx4 gen() 15745 MB/s Jan 23 00:06:54.201116 kernel: raid6: neonx2 gen() 13166 MB/s Jan 23 00:06:54.218126 kernel: raid6: neonx1 gen() 10395 MB/s Jan 23 00:06:54.235147 kernel: raid6: int64x8 gen() 6890 MB/s Jan 23 00:06:54.252142 kernel: raid6: int64x4 gen() 7325 MB/s Jan 23 00:06:54.269148 kernel: raid6: int64x2 gen() 6086 MB/s Jan 23 00:06:54.286262 kernel: raid6: int64x1 gen() 5033 MB/s Jan 23 00:06:54.286328 kernel: raid6: using algorithm neonx4 gen() 15745 MB/s Jan 23 00:06:54.303161 kernel: raid6: .... xor() 12313 MB/s, rmw enabled Jan 23 00:06:54.303239 kernel: raid6: using neon recovery algorithm Jan 23 00:06:54.308264 kernel: xor: measuring software checksum speed Jan 23 00:06:54.308340 kernel: 8regs : 20697 MB/sec Jan 23 00:06:54.308367 kernel: 32regs : 21699 MB/sec Jan 23 00:06:54.308403 kernel: arm64_neon : 26989 MB/sec Jan 23 00:06:54.309127 kernel: xor: using function: arm64_neon (26989 MB/sec) Jan 23 00:06:54.362123 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 23 00:06:54.371121 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 23 00:06:54.375218 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 00:06:54.407312 systemd-udevd[495]: Using default interface naming scheme 'v255'. Jan 23 00:06:54.411588 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 00:06:54.415258 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 23 00:06:54.435324 dracut-pre-trigger[506]: rd.md=0: removing MD RAID activation Jan 23 00:06:54.463788 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 23 00:06:54.466837 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 23 00:06:54.538282 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. 
Jan 23 00:06:54.541873 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 23 00:06:54.649490 kernel: ACPI: bus type USB registered Jan 23 00:06:54.649587 kernel: usbcore: registered new interface driver usbfs Jan 23 00:06:54.652123 kernel: virtio_scsi virtio5: 2/0/0 default/read/poll queues Jan 23 00:06:54.654379 kernel: scsi host0: Virtio SCSI HBA Jan 23 00:06:54.662265 kernel: usbcore: registered new interface driver hub Jan 23 00:06:54.662304 kernel: usbcore: registered new device driver usb Jan 23 00:06:54.664136 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5 Jan 23 00:06:54.664199 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Jan 23 00:06:54.684735 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 00:06:54.685440 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 00:06:54.687954 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 00:06:54.691639 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Jan 23 00:06:54.691818 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Jan 23 00:06:54.692010 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Jan 23 00:06:54.696770 kernel: sd 0:0:0:1: Power-on or device reset occurred Jan 23 00:06:54.698308 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB) Jan 23 00:06:54.698645 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Jan 23 00:06:54.699375 kernel: sd 0:0:0:1: [sda] Write Protect is off Jan 23 00:06:54.699554 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08 Jan 23 00:06:54.700883 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jan 23 00:06:54.701202 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Jan 23 00:06:54.703459 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Jan 23 00:06:54.703672 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Jan 23 00:06:54.704313 kernel: hub 1-0:1.0: USB hub found Jan 23 00:06:54.704472 kernel: hub 1-0:1.0: 4 ports detected Jan 23 00:06:54.705168 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Jan 23 00:06:54.706104 kernel: hub 2-0:1.0: USB hub found Jan 23 00:06:54.706257 kernel: hub 2-0:1.0: 4 ports detected Jan 23 00:06:54.711205 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 23 00:06:54.711274 kernel: GPT:17805311 != 80003071 Jan 23 00:06:54.711286 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 23 00:06:54.711297 kernel: GPT:17805311 != 80003071 Jan 23 00:06:54.711307 kernel: GPT: Use GNU Parted to correct GPT errors. 
Jan 23 00:06:54.711317 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 23 00:06:54.712098 kernel: sd 0:0:0:1: [sda] Attached SCSI disk Jan 23 00:06:54.719674 kernel: sr 0:0:0:0: Power-on or device reset occurred Jan 23 00:06:54.723116 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray Jan 23 00:06:54.723288 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 23 00:06:54.724102 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0 Jan 23 00:06:54.731613 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 00:06:54.788704 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Jan 23 00:06:54.804732 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Jan 23 00:06:54.812015 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Jan 23 00:06:54.812776 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Jan 23 00:06:54.821058 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Jan 23 00:06:54.823394 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 23 00:06:54.839872 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 23 00:06:54.841346 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 23 00:06:54.842442 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 00:06:54.844063 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 23 00:06:54.847332 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 23 00:06:54.857151 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 23 00:06:54.857197 disk-uuid[602]: Primary Header is updated. Jan 23 00:06:54.857197 disk-uuid[602]: Secondary Entries is updated. 
Jan 23 00:06:54.857197 disk-uuid[602]: Secondary Header is updated. Jan 23 00:06:54.883422 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 23 00:06:54.948121 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Jan 23 00:06:55.079708 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1 Jan 23 00:06:55.079766 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Jan 23 00:06:55.080347 kernel: usbcore: registered new interface driver usbhid Jan 23 00:06:55.081098 kernel: usbhid: USB HID core driver Jan 23 00:06:55.186130 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd Jan 23 00:06:55.313112 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2 Jan 23 00:06:55.366967 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0 Jan 23 00:06:55.888130 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 23 00:06:55.889726 disk-uuid[604]: The operation has completed successfully. Jan 23 00:06:55.943202 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 23 00:06:55.943294 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 23 00:06:55.975593 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 23 00:06:56.002116 sh[627]: Success Jan 23 00:06:56.017137 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Jan 23 00:06:56.017189 kernel: device-mapper: uevent: version 1.0.3 Jan 23 00:06:56.017201 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jan 23 00:06:56.029115 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Jan 23 00:06:56.084024 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 23 00:06:56.092253 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 23 00:06:56.098029 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 23 00:06:56.120118 kernel: BTRFS: device fsid 97a43946-ed04-45c1-a355-c0350e8b973e devid 1 transid 38 /dev/mapper/usr (254:0) scanned by mount (639) Jan 23 00:06:56.122127 kernel: BTRFS info (device dm-0): first mount of filesystem 97a43946-ed04-45c1-a355-c0350e8b973e Jan 23 00:06:56.122202 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jan 23 00:06:56.130153 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 23 00:06:56.130230 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 23 00:06:56.130256 kernel: BTRFS info (device dm-0): enabling free space tree Jan 23 00:06:56.133031 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 23 00:06:56.133672 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jan 23 00:06:56.135111 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 23 00:06:56.135900 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 23 00:06:56.140837 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Jan 23 00:06:56.173135 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (669)
Jan 23 00:06:56.173186 kernel: BTRFS info (device sda6): first mount of filesystem e9ae44b3-0aec-43ca-ad8b-9cf4e242132f
Jan 23 00:06:56.174395 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jan 23 00:06:56.179194 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 23 00:06:56.179233 kernel: BTRFS info (device sda6): turning on async discard
Jan 23 00:06:56.179243 kernel: BTRFS info (device sda6): enabling free space tree
Jan 23 00:06:56.184128 kernel: BTRFS info (device sda6): last unmount of filesystem e9ae44b3-0aec-43ca-ad8b-9cf4e242132f
Jan 23 00:06:56.185942 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 23 00:06:56.190258 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 23 00:06:56.282819 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 23 00:06:56.286580 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 23 00:06:56.325187 ignition[723]: Ignition 2.22.0
Jan 23 00:06:56.325198 ignition[723]: Stage: fetch-offline
Jan 23 00:06:56.325226 ignition[723]: no configs at "/usr/lib/ignition/base.d"
Jan 23 00:06:56.325235 ignition[723]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 23 00:06:56.325311 ignition[723]: parsed url from cmdline: ""
Jan 23 00:06:56.325314 ignition[723]: no config URL provided
Jan 23 00:06:56.325318 ignition[723]: reading system config file "/usr/lib/ignition/user.ign"
Jan 23 00:06:56.325324 ignition[723]: no config at "/usr/lib/ignition/user.ign"
Jan 23 00:06:56.329364 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 23 00:06:56.325329 ignition[723]: failed to fetch config: resource requires networking
Jan 23 00:06:56.331413 systemd-networkd[812]: lo: Link UP
Jan 23 00:06:56.325472 ignition[723]: Ignition finished successfully
Jan 23 00:06:56.331416 systemd-networkd[812]: lo: Gained carrier
Jan 23 00:06:56.333443 systemd-networkd[812]: Enumeration completed
Jan 23 00:06:56.333842 systemd-networkd[812]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 23 00:06:56.333845 systemd-networkd[812]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 23 00:06:56.334251 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 23 00:06:56.335244 systemd[1]: Reached target network.target - Network.
Jan 23 00:06:56.335584 systemd-networkd[812]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 23 00:06:56.335587 systemd-networkd[812]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 23 00:06:56.336059 systemd-networkd[812]: eth0: Link UP
Jan 23 00:06:56.336246 systemd-networkd[812]: eth1: Link UP
Jan 23 00:06:56.336365 systemd-networkd[812]: eth0: Gained carrier
Jan 23 00:06:56.336375 systemd-networkd[812]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 23 00:06:56.340210 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 23 00:06:56.341801 systemd-networkd[812]: eth1: Gained carrier
Jan 23 00:06:56.341814 systemd-networkd[812]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 23 00:06:56.370662 ignition[818]: Ignition 2.22.0
Jan 23 00:06:56.370674 ignition[818]: Stage: fetch
Jan 23 00:06:56.370804 ignition[818]: no configs at "/usr/lib/ignition/base.d"
Jan 23 00:06:56.370812 ignition[818]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 23 00:06:56.370883 ignition[818]: parsed url from cmdline: ""
Jan 23 00:06:56.370886 ignition[818]: no config URL provided
Jan 23 00:06:56.370890 ignition[818]: reading system config file "/usr/lib/ignition/user.ign"
Jan 23 00:06:56.370896 ignition[818]: no config at "/usr/lib/ignition/user.ign"
Jan 23 00:06:56.370938 ignition[818]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1
Jan 23 00:06:56.371661 ignition[818]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable
Jan 23 00:06:56.385173 systemd-networkd[812]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1
Jan 23 00:06:56.402169 systemd-networkd[812]: eth0: DHCPv4 address 188.245.94.123/32, gateway 172.31.1.1 acquired from 172.31.1.1
Jan 23 00:06:56.572452 ignition[818]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2
Jan 23 00:06:56.581186 ignition[818]: GET result: OK
Jan 23 00:06:56.581950 ignition[818]: parsing config with SHA512: 58cd3a0691f951fb6f8f6e18a8cf4a036908b0f820b0775cfd08d8f475bc25ee7bcad7d5235da779179081d18fccabbf236d0c9ac099772c7f427b73a69c5be1
Jan 23 00:06:56.590974 unknown[818]: fetched base config from "system"
Jan 23 00:06:56.590984 unknown[818]: fetched base config from "system"
Jan 23 00:06:56.591755 ignition[818]: fetch: fetch complete
Jan 23 00:06:56.590992 unknown[818]: fetched user config from "hetzner"
Jan 23 00:06:56.591765 ignition[818]: fetch: fetch passed
Jan 23 00:06:56.591819 ignition[818]: Ignition finished successfully
Jan 23 00:06:56.598051 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 23 00:06:56.600463 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 23 00:06:56.633486 ignition[825]: Ignition 2.22.0
Jan 23 00:06:56.633518 ignition[825]: Stage: kargs
Jan 23 00:06:56.633671 ignition[825]: no configs at "/usr/lib/ignition/base.d"
Jan 23 00:06:56.633681 ignition[825]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 23 00:06:56.634723 ignition[825]: kargs: kargs passed
Jan 23 00:06:56.634778 ignition[825]: Ignition finished successfully
Jan 23 00:06:56.637710 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 23 00:06:56.640282 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 23 00:06:56.669490 ignition[832]: Ignition 2.22.0
Jan 23 00:06:56.669518 ignition[832]: Stage: disks
Jan 23 00:06:56.669661 ignition[832]: no configs at "/usr/lib/ignition/base.d"
Jan 23 00:06:56.669671 ignition[832]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 23 00:06:56.670460 ignition[832]: disks: disks passed
Jan 23 00:06:56.670539 ignition[832]: Ignition finished successfully
Jan 23 00:06:56.673984 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 23 00:06:56.674776 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 23 00:06:56.675405 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 23 00:06:56.675998 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 23 00:06:56.676580 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 23 00:06:56.677057 systemd[1]: Reached target basic.target - Basic System.
Jan 23 00:06:56.678377 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 23 00:06:56.719131 systemd-fsck[840]: ROOT: clean, 15/1628000 files, 120826/1617920 blocks
Jan 23 00:06:56.724174 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 23 00:06:56.726724 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 23 00:06:56.796149 kernel: EXT4-fs (sda9): mounted filesystem f31390ab-27e9-47d9-a374-053913301d53 r/w with ordered data mode. Quota mode: none.
Jan 23 00:06:56.798419 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 23 00:06:56.801156 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 23 00:06:56.804174 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 23 00:06:56.806175 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 23 00:06:56.809705 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jan 23 00:06:56.812910 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 23 00:06:56.814153 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 23 00:06:56.819480 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 23 00:06:56.822781 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 23 00:06:56.831109 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (848)
Jan 23 00:06:56.833127 kernel: BTRFS info (device sda6): first mount of filesystem e9ae44b3-0aec-43ca-ad8b-9cf4e242132f
Jan 23 00:06:56.833180 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jan 23 00:06:56.844099 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 23 00:06:56.844144 kernel: BTRFS info (device sda6): turning on async discard
Jan 23 00:06:56.844156 kernel: BTRFS info (device sda6): enabling free space tree
Jan 23 00:06:56.851886 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 23 00:06:56.869336 coreos-metadata[850]: Jan 23 00:06:56.869 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1
Jan 23 00:06:56.871015 coreos-metadata[850]: Jan 23 00:06:56.870 INFO Fetch successful
Jan 23 00:06:56.871015 coreos-metadata[850]: Jan 23 00:06:56.870 INFO wrote hostname ci-4459-2-2-n-105ad3c88f to /sysroot/etc/hostname
Jan 23 00:06:56.873574 initrd-setup-root[875]: cut: /sysroot/etc/passwd: No such file or directory
Jan 23 00:06:56.876167 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 23 00:06:56.881068 initrd-setup-root[883]: cut: /sysroot/etc/group: No such file or directory
Jan 23 00:06:56.886665 initrd-setup-root[890]: cut: /sysroot/etc/shadow: No such file or directory
Jan 23 00:06:56.892871 initrd-setup-root[897]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 23 00:06:56.990227 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 23 00:06:56.992172 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 23 00:06:56.993657 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 23 00:06:57.019109 kernel: BTRFS info (device sda6): last unmount of filesystem e9ae44b3-0aec-43ca-ad8b-9cf4e242132f
Jan 23 00:06:57.039126 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 23 00:06:57.050437 ignition[967]: INFO : Ignition 2.22.0
Jan 23 00:06:57.050437 ignition[967]: INFO : Stage: mount
Jan 23 00:06:57.051455 ignition[967]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 23 00:06:57.051455 ignition[967]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 23 00:06:57.052705 ignition[967]: INFO : mount: mount passed
Jan 23 00:06:57.052705 ignition[967]: INFO : Ignition finished successfully
Jan 23 00:06:57.054005 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 23 00:06:57.056756 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 23 00:06:57.122167 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 23 00:06:57.126024 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 23 00:06:57.156126 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (978)
Jan 23 00:06:57.157199 kernel: BTRFS info (device sda6): first mount of filesystem e9ae44b3-0aec-43ca-ad8b-9cf4e242132f
Jan 23 00:06:57.157241 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jan 23 00:06:57.161467 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 23 00:06:57.161537 kernel: BTRFS info (device sda6): turning on async discard
Jan 23 00:06:57.161551 kernel: BTRFS info (device sda6): enabling free space tree
Jan 23 00:06:57.164070 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 23 00:06:57.194305 ignition[995]: INFO : Ignition 2.22.0
Jan 23 00:06:57.196370 ignition[995]: INFO : Stage: files
Jan 23 00:06:57.196370 ignition[995]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 23 00:06:57.196370 ignition[995]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 23 00:06:57.196370 ignition[995]: DEBUG : files: compiled without relabeling support, skipping
Jan 23 00:06:57.202117 ignition[995]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 23 00:06:57.202117 ignition[995]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 23 00:06:57.202117 ignition[995]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 23 00:06:57.202117 ignition[995]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 23 00:06:57.206859 ignition[995]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 23 00:06:57.203912 unknown[995]: wrote ssh authorized keys file for user: core
Jan 23 00:06:57.208695 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Jan 23 00:06:57.208695 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
Jan 23 00:06:57.292982 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 23 00:06:57.367329 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Jan 23 00:06:57.367329 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 23 00:06:57.371789 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Jan 23 00:06:57.505411 systemd-networkd[812]: eth1: Gained IPv6LL
Jan 23 00:06:57.506001 systemd-networkd[812]: eth0: Gained IPv6LL
Jan 23 00:06:57.618727 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jan 23 00:06:57.799578 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 23 00:06:57.799578 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jan 23 00:06:57.802804 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jan 23 00:06:57.802804 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 23 00:06:57.802804 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 23 00:06:57.802804 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 23 00:06:57.802804 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 23 00:06:57.802804 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 23 00:06:57.802804 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 23 00:06:57.811868 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 23 00:06:57.811868 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 23 00:06:57.811868 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jan 23 00:06:57.815370 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jan 23 00:06:57.815370 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jan 23 00:06:57.815370 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1
Jan 23 00:06:58.075375 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jan 23 00:06:58.460731 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jan 23 00:06:58.460731 ignition[995]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jan 23 00:06:58.463554 ignition[995]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 23 00:06:58.466008 ignition[995]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 23 00:06:58.466008 ignition[995]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jan 23 00:06:58.466008 ignition[995]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Jan 23 00:06:58.466008 ignition[995]: INFO : files: op(e): op(f): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Jan 23 00:06:58.466008 ignition[995]: INFO : files: op(e): op(f): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Jan 23 00:06:58.466008 ignition[995]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Jan 23 00:06:58.466008 ignition[995]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Jan 23 00:06:58.466008 ignition[995]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Jan 23 00:06:58.466008 ignition[995]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 23 00:06:58.466008 ignition[995]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 23 00:06:58.466008 ignition[995]: INFO : files: files passed
Jan 23 00:06:58.466008 ignition[995]: INFO : Ignition finished successfully
Jan 23 00:06:58.467352 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 23 00:06:58.471798 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 23 00:06:58.477204 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 23 00:06:58.487298 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 23 00:06:58.489106 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 23 00:06:58.498545 initrd-setup-root-after-ignition[1025]: grep:
Jan 23 00:06:58.499417 initrd-setup-root-after-ignition[1029]: grep:
Jan 23 00:06:58.500195 initrd-setup-root-after-ignition[1025]: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 23 00:06:58.500195 initrd-setup-root-after-ignition[1025]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 23 00:06:58.502135 initrd-setup-root-after-ignition[1029]: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 23 00:06:58.505139 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 23 00:06:58.506343 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 23 00:06:58.508802 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 23 00:06:58.565597 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 23 00:06:58.565803 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 23 00:06:58.567441 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 23 00:06:58.568523 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 23 00:06:58.569774 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 23 00:06:58.570688 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 23 00:06:58.614119 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 23 00:06:58.616978 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 23 00:06:58.645463 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 23 00:06:58.646831 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 23 00:06:58.648157 systemd[1]: Stopped target timers.target - Timer Units.
Jan 23 00:06:58.648704 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 23 00:06:58.648828 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 23 00:06:58.650864 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 23 00:06:58.651558 systemd[1]: Stopped target basic.target - Basic System.
Jan 23 00:06:58.652653 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 23 00:06:58.653755 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 23 00:06:58.654767 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 23 00:06:58.655800 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Jan 23 00:06:58.656884 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 23 00:06:58.657918 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 23 00:06:58.659105 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 23 00:06:58.660959 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 23 00:06:58.662266 systemd[1]: Stopped target swap.target - Swaps.
Jan 23 00:06:58.663260 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 23 00:06:58.663410 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 23 00:06:58.664923 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 23 00:06:58.665823 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 23 00:06:58.666856 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 23 00:06:58.670161 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 23 00:06:58.670855 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 23 00:06:58.670968 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 23 00:06:58.672616 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 23 00:06:58.672728 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 23 00:06:58.674977 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 23 00:06:58.675095 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 23 00:06:58.676063 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jan 23 00:06:58.676173 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 23 00:06:58.677944 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 23 00:06:58.683319 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 23 00:06:58.684557 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 23 00:06:58.685459 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 23 00:06:58.687416 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 23 00:06:58.688364 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 23 00:06:58.695263 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 23 00:06:58.695954 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 23 00:06:58.703911 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 23 00:06:58.711271 ignition[1049]: INFO : Ignition 2.22.0
Jan 23 00:06:58.711271 ignition[1049]: INFO : Stage: umount
Jan 23 00:06:58.714429 ignition[1049]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 23 00:06:58.714429 ignition[1049]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 23 00:06:58.714429 ignition[1049]: INFO : umount: umount passed
Jan 23 00:06:58.714429 ignition[1049]: INFO : Ignition finished successfully
Jan 23 00:06:58.714292 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 23 00:06:58.714421 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 23 00:06:58.715204 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 23 00:06:58.715248 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 23 00:06:58.716023 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 23 00:06:58.716062 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 23 00:06:58.717718 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 23 00:06:58.717755 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 23 00:06:58.718668 systemd[1]: Stopped target network.target - Network.
Jan 23 00:06:58.726788 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 23 00:06:58.726889 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 23 00:06:58.728398 systemd[1]: Stopped target paths.target - Path Units.
Jan 23 00:06:58.729367 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 23 00:06:58.733574 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 23 00:06:58.739529 systemd[1]: Stopped target slices.target - Slice Units.
Jan 23 00:06:58.744170 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 23 00:06:58.747003 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 23 00:06:58.747058 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 23 00:06:58.748195 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 23 00:06:58.748231 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 23 00:06:58.749465 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 23 00:06:58.749575 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 23 00:06:58.750351 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 23 00:06:58.750388 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 23 00:06:58.751944 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 23 00:06:58.752928 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 23 00:06:58.754884 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 23 00:06:58.754968 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 23 00:06:58.758806 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 23 00:06:58.758878 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 23 00:06:58.762330 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 23 00:06:58.762448 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 23 00:06:58.765829 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Jan 23 00:06:58.766063 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 23 00:06:58.766206 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 23 00:06:58.769291 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Jan 23 00:06:58.769861 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Jan 23 00:06:58.771025 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 23 00:06:58.771062 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 23 00:06:58.772835 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 23 00:06:58.774661 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 23 00:06:58.774719 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 23 00:06:58.777777 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 23 00:06:58.777824 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 23 00:06:58.779861 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 23 00:06:58.779903 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 23 00:06:58.781026 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 23 00:06:58.781062 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 23 00:06:58.783448 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 23 00:06:58.788657 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jan 23 00:06:58.788771 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jan 23 00:06:58.805695 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 23 00:06:58.807196 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 23 00:06:58.809049 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 23 00:06:58.809191 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 23 00:06:58.811047 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 23 00:06:58.812253 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 23 00:06:58.813467 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 23 00:06:58.813530 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 23 00:06:58.814790 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 23 00:06:58.814857 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 23 00:06:58.817851 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 23 00:06:58.817913 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 23 00:06:58.820347 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 23 00:06:58.820404 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 23 00:06:58.822669 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 23 00:06:58.824206 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Jan 23 00:06:58.824268 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Jan 23 00:06:58.826982 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 23 00:06:58.827040 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 23 00:06:58.829269 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jan 23 00:06:58.829322 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 23 00:06:58.830825 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 23 00:06:58.830870 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 23 00:06:58.831575 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 23 00:06:58.831617 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 00:06:58.834710 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Jan 23 00:06:58.834764 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully.
Jan 23 00:06:58.834791 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jan 23 00:06:58.834823 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jan 23 00:06:58.837969 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 23 00:06:58.838231 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 23 00:06:58.840014 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 23 00:06:58.841940 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 23 00:06:58.882358 systemd[1]: Switching root.
Jan 23 00:06:58.909505 systemd-journald[245]: Journal stopped
Jan 23 00:06:59.787145 systemd-journald[245]: Received SIGTERM from PID 1 (systemd).
Jan 23 00:06:59.787215 kernel: SELinux: policy capability network_peer_controls=1
Jan 23 00:06:59.787229 kernel: SELinux: policy capability open_perms=1
Jan 23 00:06:59.787238 kernel: SELinux: policy capability extended_socket_class=1
Jan 23 00:06:59.787248 kernel: SELinux: policy capability always_check_network=0
Jan 23 00:06:59.787257 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 23 00:06:59.787269 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 23 00:06:59.787279 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 23 00:06:59.787288 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 23 00:06:59.787297 kernel: SELinux: policy capability userspace_initial_context=0
Jan 23 00:06:59.787306 kernel: audit: type=1403 audit(1769126819.054:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 23 00:06:59.787317 systemd[1]: Successfully loaded SELinux policy in 67.180ms.
Jan 23 00:06:59.787338 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 6.206ms.
Jan 23 00:06:59.787350 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 23 00:06:59.787366 systemd[1]: Detected virtualization kvm.
Jan 23 00:06:59.787376 systemd[1]: Detected architecture arm64.
Jan 23 00:06:59.787386 systemd[1]: Detected first boot.
Jan 23 00:06:59.787396 systemd[1]: Hostname set to .
Jan 23 00:06:59.787406 systemd[1]: Initializing machine ID from VM UUID.
Jan 23 00:06:59.787416 zram_generator::config[1092]: No configuration found.
Jan 23 00:06:59.787428 kernel: NET: Registered PF_VSOCK protocol family
Jan 23 00:06:59.787439 systemd[1]: Populated /etc with preset unit settings.
Jan 23 00:06:59.787454 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Jan 23 00:06:59.787464 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 23 00:06:59.787509 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 23 00:06:59.787524 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 23 00:06:59.787540 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 23 00:06:59.787550 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 23 00:06:59.787560 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 23 00:06:59.787573 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 23 00:06:59.787584 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 23 00:06:59.787594 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 23 00:06:59.787604 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 23 00:06:59.787614 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 23 00:06:59.787624 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 23 00:06:59.787635 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 23 00:06:59.787645 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 23 00:06:59.787656 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 23 00:06:59.787667 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 23 00:06:59.787677 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 23 00:06:59.787688 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Jan 23 00:06:59.787698 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 23 00:06:59.787708 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 23 00:06:59.787719 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 23 00:06:59.787731 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 23 00:06:59.787742 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 23 00:06:59.787752 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 23 00:06:59.787762 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 23 00:06:59.787777 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 23 00:06:59.787788 systemd[1]: Reached target slices.target - Slice Units.
Jan 23 00:06:59.787798 systemd[1]: Reached target swap.target - Swaps.
Jan 23 00:06:59.787808 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 23 00:06:59.787819 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 23 00:06:59.787830 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jan 23 00:06:59.787841 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 23 00:06:59.787851 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 23 00:06:59.787862 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 23 00:06:59.787872 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 23 00:06:59.787885 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 23 00:06:59.787896 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 23 00:06:59.787906 systemd[1]: Mounting media.mount - External Media Directory...
Jan 23 00:06:59.787916 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 23 00:06:59.787927 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 23 00:06:59.787937 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 23 00:06:59.787948 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 23 00:06:59.787958 systemd[1]: Reached target machines.target - Containers.
Jan 23 00:06:59.787969 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 23 00:06:59.787981 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 23 00:06:59.787991 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 23 00:06:59.788003 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 23 00:06:59.788013 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 23 00:06:59.788023 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 23 00:06:59.788033 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 23 00:06:59.788043 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 23 00:06:59.788053 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 23 00:06:59.788063 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 23 00:06:59.788073 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 23 00:06:59.788099 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 23 00:06:59.788110 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 23 00:06:59.788120 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 23 00:06:59.788132 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 23 00:06:59.788143 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 23 00:06:59.788155 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 23 00:06:59.788165 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 23 00:06:59.788177 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 23 00:06:59.788187 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jan 23 00:06:59.788197 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 23 00:06:59.788208 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 23 00:06:59.788219 systemd[1]: Stopped verity-setup.service.
Jan 23 00:06:59.788230 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 23 00:06:59.788240 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 23 00:06:59.788251 kernel: loop: module loaded
Jan 23 00:06:59.788261 systemd[1]: Mounted media.mount - External Media Directory.
Jan 23 00:06:59.788271 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 23 00:06:59.788282 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 23 00:06:59.788293 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 23 00:06:59.788303 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 23 00:06:59.788315 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 23 00:06:59.788326 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 23 00:06:59.788336 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 23 00:06:59.788346 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 23 00:06:59.788357 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 23 00:06:59.788368 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 23 00:06:59.788378 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 23 00:06:59.788388 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 23 00:06:59.788400 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 23 00:06:59.788411 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 23 00:06:59.788421 kernel: ACPI: bus type drm_connector registered
Jan 23 00:06:59.788432 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 23 00:06:59.788442 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 23 00:06:59.788453 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 23 00:06:59.788463 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jan 23 00:06:59.788481 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 23 00:06:59.788495 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 23 00:06:59.788507 kernel: fuse: init (API version 7.41)
Jan 23 00:06:59.788547 systemd-journald[1156]: Collecting audit messages is disabled.
Jan 23 00:06:59.788571 systemd-journald[1156]: Journal started
Jan 23 00:06:59.788592 systemd-journald[1156]: Runtime Journal (/run/log/journal/07a7becbb3824cc2918302662203c6f5) is 8M, max 76.5M, 68.5M free.
Jan 23 00:06:59.535667 systemd[1]: Queued start job for default target multi-user.target.
Jan 23 00:06:59.553505 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Jan 23 00:06:59.554060 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 23 00:06:59.792147 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 23 00:06:59.794709 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 23 00:06:59.798269 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 23 00:06:59.804291 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 23 00:06:59.807108 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 23 00:06:59.819895 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 23 00:06:59.823634 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 23 00:06:59.828456 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 23 00:06:59.829843 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 23 00:06:59.831148 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 23 00:06:59.831996 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 23 00:06:59.832192 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 23 00:06:59.835543 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 23 00:06:59.837213 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jan 23 00:06:59.838238 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 23 00:06:59.867527 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 23 00:06:59.877840 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 23 00:06:59.881157 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 23 00:06:59.882720 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 23 00:06:59.888956 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 23 00:06:59.892694 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jan 23 00:06:59.905199 kernel: loop0: detected capacity change from 0 to 100632
Jan 23 00:06:59.918729 systemd-journald[1156]: Time spent on flushing to /var/log/journal/07a7becbb3824cc2918302662203c6f5 is 20.259ms for 1183 entries.
Jan 23 00:06:59.918729 systemd-journald[1156]: System Journal (/var/log/journal/07a7becbb3824cc2918302662203c6f5) is 8M, max 584.8M, 576.8M free.
Jan 23 00:06:59.947297 systemd-journald[1156]: Received client request to flush runtime journal.
Jan 23 00:06:59.947339 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 23 00:06:59.926100 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 23 00:06:59.930118 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 23 00:06:59.936286 systemd-tmpfiles[1194]: ACLs are not supported, ignoring.
Jan 23 00:06:59.936297 systemd-tmpfiles[1194]: ACLs are not supported, ignoring.
Jan 23 00:06:59.950462 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 23 00:06:59.954791 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 23 00:06:59.965243 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 23 00:06:59.970229 kernel: loop1: detected capacity change from 0 to 207008
Jan 23 00:06:59.972408 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jan 23 00:07:00.010109 kernel: loop2: detected capacity change from 0 to 8
Jan 23 00:07:00.026433 kernel: loop3: detected capacity change from 0 to 119840
Jan 23 00:07:00.026156 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 23 00:07:00.029282 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 23 00:07:00.058114 kernel: loop4: detected capacity change from 0 to 100632
Jan 23 00:07:00.074196 kernel: loop5: detected capacity change from 0 to 207008
Jan 23 00:07:00.076423 systemd-tmpfiles[1237]: ACLs are not supported, ignoring.
Jan 23 00:07:00.076439 systemd-tmpfiles[1237]: ACLs are not supported, ignoring.
Jan 23 00:07:00.083122 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 23 00:07:00.095115 kernel: loop6: detected capacity change from 0 to 8
Jan 23 00:07:00.097170 kernel: loop7: detected capacity change from 0 to 119840
Jan 23 00:07:00.119830 (sd-merge)[1239]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'.
Jan 23 00:07:00.120944 (sd-merge)[1239]: Merged extensions into '/usr'.
Jan 23 00:07:00.125678 systemd[1]: Reload requested from client PID 1193 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 23 00:07:00.125697 systemd[1]: Reloading...
Jan 23 00:07:00.243194 zram_generator::config[1263]: No configuration found.
Jan 23 00:07:00.409515 ldconfig[1179]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 23 00:07:00.494366 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 23 00:07:00.494514 systemd[1]: Reloading finished in 368 ms.
Jan 23 00:07:00.511773 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 23 00:07:00.513421 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 23 00:07:00.527332 systemd[1]: Starting ensure-sysext.service...
Jan 23 00:07:00.530269 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 23 00:07:00.547241 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 23 00:07:00.556339 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 23 00:07:00.558516 systemd[1]: Reload requested from client PID 1304 ('systemctl') (unit ensure-sysext.service)...
Jan 23 00:07:00.558531 systemd[1]: Reloading...
Jan 23 00:07:00.561572 systemd-tmpfiles[1305]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Jan 23 00:07:00.562217 systemd-tmpfiles[1305]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Jan 23 00:07:00.562627 systemd-tmpfiles[1305]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 23 00:07:00.562826 systemd-tmpfiles[1305]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 23 00:07:00.563485 systemd-tmpfiles[1305]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 23 00:07:00.563696 systemd-tmpfiles[1305]: ACLs are not supported, ignoring.
Jan 23 00:07:00.563742 systemd-tmpfiles[1305]: ACLs are not supported, ignoring.
Jan 23 00:07:00.567252 systemd-tmpfiles[1305]: Detected autofs mount point /boot during canonicalization of boot.
Jan 23 00:07:00.567407 systemd-tmpfiles[1305]: Skipping /boot
Jan 23 00:07:00.575299 systemd-tmpfiles[1305]: Detected autofs mount point /boot during canonicalization of boot.
Jan 23 00:07:00.575405 systemd-tmpfiles[1305]: Skipping /boot
Jan 23 00:07:00.622830 systemd-udevd[1308]: Using default interface naming scheme 'v255'.
Jan 23 00:07:00.627101 zram_generator::config[1333]: No configuration found.
Jan 23 00:07:00.853801 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Jan 23 00:07:00.853919 systemd[1]: Reloading finished in 295 ms.
Jan 23 00:07:00.871116 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 23 00:07:00.872811 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 23 00:07:00.901382 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 23 00:07:00.906304 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 23 00:07:00.907908 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 23 00:07:00.911215 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 23 00:07:00.918059 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 23 00:07:00.927379 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 23 00:07:00.932395 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 23 00:07:00.945814 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 23 00:07:00.946596 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 23 00:07:00.952313 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 23 00:07:00.954740 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 23 00:07:00.958210 kernel: mousedev: PS/2 mouse device common for all mice
Jan 23 00:07:00.966166 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 23 00:07:00.972408 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 23 00:07:00.982778 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 23 00:07:00.983957 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 23 00:07:00.984104 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 23 00:07:00.989895 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 23 00:07:00.992926 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 23 00:07:00.993144 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 23 00:07:00.996766 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 23 00:07:01.003362 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 23 00:07:01.006909 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 23 00:07:01.008187 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 23 00:07:01.008307 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 23 00:07:01.008391 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 23 00:07:01.017628 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 23 00:07:01.018612 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 23 00:07:01.021388 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 23 00:07:01.022275 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 23 00:07:01.035903 systemd[1]: Finished ensure-sysext.service.
Jan 23 00:07:01.038855 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 23 00:07:01.050386 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 23 00:07:01.056722 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 23 00:07:01.056950 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 23 00:07:01.062230 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 23 00:07:01.067286 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 23 00:07:01.067968 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 23 00:07:01.068013 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 23 00:07:01.068051 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 23 00:07:01.068127 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 23 00:07:01.072911 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 23 00:07:01.076791 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 23 00:07:01.078155 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 23 00:07:01.078548 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 23 00:07:01.081146 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 23 00:07:01.083369 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 23 00:07:01.105230 augenrules[1463]: No rules
Jan 23 00:07:01.106651 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 23 00:07:01.122309 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0
Jan 23 00:07:01.122380 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Jan 23 00:07:01.122396 kernel: [drm] features: -context_init
Jan 23 00:07:01.130209 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 23 00:07:01.133279 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 23 00:07:01.137451 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped.
Jan 23 00:07:01.144560 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 23 00:07:01.144846 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 23 00:07:01.147190 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 23 00:07:01.152342 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 23 00:07:01.157780 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 23 00:07:01.162322 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 23 00:07:01.165334 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 23 00:07:01.172290 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 23 00:07:01.174291 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 23 00:07:01.174345 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 23 00:07:01.174375 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 23 00:07:01.198150 kernel: [drm] number of scanouts: 1
Jan 23 00:07:01.198230 kernel: [drm] number of cap sets: 0
Jan 23 00:07:01.222830 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 23 00:07:01.224159 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 23 00:07:01.240503 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 23 00:07:01.242147 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 23 00:07:01.243330 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 23 00:07:01.243565 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 23 00:07:01.245122 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 23 00:07:01.245192 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 23 00:07:01.282191 systemd-resolved[1419]: Positive Trust Anchors:
Jan 23 00:07:01.282226 systemd-resolved[1419]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 23 00:07:01.282259 systemd-resolved[1419]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 23 00:07:01.289962 systemd-resolved[1419]: Using system hostname 'ci-4459-2-2-n-105ad3c88f'.
Jan 23 00:07:01.305722 systemd-networkd[1418]: lo: Link UP
Jan 23 00:07:01.308498 systemd-networkd[1418]: lo: Gained carrier
Jan 23 00:07:01.322980 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Jan 23 00:07:01.325348 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 23 00:07:01.326321 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jan 23 00:07:01.328536 systemd-networkd[1418]: Enumeration completed
Jan 23 00:07:01.329009 systemd-networkd[1418]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 23 00:07:01.329013 systemd-networkd[1418]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 23 00:07:01.329848 systemd-timesyncd[1455]: No network connectivity, watching for changes.
Jan 23 00:07:01.330019 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 23 00:07:01.331716 systemd[1]: Reached target network.target - Network.
Jan 23 00:07:01.331968 systemd-networkd[1418]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 23 00:07:01.332037 systemd-networkd[1418]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 23 00:07:01.332272 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 23 00:07:01.333140 systemd[1]: Reached target sysinit.target - System Initialization. Jan 23 00:07:01.333262 systemd-networkd[1418]: eth0: Link UP Jan 23 00:07:01.333446 systemd-networkd[1418]: eth0: Gained carrier Jan 23 00:07:01.333628 systemd-networkd[1418]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 00:07:01.334500 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 23 00:07:01.335175 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 23 00:07:01.335903 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 23 00:07:01.336768 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 23 00:07:01.336797 systemd[1]: Reached target paths.target - Path Units. Jan 23 00:07:01.338129 systemd[1]: Reached target time-set.target - System Time Set. Jan 23 00:07:01.338862 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 23 00:07:01.339586 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 23 00:07:01.340263 systemd[1]: Reached target timers.target - Timer Units. Jan 23 00:07:01.343092 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:01.0 on minor 0 Jan 23 00:07:01.343552 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 23 00:07:01.345919 systemd[1]: Starting docker.socket - Docker Socket for the API... 
Jan 23 00:07:01.350250 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 23 00:07:01.351147 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 23 00:07:01.353154 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 23 00:07:01.368130 systemd-networkd[1418]: eth1: Link UP Jan 23 00:07:01.368985 systemd-networkd[1418]: eth1: Gained carrier Jan 23 00:07:01.370115 systemd-networkd[1418]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 00:07:01.371853 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 23 00:07:01.373313 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 23 00:07:01.381119 kernel: Console: switching to colour frame buffer device 160x50 Jan 23 00:07:01.381570 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 23 00:07:01.409177 systemd-networkd[1418]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1 Jan 23 00:07:01.409823 systemd-timesyncd[1455]: Network configuration changed, trying to establish connection. Jan 23 00:07:01.414149 systemd-networkd[1418]: eth0: DHCPv4 address 188.245.94.123/32, gateway 172.31.1.1 acquired from 172.31.1.1 Jan 23 00:07:01.432108 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Jan 23 00:07:01.440453 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 23 00:07:01.446340 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 23 00:07:01.449235 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 23 00:07:01.456956 systemd[1]: Reached target sockets.target - Socket Units. 
Jan 23 00:07:01.458159 systemd[1]: Reached target basic.target - Basic System. Jan 23 00:07:01.459296 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 23 00:07:01.459398 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 23 00:07:01.460717 systemd[1]: Starting containerd.service - containerd container runtime... Jan 23 00:07:01.465812 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 23 00:07:01.469862 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 23 00:07:01.473523 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 23 00:07:01.476638 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 23 00:07:01.483335 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 23 00:07:01.484714 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 23 00:07:01.488359 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 23 00:07:01.498556 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 23 00:07:01.502579 jq[1516]: false Jan 23 00:07:01.505554 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. Jan 23 00:07:01.508550 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 23 00:07:01.513820 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 23 00:07:01.520282 extend-filesystems[1517]: Found /dev/sda6 Jan 23 00:07:01.525811 extend-filesystems[1517]: Found /dev/sda9 Jan 23 00:07:01.528500 systemd[1]: Starting systemd-logind.service - User Login Management... 
Jan 23 00:07:01.530996 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 23 00:07:01.531953 extend-filesystems[1517]: Checking size of /dev/sda9 Jan 23 00:07:01.531623 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 23 00:07:01.537170 systemd-timesyncd[1455]: Contacted time server 116.202.118.202:123 (2.flatcar.pool.ntp.org). Jan 23 00:07:01.537225 systemd-timesyncd[1455]: Initial clock synchronization to Fri 2026-01-23 00:07:01.564446 UTC. Jan 23 00:07:01.539487 systemd[1]: Starting update-engine.service - Update Engine... Jan 23 00:07:01.543899 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 23 00:07:01.551096 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 23 00:07:01.555678 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 23 00:07:01.556762 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 23 00:07:01.558196 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 23 00:07:01.578057 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 23 00:07:01.578366 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 23 00:07:01.598942 (ntainerd)[1545]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 23 00:07:01.599888 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 00:07:01.603200 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. 
Jan 23 00:07:01.623160 jq[1535]: true Jan 23 00:07:01.631505 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 00:07:01.635966 extend-filesystems[1517]: Resized partition /dev/sda9 Jan 23 00:07:01.639128 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 00:07:01.646099 extend-filesystems[1560]: resize2fs 1.47.3 (8-Jul-2025) Jan 23 00:07:01.647351 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 00:07:01.650856 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Jan 23 00:07:01.683190 systemd[1]: motdgen.service: Deactivated successfully. Jan 23 00:07:01.687835 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 23 00:07:01.707511 tar[1538]: linux-arm64/LICENSE Jan 23 00:07:01.707828 jq[1559]: true Jan 23 00:07:01.710688 coreos-metadata[1511]: Jan 23 00:07:01.710 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Jan 23 00:07:01.717751 tar[1538]: linux-arm64/helm Jan 23 00:07:01.716440 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 23 00:07:01.715802 dbus-daemon[1514]: [system] SELinux support is enabled Jan 23 00:07:01.726749 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 23 00:07:01.727145 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 23 00:07:01.731423 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 23 00:07:01.731451 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Jan 23 00:07:01.741602 coreos-metadata[1511]: Jan 23 00:07:01.735 INFO Fetch successful Jan 23 00:07:01.741602 coreos-metadata[1511]: Jan 23 00:07:01.735 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Jan 23 00:07:01.744456 coreos-metadata[1511]: Jan 23 00:07:01.742 INFO Fetch successful Jan 23 00:07:01.744574 update_engine[1532]: I20260123 00:07:01.742892 1532 main.cc:92] Flatcar Update Engine starting Jan 23 00:07:01.751059 systemd[1]: Started update-engine.service - Update Engine. Jan 23 00:07:01.754178 update_engine[1532]: I20260123 00:07:01.751137 1532 update_check_scheduler.cc:74] Next update check in 7m58s Jan 23 00:07:01.781036 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 23 00:07:01.816102 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Jan 23 00:07:01.835149 extend-filesystems[1560]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Jan 23 00:07:01.835149 extend-filesystems[1560]: old_desc_blocks = 1, new_desc_blocks = 5 Jan 23 00:07:01.835149 extend-filesystems[1560]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. Jan 23 00:07:01.851714 extend-filesystems[1517]: Resized filesystem in /dev/sda9 Jan 23 00:07:01.836866 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 23 00:07:01.837130 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 23 00:07:01.864192 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 00:07:01.933257 bash[1602]: Updated "/home/core/.ssh/authorized_keys" Jan 23 00:07:01.937660 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 23 00:07:01.940012 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 23 00:07:01.942044 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
Jan 23 00:07:01.946659 systemd[1]: Starting sshkeys.service... Jan 23 00:07:01.969600 systemd-logind[1529]: New seat seat0. Jan 23 00:07:01.971749 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 23 00:07:01.979014 systemd-logind[1529]: Watching system buttons on /dev/input/event0 (Power Button) Jan 23 00:07:01.979193 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 23 00:07:01.983418 systemd-logind[1529]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard) Jan 23 00:07:01.984325 systemd[1]: Started systemd-logind.service - User Login Management. Jan 23 00:07:02.040337 coreos-metadata[1610]: Jan 23 00:07:02.040 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Jan 23 00:07:02.042808 coreos-metadata[1610]: Jan 23 00:07:02.042 INFO Fetch successful Jan 23 00:07:02.047440 locksmithd[1574]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 23 00:07:02.047501 unknown[1610]: wrote ssh authorized keys file for user: core Jan 23 00:07:02.073259 containerd[1545]: time="2026-01-23T00:07:02Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 23 00:07:02.078832 containerd[1545]: time="2026-01-23T00:07:02.078782299Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Jan 23 00:07:02.090108 update-ssh-keys[1617]: Updated "/home/core/.ssh/authorized_keys" Jan 23 00:07:02.092175 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). 
Jan 23 00:07:02.095293 containerd[1545]: time="2026-01-23T00:07:02.092794678Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="12.622µs" Jan 23 00:07:02.095293 containerd[1545]: time="2026-01-23T00:07:02.092830860Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 23 00:07:02.095293 containerd[1545]: time="2026-01-23T00:07:02.092852177Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 23 00:07:02.095293 containerd[1545]: time="2026-01-23T00:07:02.094500913Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 23 00:07:02.095293 containerd[1545]: time="2026-01-23T00:07:02.094543386Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 23 00:07:02.095293 containerd[1545]: time="2026-01-23T00:07:02.094573598Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 23 00:07:02.095293 containerd[1545]: time="2026-01-23T00:07:02.094656579Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 23 00:07:02.095293 containerd[1545]: time="2026-01-23T00:07:02.094670603Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 23 00:07:02.095293 containerd[1545]: time="2026-01-23T00:07:02.094916503Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 23 00:07:02.095293 containerd[1545]: time="2026-01-23T00:07:02.094931209Z" level=info msg="loading plugin" 
id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 23 00:07:02.095293 containerd[1545]: time="2026-01-23T00:07:02.094942748Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 23 00:07:02.095293 containerd[1545]: time="2026-01-23T00:07:02.094951123Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 23 00:07:02.095511 containerd[1545]: time="2026-01-23T00:07:02.095033423Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 23 00:07:02.095511 containerd[1545]: time="2026-01-23T00:07:02.095411108Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 23 00:07:02.095511 containerd[1545]: time="2026-01-23T00:07:02.095446289Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 23 00:07:02.095511 containerd[1545]: time="2026-01-23T00:07:02.095456586Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 23 00:07:02.095511 containerd[1545]: time="2026-01-23T00:07:02.095486277Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 23 00:07:02.099108 containerd[1545]: time="2026-01-23T00:07:02.095810911Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 23 00:07:02.099108 containerd[1545]: time="2026-01-23T00:07:02.095896297Z" level=info msg="metadata content store policy set" policy=shared Jan 23 00:07:02.097137 systemd[1]: Finished sshkeys.service. 
Jan 23 00:07:02.101624 containerd[1545]: time="2026-01-23T00:07:02.101570551Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 23 00:07:02.103140 containerd[1545]: time="2026-01-23T00:07:02.102317267Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 23 00:07:02.103140 containerd[1545]: time="2026-01-23T00:07:02.102578994Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 23 00:07:02.103140 containerd[1545]: time="2026-01-23T00:07:02.102616739Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 23 00:07:02.103140 containerd[1545]: time="2026-01-23T00:07:02.102671192Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 23 00:07:02.103140 containerd[1545]: time="2026-01-23T00:07:02.102700322Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 23 00:07:02.103140 containerd[1545]: time="2026-01-23T00:07:02.102743435Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 23 00:07:02.103140 containerd[1545]: time="2026-01-23T00:07:02.102770481Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 23 00:07:02.103140 containerd[1545]: time="2026-01-23T00:07:02.102798770Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 23 00:07:02.103140 containerd[1545]: time="2026-01-23T00:07:02.102822009Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 23 00:07:02.103140 containerd[1545]: time="2026-01-23T00:07:02.102842965Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 23 
00:07:02.103140 containerd[1545]: time="2026-01-23T00:07:02.102871855Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 23 00:07:02.104134 containerd[1545]: time="2026-01-23T00:07:02.103841872Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 23 00:07:02.104134 containerd[1545]: time="2026-01-23T00:07:02.103901694Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 23 00:07:02.104134 containerd[1545]: time="2026-01-23T00:07:02.103933268Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 23 00:07:02.104134 containerd[1545]: time="2026-01-23T00:07:02.103964281Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 23 00:07:02.104134 containerd[1545]: time="2026-01-23T00:07:02.103990125Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 23 00:07:02.104134 containerd[1545]: time="2026-01-23T00:07:02.104013445Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 23 00:07:02.104134 containerd[1545]: time="2026-01-23T00:07:02.104037686Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 23 00:07:02.104134 containerd[1545]: time="2026-01-23T00:07:02.104061247Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 23 00:07:02.104977 containerd[1545]: time="2026-01-23T00:07:02.104552606Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 23 00:07:02.104977 containerd[1545]: time="2026-01-23T00:07:02.104600648Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 23 00:07:02.104977 containerd[1545]: time="2026-01-23T00:07:02.104627053Z" 
level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 23 00:07:02.104977 containerd[1545]: time="2026-01-23T00:07:02.104896113Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 23 00:07:02.104977 containerd[1545]: time="2026-01-23T00:07:02.104928528Z" level=info msg="Start snapshots syncer" Jan 23 00:07:02.105549 containerd[1545]: time="2026-01-23T00:07:02.105495176Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 23 00:07:02.106480 containerd[1545]: time="2026-01-23T00:07:02.106401645Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\"
:true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 23 00:07:02.106860 containerd[1545]: time="2026-01-23T00:07:02.106662851Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 23 00:07:02.106860 containerd[1545]: time="2026-01-23T00:07:02.106808180Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 23 00:07:02.107178 containerd[1545]: time="2026-01-23T00:07:02.107157777Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 23 00:07:02.107416 containerd[1545]: time="2026-01-23T00:07:02.107248652Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 23 00:07:02.107416 containerd[1545]: time="2026-01-23T00:07:02.107269728Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 23 00:07:02.107416 containerd[1545]: time="2026-01-23T00:07:02.107282750Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 23 00:07:02.107416 containerd[1545]: time="2026-01-23T00:07:02.107296093Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 23 00:07:02.107416 containerd[1545]: time="2026-01-23T00:07:02.107306431Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 23 00:07:02.107416 containerd[1545]: time="2026-01-23T00:07:02.107316929Z" level=info msg="loading plugin" 
id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 23 00:07:02.107416 containerd[1545]: time="2026-01-23T00:07:02.107344736Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 23 00:07:02.107416 containerd[1545]: time="2026-01-23T00:07:02.107356877Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 23 00:07:02.107416 containerd[1545]: time="2026-01-23T00:07:02.107368256Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 23 00:07:02.107740 containerd[1545]: time="2026-01-23T00:07:02.107686159Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 23 00:07:02.107858 containerd[1545]: time="2026-01-23T00:07:02.107722261Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 23 00:07:02.108115 containerd[1545]: time="2026-01-23T00:07:02.107904172Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 23 00:07:02.108115 containerd[1545]: time="2026-01-23T00:07:02.107924647Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 23 00:07:02.108115 containerd[1545]: time="2026-01-23T00:07:02.107932861Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 23 00:07:02.108115 containerd[1545]: time="2026-01-23T00:07:02.107948167Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 23 00:07:02.108115 containerd[1545]: time="2026-01-23T00:07:02.107959066Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 23 
00:07:02.108115 containerd[1545]: time="2026-01-23T00:07:02.108035116Z" level=info msg="runtime interface created" Jan 23 00:07:02.108115 containerd[1545]: time="2026-01-23T00:07:02.108040485Z" level=info msg="created NRI interface" Jan 23 00:07:02.108115 containerd[1545]: time="2026-01-23T00:07:02.108048619Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 23 00:07:02.108115 containerd[1545]: time="2026-01-23T00:07:02.108061360Z" level=info msg="Connect containerd service" Jan 23 00:07:02.108385 containerd[1545]: time="2026-01-23T00:07:02.108100467Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 23 00:07:02.109991 containerd[1545]: time="2026-01-23T00:07:02.109597986Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 23 00:07:02.231155 containerd[1545]: time="2026-01-23T00:07:02.230752369Z" level=info msg="Start subscribing containerd event" Jan 23 00:07:02.231155 containerd[1545]: time="2026-01-23T00:07:02.230856307Z" level=info msg="Start recovering state" Jan 23 00:07:02.231155 containerd[1545]: time="2026-01-23T00:07:02.231023312Z" level=info msg="Start event monitor" Jan 23 00:07:02.231155 containerd[1545]: time="2026-01-23T00:07:02.231038819Z" level=info msg="Start cni network conf syncer for default" Jan 23 00:07:02.231155 containerd[1545]: time="2026-01-23T00:07:02.231046392Z" level=info msg="Start streaming server" Jan 23 00:07:02.231155 containerd[1545]: time="2026-01-23T00:07:02.231055047Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 23 00:07:02.231155 containerd[1545]: time="2026-01-23T00:07:02.231061978Z" level=info msg="runtime interface starting up..." 
Jan 23 00:07:02.231155 containerd[1545]: time="2026-01-23T00:07:02.231067067Z" level=info msg="starting plugins..." Jan 23 00:07:02.231365 containerd[1545]: time="2026-01-23T00:07:02.231289968Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 23 00:07:02.231946 containerd[1545]: time="2026-01-23T00:07:02.231509102Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 23 00:07:02.231946 containerd[1545]: time="2026-01-23T00:07:02.231911590Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 23 00:07:02.232076 containerd[1545]: time="2026-01-23T00:07:02.232062688Z" level=info msg="containerd successfully booted in 0.160196s" Jan 23 00:07:02.232192 systemd[1]: Started containerd.service - containerd container runtime. Jan 23 00:07:02.384180 tar[1538]: linux-arm64/README.md Jan 23 00:07:02.401209 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 23 00:07:02.625334 systemd-networkd[1418]: eth0: Gained IPv6LL Jan 23 00:07:02.629871 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 23 00:07:02.631437 systemd[1]: Reached target network-online.target - Network is Online. Jan 23 00:07:02.636293 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 00:07:02.638553 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 23 00:07:02.671501 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 23 00:07:02.734800 sshd_keygen[1562]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 23 00:07:02.754215 systemd-networkd[1418]: eth1: Gained IPv6LL Jan 23 00:07:02.756468 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 23 00:07:02.759352 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 23 00:07:02.777328 systemd[1]: issuegen.service: Deactivated successfully. 
Jan 23 00:07:02.777552 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 23 00:07:02.781320 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 23 00:07:02.802328 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 23 00:07:02.806398 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 23 00:07:02.809374 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jan 23 00:07:02.810160 systemd[1]: Reached target getty.target - Login Prompts. Jan 23 00:07:03.449228 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 00:07:03.450722 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 23 00:07:03.452873 systemd[1]: Startup finished in 2.356s (kernel) + 5.428s (initrd) + 4.465s (userspace) = 12.250s. Jan 23 00:07:03.459850 (kubelet)[1672]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 00:07:03.949921 kubelet[1672]: E0123 00:07:03.949655 1672 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 00:07:03.953971 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 00:07:03.954138 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 00:07:03.955280 systemd[1]: kubelet.service: Consumed 882ms CPU time, 254.3M memory peak. Jan 23 00:07:14.152856 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 23 00:07:14.156330 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 00:07:14.311270 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 23 00:07:14.324627 (kubelet)[1691]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 23 00:07:14.380633 kubelet[1691]: E0123 00:07:14.380523 1691 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 23 00:07:14.383992 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 23 00:07:14.384196 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 23 00:07:14.385272 systemd[1]: kubelet.service: Consumed 172ms CPU time, 105.9M memory peak.
Jan 23 00:07:24.403352 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jan 23 00:07:24.407746 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 23 00:07:24.561371 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 23 00:07:24.574732 (kubelet)[1706]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 23 00:07:24.633611 kubelet[1706]: E0123 00:07:24.633548 1706 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 23 00:07:24.636746 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 23 00:07:24.636899 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 23 00:07:24.637626 systemd[1]: kubelet.service: Consumed 174ms CPU time, 107.6M memory peak.
Jan 23 00:07:34.652858 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Jan 23 00:07:34.655966 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 23 00:07:34.826419 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 23 00:07:34.838929 (kubelet)[1722]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 23 00:07:34.885005 kubelet[1722]: E0123 00:07:34.884955 1722 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 23 00:07:34.888864 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 23 00:07:34.889054 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 23 00:07:34.891211 systemd[1]: kubelet.service: Consumed 163ms CPU time, 104.7M memory peak.
Jan 23 00:07:36.767825 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jan 23 00:07:36.769584 systemd[1]: Started sshd@0-188.245.94.123:22-68.220.241.50:36866.service - OpenSSH per-connection server daemon (68.220.241.50:36866).
Jan 23 00:07:37.432188 sshd[1730]: Accepted publickey for core from 68.220.241.50 port 36866 ssh2: RSA SHA256:wScRSXm5JHKrAeSxAplDhSGBmu9+62e7CgH0oSNisYE
Jan 23 00:07:37.434620 sshd-session[1730]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 00:07:37.442575 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jan 23 00:07:37.444107 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jan 23 00:07:37.456875 systemd-logind[1529]: New session 1 of user core.
Jan 23 00:07:37.472910 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jan 23 00:07:37.476444 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jan 23 00:07:37.492023 (systemd)[1735]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jan 23 00:07:37.496228 systemd-logind[1529]: New session c1 of user core.
Jan 23 00:07:37.626867 systemd[1735]: Queued start job for default target default.target.
Jan 23 00:07:37.652110 systemd[1735]: Created slice app.slice - User Application Slice.
Jan 23 00:07:37.652389 systemd[1735]: Reached target paths.target - Paths.
Jan 23 00:07:37.652609 systemd[1735]: Reached target timers.target - Timers.
Jan 23 00:07:37.654919 systemd[1735]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jan 23 00:07:37.670326 systemd[1735]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jan 23 00:07:37.670466 systemd[1735]: Reached target sockets.target - Sockets.
Jan 23 00:07:37.670534 systemd[1735]: Reached target basic.target - Basic System.
Jan 23 00:07:37.670577 systemd[1735]: Reached target default.target - Main User Target.
Jan 23 00:07:37.670612 systemd[1735]: Startup finished in 166ms.
Jan 23 00:07:37.670700 systemd[1]: Started user@500.service - User Manager for UID 500.
Jan 23 00:07:37.683414 systemd[1]: Started session-1.scope - Session 1 of User core.
Jan 23 00:07:38.130679 systemd[1]: Started sshd@1-188.245.94.123:22-68.220.241.50:36872.service - OpenSSH per-connection server daemon (68.220.241.50:36872).
Jan 23 00:07:38.749568 sshd[1746]: Accepted publickey for core from 68.220.241.50 port 36872 ssh2: RSA SHA256:wScRSXm5JHKrAeSxAplDhSGBmu9+62e7CgH0oSNisYE
Jan 23 00:07:38.751651 sshd-session[1746]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 00:07:38.756272 systemd-logind[1529]: New session 2 of user core.
Jan 23 00:07:38.765384 systemd[1]: Started session-2.scope - Session 2 of User core.
Jan 23 00:07:39.175473 sshd[1749]: Connection closed by 68.220.241.50 port 36872
Jan 23 00:07:39.176283 sshd-session[1746]: pam_unix(sshd:session): session closed for user core
Jan 23 00:07:39.182799 systemd[1]: sshd@1-188.245.94.123:22-68.220.241.50:36872.service: Deactivated successfully.
Jan 23 00:07:39.185484 systemd[1]: session-2.scope: Deactivated successfully.
Jan 23 00:07:39.188326 systemd-logind[1529]: Session 2 logged out. Waiting for processes to exit.
Jan 23 00:07:39.189758 systemd-logind[1529]: Removed session 2.
Jan 23 00:07:39.289303 systemd[1]: Started sshd@2-188.245.94.123:22-68.220.241.50:36886.service - OpenSSH per-connection server daemon (68.220.241.50:36886).
Jan 23 00:07:39.917893 sshd[1755]: Accepted publickey for core from 68.220.241.50 port 36886 ssh2: RSA SHA256:wScRSXm5JHKrAeSxAplDhSGBmu9+62e7CgH0oSNisYE
Jan 23 00:07:39.919819 sshd-session[1755]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 00:07:39.924481 systemd-logind[1529]: New session 3 of user core.
Jan 23 00:07:39.933477 systemd[1]: Started session-3.scope - Session 3 of User core.
Jan 23 00:07:40.339515 sshd[1758]: Connection closed by 68.220.241.50 port 36886
Jan 23 00:07:40.340337 sshd-session[1755]: pam_unix(sshd:session): session closed for user core
Jan 23 00:07:40.345649 systemd[1]: sshd@2-188.245.94.123:22-68.220.241.50:36886.service: Deactivated successfully.
Jan 23 00:07:40.347753 systemd[1]: session-3.scope: Deactivated successfully.
Jan 23 00:07:40.349958 systemd-logind[1529]: Session 3 logged out. Waiting for processes to exit.
Jan 23 00:07:40.351483 systemd-logind[1529]: Removed session 3.
Jan 23 00:07:40.456466 systemd[1]: Started sshd@3-188.245.94.123:22-68.220.241.50:36896.service - OpenSSH per-connection server daemon (68.220.241.50:36896).
Jan 23 00:07:41.110153 sshd[1764]: Accepted publickey for core from 68.220.241.50 port 36896 ssh2: RSA SHA256:wScRSXm5JHKrAeSxAplDhSGBmu9+62e7CgH0oSNisYE
Jan 23 00:07:41.112330 sshd-session[1764]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 00:07:41.117527 systemd-logind[1529]: New session 4 of user core.
Jan 23 00:07:41.128928 systemd[1]: Started session-4.scope - Session 4 of User core.
Jan 23 00:07:41.557123 sshd[1767]: Connection closed by 68.220.241.50 port 36896
Jan 23 00:07:41.555892 sshd-session[1764]: pam_unix(sshd:session): session closed for user core
Jan 23 00:07:41.561650 systemd-logind[1529]: Session 4 logged out. Waiting for processes to exit.
Jan 23 00:07:41.562668 systemd[1]: sshd@3-188.245.94.123:22-68.220.241.50:36896.service: Deactivated successfully.
Jan 23 00:07:41.565781 systemd[1]: session-4.scope: Deactivated successfully.
Jan 23 00:07:41.570283 systemd-logind[1529]: Removed session 4.
Jan 23 00:07:41.670347 systemd[1]: Started sshd@4-188.245.94.123:22-68.220.241.50:36900.service - OpenSSH per-connection server daemon (68.220.241.50:36900).
Jan 23 00:07:42.324124 sshd[1773]: Accepted publickey for core from 68.220.241.50 port 36900 ssh2: RSA SHA256:wScRSXm5JHKrAeSxAplDhSGBmu9+62e7CgH0oSNisYE
Jan 23 00:07:42.325509 sshd-session[1773]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 00:07:42.329883 systemd-logind[1529]: New session 5 of user core.
Jan 23 00:07:42.340353 systemd[1]: Started session-5.scope - Session 5 of User core.
Jan 23 00:07:42.677326 sudo[1777]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jan 23 00:07:42.677642 sudo[1777]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 23 00:07:42.690894 sudo[1777]: pam_unix(sudo:session): session closed for user root
Jan 23 00:07:42.792697 sshd[1776]: Connection closed by 68.220.241.50 port 36900
Jan 23 00:07:42.792492 sshd-session[1773]: pam_unix(sshd:session): session closed for user core
Jan 23 00:07:42.798374 systemd-logind[1529]: Session 5 logged out. Waiting for processes to exit.
Jan 23 00:07:42.799371 systemd[1]: sshd@4-188.245.94.123:22-68.220.241.50:36900.service: Deactivated successfully.
Jan 23 00:07:42.803776 systemd[1]: session-5.scope: Deactivated successfully.
Jan 23 00:07:42.806793 systemd-logind[1529]: Removed session 5.
Jan 23 00:07:42.909576 systemd[1]: Started sshd@5-188.245.94.123:22-68.220.241.50:53706.service - OpenSSH per-connection server daemon (68.220.241.50:53706).
Jan 23 00:07:43.568800 sshd[1783]: Accepted publickey for core from 68.220.241.50 port 53706 ssh2: RSA SHA256:wScRSXm5JHKrAeSxAplDhSGBmu9+62e7CgH0oSNisYE
Jan 23 00:07:43.571304 sshd-session[1783]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 00:07:43.577164 systemd-logind[1529]: New session 6 of user core.
Jan 23 00:07:43.593377 systemd[1]: Started session-6.scope - Session 6 of User core.
Jan 23 00:07:43.915248 sudo[1788]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jan 23 00:07:43.915512 sudo[1788]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 23 00:07:43.920422 sudo[1788]: pam_unix(sudo:session): session closed for user root
Jan 23 00:07:43.927026 sudo[1787]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Jan 23 00:07:43.927917 sudo[1787]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 23 00:07:43.939455 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 23 00:07:43.997171 augenrules[1810]: No rules
Jan 23 00:07:43.998918 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 23 00:07:43.999194 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 23 00:07:44.000880 sudo[1787]: pam_unix(sudo:session): session closed for user root
Jan 23 00:07:44.100963 sshd[1786]: Connection closed by 68.220.241.50 port 53706
Jan 23 00:07:44.102205 sshd-session[1783]: pam_unix(sshd:session): session closed for user core
Jan 23 00:07:44.109593 systemd-logind[1529]: Session 6 logged out. Waiting for processes to exit.
Jan 23 00:07:44.110190 systemd[1]: sshd@5-188.245.94.123:22-68.220.241.50:53706.service: Deactivated successfully.
Jan 23 00:07:44.112807 systemd[1]: session-6.scope: Deactivated successfully.
Jan 23 00:07:44.114912 systemd-logind[1529]: Removed session 6.
Jan 23 00:07:44.218644 systemd[1]: Started sshd@6-188.245.94.123:22-68.220.241.50:53712.service - OpenSSH per-connection server daemon (68.220.241.50:53712).
Jan 23 00:07:44.868319 sshd[1819]: Accepted publickey for core from 68.220.241.50 port 53712 ssh2: RSA SHA256:wScRSXm5JHKrAeSxAplDhSGBmu9+62e7CgH0oSNisYE
Jan 23 00:07:44.870678 sshd-session[1819]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 00:07:44.876822 systemd-logind[1529]: New session 7 of user core.
Jan 23 00:07:44.879264 systemd[1]: Started session-7.scope - Session 7 of User core.
Jan 23 00:07:44.903058 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Jan 23 00:07:44.904932 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 23 00:07:45.076210 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 23 00:07:45.093622 (kubelet)[1831]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 23 00:07:45.139521 kubelet[1831]: E0123 00:07:45.139365 1831 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 23 00:07:45.142611 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 23 00:07:45.142815 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 23 00:07:45.143465 systemd[1]: kubelet.service: Consumed 168ms CPU time, 104.8M memory peak.
Jan 23 00:07:45.213187 sudo[1838]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jan 23 00:07:45.213465 sudo[1838]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 23 00:07:45.531939 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jan 23 00:07:45.549543 (dockerd)[1856]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jan 23 00:07:45.785147 dockerd[1856]: time="2026-01-23T00:07:45.783195936Z" level=info msg="Starting up"
Jan 23 00:07:45.788061 dockerd[1856]: time="2026-01-23T00:07:45.787987847Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Jan 23 00:07:45.804194 dockerd[1856]: time="2026-01-23T00:07:45.804135850Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Jan 23 00:07:45.837343 systemd[1]: var-lib-docker-metacopy\x2dcheck1643638516-merged.mount: Deactivated successfully.
Jan 23 00:07:45.847318 dockerd[1856]: time="2026-01-23T00:07:45.847235567Z" level=info msg="Loading containers: start."
Jan 23 00:07:45.859102 kernel: Initializing XFRM netlink socket
Jan 23 00:07:46.087482 systemd-networkd[1418]: docker0: Link UP
Jan 23 00:07:46.092447 dockerd[1856]: time="2026-01-23T00:07:46.092398832Z" level=info msg="Loading containers: done."
Jan 23 00:07:46.106241 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3798523757-merged.mount: Deactivated successfully.
Jan 23 00:07:46.109931 dockerd[1856]: time="2026-01-23T00:07:46.109883460Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jan 23 00:07:46.110040 dockerd[1856]: time="2026-01-23T00:07:46.109974709Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Jan 23 00:07:46.110129 dockerd[1856]: time="2026-01-23T00:07:46.110074559Z" level=info msg="Initializing buildkit"
Jan 23 00:07:46.133746 dockerd[1856]: time="2026-01-23T00:07:46.133699122Z" level=info msg="Completed buildkit initialization"
Jan 23 00:07:46.145582 dockerd[1856]: time="2026-01-23T00:07:46.145328525Z" level=info msg="Daemon has completed initialization"
Jan 23 00:07:46.145582 dockerd[1856]: time="2026-01-23T00:07:46.145498862Z" level=info msg="API listen on /run/docker.sock"
Jan 23 00:07:46.145954 systemd[1]: Started docker.service - Docker Application Container Engine.
Jan 23 00:07:47.204169 update_engine[1532]: I20260123 00:07:47.203521 1532 update_attempter.cc:509] Updating boot flags...
Jan 23 00:07:47.244582 containerd[1545]: time="2026-01-23T00:07:47.244142786Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\""
Jan 23 00:07:47.880799 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1414721431.mount: Deactivated successfully.
Jan 23 00:07:49.059227 containerd[1545]: time="2026-01-23T00:07:49.059161513Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 00:07:49.061120 containerd[1545]: time="2026-01-23T00:07:49.060651276Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.11: active requests=0, bytes read=26442080"
Jan 23 00:07:49.063661 containerd[1545]: time="2026-01-23T00:07:49.063583838Z" level=info msg="ImageCreate event name:\"sha256:58951ea1a0b5de44646ea292c94b9350f33f22d147fccfd84bdc405eaabc442c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 00:07:49.069819 containerd[1545]: time="2026-01-23T00:07:49.069767627Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 00:07:49.070956 containerd[1545]: time="2026-01-23T00:07:49.070918002Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.11\" with image id \"sha256:58951ea1a0b5de44646ea292c94b9350f33f22d147fccfd84bdc405eaabc442c\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\", size \"26438581\" in 1.826716731s"
Jan 23 00:07:49.070956 containerd[1545]: time="2026-01-23T00:07:49.070956925Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\" returns image reference \"sha256:58951ea1a0b5de44646ea292c94b9350f33f22d147fccfd84bdc405eaabc442c\""
Jan 23 00:07:49.071608 containerd[1545]: time="2026-01-23T00:07:49.071577417Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\""
Jan 23 00:07:50.687069 containerd[1545]: time="2026-01-23T00:07:50.686985126Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 00:07:50.688364 containerd[1545]: time="2026-01-23T00:07:50.688294467Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.11: active requests=0, bytes read=22622106"
Jan 23 00:07:50.689307 containerd[1545]: time="2026-01-23T00:07:50.689234700Z" level=info msg="ImageCreate event name:\"sha256:82766e5f2d560b930b7069c03ec1366dc8fdb4a490c3005266d2fdc4ca21c2fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 00:07:50.691766 containerd[1545]: time="2026-01-23T00:07:50.691706931Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 00:07:50.693056 containerd[1545]: time="2026-01-23T00:07:50.692920064Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.11\" with image id \"sha256:82766e5f2d560b930b7069c03ec1366dc8fdb4a490c3005266d2fdc4ca21c2fc\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\", size \"24206567\" in 1.621300724s"
Jan 23 00:07:50.693056 containerd[1545]: time="2026-01-23T00:07:50.692959147Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\" returns image reference \"sha256:82766e5f2d560b930b7069c03ec1366dc8fdb4a490c3005266d2fdc4ca21c2fc\""
Jan 23 00:07:50.693904 containerd[1545]: time="2026-01-23T00:07:50.693834015Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\""
Jan 23 00:07:51.776099 containerd[1545]: time="2026-01-23T00:07:51.775875707Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 00:07:51.777707 containerd[1545]: time="2026-01-23T00:07:51.777675878Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.11: active requests=0, bytes read=17616767"
Jan 23 00:07:51.779100 containerd[1545]: time="2026-01-23T00:07:51.778539460Z" level=info msg="ImageCreate event name:\"sha256:cfa17ff3d66343f03eadbc235264b0615de49cc1f43da12cddba27d80c61f2c6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 00:07:51.781708 containerd[1545]: time="2026-01-23T00:07:51.781684848Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 00:07:51.783518 containerd[1545]: time="2026-01-23T00:07:51.783491339Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.11\" with image id \"sha256:cfa17ff3d66343f03eadbc235264b0615de49cc1f43da12cddba27d80c61f2c6\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\", size \"19201246\" in 1.089614961s"
Jan 23 00:07:51.784248 containerd[1545]: time="2026-01-23T00:07:51.784187549Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\" returns image reference \"sha256:cfa17ff3d66343f03eadbc235264b0615de49cc1f43da12cddba27d80c61f2c6\""
Jan 23 00:07:51.790133 containerd[1545]: time="2026-01-23T00:07:51.790096737Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\""
Jan 23 00:07:52.748720 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount650709985.mount: Deactivated successfully.
Jan 23 00:07:53.044429 containerd[1545]: time="2026-01-23T00:07:53.043334049Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 00:07:53.044724 containerd[1545]: time="2026-01-23T00:07:53.044425519Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.11: active requests=0, bytes read=27558750"
Jan 23 00:07:53.045637 containerd[1545]: time="2026-01-23T00:07:53.045348937Z" level=info msg="ImageCreate event name:\"sha256:dcdb790dc2bfe6e0b86f702c7f336a38eaef34f6370eb6ff68f4e5b03ed4d425\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 00:07:53.047863 containerd[1545]: time="2026-01-23T00:07:53.047821655Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 00:07:53.048578 containerd[1545]: time="2026-01-23T00:07:53.048328007Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" with image id \"sha256:dcdb790dc2bfe6e0b86f702c7f336a38eaef34f6370eb6ff68f4e5b03ed4d425\", repo tag \"registry.k8s.io/kube-proxy:v1.32.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\", size \"27557743\" in 1.258190186s"
Jan 23 00:07:53.048578 containerd[1545]: time="2026-01-23T00:07:53.048378490Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\" returns image reference \"sha256:dcdb790dc2bfe6e0b86f702c7f336a38eaef34f6370eb6ff68f4e5b03ed4d425\""
Jan 23 00:07:53.048845 containerd[1545]: time="2026-01-23T00:07:53.048817758Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Jan 23 00:07:53.648241 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1869240625.mount: Deactivated successfully.
Jan 23 00:07:54.373966 containerd[1545]: time="2026-01-23T00:07:54.373907746Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 00:07:54.375308 containerd[1545]: time="2026-01-23T00:07:54.375262227Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951714"
Jan 23 00:07:54.376401 containerd[1545]: time="2026-01-23T00:07:54.376362093Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 00:07:54.379257 containerd[1545]: time="2026-01-23T00:07:54.379200342Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 00:07:54.380409 containerd[1545]: time="2026-01-23T00:07:54.380281447Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.331428246s"
Jan 23 00:07:54.380409 containerd[1545]: time="2026-01-23T00:07:54.380311249Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\""
Jan 23 00:07:54.381160 containerd[1545]: time="2026-01-23T00:07:54.380794677Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jan 23 00:07:54.901221 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount428429863.mount: Deactivated successfully.
Jan 23 00:07:54.907740 containerd[1545]: time="2026-01-23T00:07:54.907670281Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 23 00:07:54.909015 containerd[1545]: time="2026-01-23T00:07:54.908984479Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268723"
Jan 23 00:07:54.910356 containerd[1545]: time="2026-01-23T00:07:54.909321459Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 23 00:07:54.911135 containerd[1545]: time="2026-01-23T00:07:54.911105406Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 23 00:07:54.911997 containerd[1545]: time="2026-01-23T00:07:54.911966017Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 531.145899ms"
Jan 23 00:07:54.911997 containerd[1545]: time="2026-01-23T00:07:54.911995419Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Jan 23 00:07:54.912855 containerd[1545]: time="2026-01-23T00:07:54.912830109Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Jan 23 00:07:55.152968 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Jan 23 00:07:55.157060 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 23 00:07:55.327008 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 23 00:07:55.336680 (kubelet)[2221]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 23 00:07:55.384278 kubelet[2221]: E0123 00:07:55.384192 2221 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 23 00:07:55.387752 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 23 00:07:55.388208 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 23 00:07:55.389145 systemd[1]: kubelet.service: Consumed 160ms CPU time, 107.2M memory peak.
Jan 23 00:07:55.521691 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1635369726.mount: Deactivated successfully.
Jan 23 00:07:57.358518 containerd[1545]: time="2026-01-23T00:07:57.358461827Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 00:07:57.359698 containerd[1545]: time="2026-01-23T00:07:57.359625804Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67943239"
Jan 23 00:07:57.360428 containerd[1545]: time="2026-01-23T00:07:57.360384322Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 00:07:57.363755 containerd[1545]: time="2026-01-23T00:07:57.363654602Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 00:07:57.365890 containerd[1545]: time="2026-01-23T00:07:57.365835830Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 2.452970079s"
Jan 23 00:07:57.366354 containerd[1545]: time="2026-01-23T00:07:57.366039240Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\""
Jan 23 00:08:03.409296 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 23 00:08:03.410133 systemd[1]: kubelet.service: Consumed 160ms CPU time, 107.2M memory peak.
Jan 23 00:08:03.414404 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 23 00:08:03.450465 systemd[1]: Reload requested from client PID 2309 ('systemctl') (unit session-7.scope)...
Jan 23 00:08:03.450481 systemd[1]: Reloading...
Jan 23 00:08:03.575182 zram_generator::config[2356]: No configuration found.
Jan 23 00:08:03.755906 systemd[1]: Reloading finished in 305 ms.
Jan 23 00:08:03.816691 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jan 23 00:08:03.816795 systemd[1]: kubelet.service: Failed with result 'signal'.
Jan 23 00:08:03.817069 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 23 00:08:03.817153 systemd[1]: kubelet.service: Consumed 102ms CPU time, 95M memory peak.
Jan 23 00:08:03.819961 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 23 00:08:03.964542 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 23 00:08:03.976426 (kubelet)[2401]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 23 00:08:04.024116 kubelet[2401]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 23 00:08:04.024622 kubelet[2401]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jan 23 00:08:04.024680 kubelet[2401]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 23 00:08:04.024952 kubelet[2401]: I0123 00:08:04.024910 2401 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 23 00:08:04.464703 kubelet[2401]: I0123 00:08:04.464648 2401 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Jan 23 00:08:04.466122 kubelet[2401]: I0123 00:08:04.464972 2401 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 23 00:08:04.466122 kubelet[2401]: I0123 00:08:04.465768 2401 server.go:954] "Client rotation is on, will bootstrap in background"
Jan 23 00:08:04.505124 kubelet[2401]: E0123 00:08:04.505034 2401 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://188.245.94.123:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 188.245.94.123:6443: connect: connection refused" logger="UnhandledError"
Jan 23 00:08:04.506417 kubelet[2401]: I0123 00:08:04.506371 2401 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 23 00:08:04.514668 kubelet[2401]: I0123 00:08:04.514637 2401 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jan 23 00:08:04.517843 kubelet[2401]: I0123 00:08:04.517812 2401 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 23 00:08:04.518940 kubelet[2401]: I0123 00:08:04.518893 2401 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 23 00:08:04.519220 kubelet[2401]: I0123 00:08:04.519013 2401 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459-2-2-n-105ad3c88f","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 23 00:08:04.519428 kubelet[2401]: I0123 00:08:04.519415 2401 topology_manager.go:138] "Creating topology manager with none policy"
Jan 23 00:08:04.519480 kubelet[2401]: I0123 00:08:04.519472 2401 container_manager_linux.go:304] "Creating device plugin manager"
Jan 23 00:08:04.519778 kubelet[2401]: I0123 00:08:04.519758 2401 state_mem.go:36] "Initialized new in-memory state store"
Jan 23 00:08:04.523379 kubelet[2401]: I0123 00:08:04.523356 2401 kubelet.go:446] "Attempting to sync node with API server"
Jan 23 00:08:04.523479 kubelet[2401]: I0123 00:08:04.523469 2401 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 23 00:08:04.523546 kubelet[2401]: I0123 00:08:04.523538 2401 kubelet.go:352] "Adding apiserver pod source"
Jan 23 00:08:04.523601 kubelet[2401]: I0123 00:08:04.523593 2401 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 23 00:08:04.527879 kubelet[2401]: W0123 00:08:04.527816 2401 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://188.245.94.123:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459-2-2-n-105ad3c88f&limit=500&resourceVersion=0": dial tcp 188.245.94.123:6443: connect: connection refused
Jan 23 00:08:04.527956 kubelet[2401]: E0123 00:08:04.527892 2401 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://188.245.94.123:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459-2-2-n-105ad3c88f&limit=500&resourceVersion=0\": dial tcp 188.245.94.123:6443: connect: connection refused" logger="UnhandledError"
Jan 23 00:08:04.529323 kubelet[2401]: W0123 00:08:04.529239 2401 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://188.245.94.123:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 188.245.94.123:6443: connect: connection refused
Jan 23 00:08:04.529323 kubelet[2401]: E0123 00:08:04.529301 2401 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://188.245.94.123:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 188.245.94.123:6443: connect: connection refused" logger="UnhandledError"
Jan 23 00:08:04.529410 kubelet[2401]: I0123 00:08:04.529401 2401 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1"
Jan 23 00:08:04.530289 kubelet[2401]: I0123 00:08:04.530006 2401 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 23 00:08:04.530289 kubelet[2401]: W0123 00:08:04.530157 2401 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jan 23 00:08:04.532191 kubelet[2401]: I0123 00:08:04.532161 2401 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jan 23 00:08:04.532256 kubelet[2401]: I0123 00:08:04.532210 2401 server.go:1287] "Started kubelet"
Jan 23 00:08:04.539200 kubelet[2401]: E0123 00:08:04.538938 2401 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://188.245.94.123:6443/api/v1/namespaces/default/events\": dial tcp 188.245.94.123:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4459-2-2-n-105ad3c88f.188d33873b22601f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459-2-2-n-105ad3c88f,UID:ci-4459-2-2-n-105ad3c88f,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459-2-2-n-105ad3c88f,},FirstTimestamp:2026-01-23 00:08:04.532183071 +0000 UTC m=+0.551492305,LastTimestamp:2026-01-23 00:08:04.532183071 +0000 UTC m=+0.551492305,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459-2-2-n-105ad3c88f,}"
Jan 23 00:08:04.540149 kubelet[2401]: I0123 00:08:04.540124 2401 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 23 00:08:04.544109 kubelet[2401]: I0123 00:08:04.543450 2401 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Jan 23 00:08:04.546047 kubelet[2401]: I0123 00:08:04.546014 2401 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jan 23 00:08:04.546237 kubelet[2401]: I0123 00:08:04.546216 2401 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jan 23 00:08:04.546378 kubelet[2401]: E0123 00:08:04.546346 2401 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459-2-2-n-105ad3c88f\" not found"
Jan 23 00:08:04.547300 kubelet[2401]: I0123 00:08:04.547279 2401 server.go:479] "Adding debug handlers to kubelet server"
Jan 23 00:08:04.548315 kubelet[2401]: I0123 00:08:04.548262 2401 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 23 00:08:04.548423 kubelet[2401]: I0123 00:08:04.548394 2401 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jan 23 00:08:04.548457 kubelet[2401]: I0123 00:08:04.548449 2401 reconciler.go:26] "Reconciler: start to sync state"
Jan 23 00:08:04.548667 kubelet[2401]: I0123 00:08:04.548650 2401 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 23 00:08:04.549325 kubelet[2401]: E0123 00:08:04.549292 2401 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://188.245.94.123:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-2-n-105ad3c88f?timeout=10s\": dial tcp 188.245.94.123:6443: connect: connection refused" interval="200ms"
Jan 23 00:08:04.549722 kubelet[2401]: W0123 00:08:04.549695 2401 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://188.245.94.123:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 188.245.94.123:6443: connect: connection refused
Jan 23 00:08:04.549837 kubelet[2401]: E0123 00:08:04.549816 2401 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://188.245.94.123:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 188.245.94.123:6443: connect: connection refused" logger="UnhandledError"
Jan 23 00:08:04.550481 kubelet[2401]: I0123 00:08:04.550453 2401 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 23 00:08:04.551437 kubelet[2401]: E0123 00:08:04.551402 2401 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 23 00:08:04.551877 kubelet[2401]: I0123 00:08:04.551844 2401 factory.go:221] Registration of the containerd container factory successfully
Jan 23 00:08:04.552021 kubelet[2401]: I0123 00:08:04.552001 2401 factory.go:221] Registration of the systemd container factory successfully
Jan 23 00:08:04.562243 kubelet[2401]: I0123 00:08:04.562179 2401 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 23 00:08:04.563260 kubelet[2401]: I0123 00:08:04.563208 2401 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 23 00:08:04.563260 kubelet[2401]: I0123 00:08:04.563240 2401 status_manager.go:227] "Starting to sync pod status with apiserver"
Jan 23 00:08:04.563260 kubelet[2401]: I0123 00:08:04.563264 2401 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jan 23 00:08:04.563391 kubelet[2401]: I0123 00:08:04.563273 2401 kubelet.go:2382] "Starting kubelet main sync loop"
Jan 23 00:08:04.563391 kubelet[2401]: E0123 00:08:04.563313 2401 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 23 00:08:04.575282 kubelet[2401]: W0123 00:08:04.575223 2401 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://188.245.94.123:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 188.245.94.123:6443: connect: connection refused
Jan 23 00:08:04.575390 kubelet[2401]: E0123 00:08:04.575297 2401 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://188.245.94.123:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 188.245.94.123:6443: connect: connection refused" logger="UnhandledError"
Jan 23 00:08:04.582945 kubelet[2401]: I0123 00:08:04.582928 2401 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jan 23 00:08:04.583126 kubelet[2401]: I0123 00:08:04.583069 2401 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jan 23 00:08:04.583290 kubelet[2401]: I0123 00:08:04.583222 2401 state_mem.go:36] "Initialized new in-memory state store"
Jan 23 00:08:04.585228 kubelet[2401]: I0123 00:08:04.585206 2401 policy_none.go:49] "None policy: Start"
Jan 23 00:08:04.585382 kubelet[2401]: I0123 00:08:04.585316 2401 memory_manager.go:186] "Starting memorymanager" policy="None"
Jan 23 00:08:04.585382 kubelet[2401]: I0123 00:08:04.585333 2401 state_mem.go:35] "Initializing new in-memory state store"
Jan 23 00:08:04.593067 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jan 23 00:08:04.605067 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jan 23 00:08:04.609235 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jan 23 00:08:04.629982 kubelet[2401]: I0123 00:08:04.629192 2401 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 23 00:08:04.629982 kubelet[2401]: I0123 00:08:04.629503 2401 eviction_manager.go:189] "Eviction manager: starting control loop"
Jan 23 00:08:04.629982 kubelet[2401]: I0123 00:08:04.629530 2401 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 23 00:08:04.630326 kubelet[2401]: I0123 00:08:04.630025 2401 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 23 00:08:04.632600 kubelet[2401]: E0123 00:08:04.632570 2401 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jan 23 00:08:04.633108 kubelet[2401]: E0123 00:08:04.633058 2401 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4459-2-2-n-105ad3c88f\" not found"
Jan 23 00:08:04.682286 systemd[1]: Created slice kubepods-burstable-pode0cfe3c9b8d257530672feb004f7b876.slice - libcontainer container kubepods-burstable-pode0cfe3c9b8d257530672feb004f7b876.slice.
Jan 23 00:08:04.701710 kubelet[2401]: E0123 00:08:04.701642 2401 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-2-n-105ad3c88f\" not found" node="ci-4459-2-2-n-105ad3c88f"
Jan 23 00:08:04.705174 systemd[1]: Created slice kubepods-burstable-pod1f78856ae7c4bfed0e5cd748be12d1fa.slice - libcontainer container kubepods-burstable-pod1f78856ae7c4bfed0e5cd748be12d1fa.slice.
Jan 23 00:08:04.708741 kubelet[2401]: E0123 00:08:04.708688 2401 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-2-n-105ad3c88f\" not found" node="ci-4459-2-2-n-105ad3c88f"
Jan 23 00:08:04.711176 systemd[1]: Created slice kubepods-burstable-pode579d9ece7327074ab7ec917159f9048.slice - libcontainer container kubepods-burstable-pode579d9ece7327074ab7ec917159f9048.slice.
Jan 23 00:08:04.713027 kubelet[2401]: E0123 00:08:04.712980 2401 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-2-n-105ad3c88f\" not found" node="ci-4459-2-2-n-105ad3c88f"
Jan 23 00:08:04.734223 kubelet[2401]: I0123 00:08:04.733278 2401 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-2-n-105ad3c88f"
Jan 23 00:08:04.734636 kubelet[2401]: E0123 00:08:04.734500 2401 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://188.245.94.123:6443/api/v1/nodes\": dial tcp 188.245.94.123:6443: connect: connection refused" node="ci-4459-2-2-n-105ad3c88f"
Jan 23 00:08:04.750229 kubelet[2401]: I0123 00:08:04.750053 2401 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1f78856ae7c4bfed0e5cd748be12d1fa-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459-2-2-n-105ad3c88f\" (UID: \"1f78856ae7c4bfed0e5cd748be12d1fa\") " pod="kube-system/kube-controller-manager-ci-4459-2-2-n-105ad3c88f"
Jan 23 00:08:04.750229 kubelet[2401]: I0123 00:08:04.750151 2401 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e579d9ece7327074ab7ec917159f9048-kubeconfig\") pod \"kube-scheduler-ci-4459-2-2-n-105ad3c88f\" (UID: \"e579d9ece7327074ab7ec917159f9048\") " pod="kube-system/kube-scheduler-ci-4459-2-2-n-105ad3c88f"
Jan 23 00:08:04.750229 kubelet[2401]: I0123 00:08:04.750198 2401 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e0cfe3c9b8d257530672feb004f7b876-k8s-certs\") pod \"kube-apiserver-ci-4459-2-2-n-105ad3c88f\" (UID: \"e0cfe3c9b8d257530672feb004f7b876\") " pod="kube-system/kube-apiserver-ci-4459-2-2-n-105ad3c88f"
Jan 23 00:08:04.750229 kubelet[2401]: I0123 00:08:04.750230 2401 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1f78856ae7c4bfed0e5cd748be12d1fa-flexvolume-dir\") pod \"kube-controller-manager-ci-4459-2-2-n-105ad3c88f\" (UID: \"1f78856ae7c4bfed0e5cd748be12d1fa\") " pod="kube-system/kube-controller-manager-ci-4459-2-2-n-105ad3c88f"
Jan 23 00:08:04.750656 kubelet[2401]: I0123 00:08:04.750260 2401 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1f78856ae7c4bfed0e5cd748be12d1fa-ca-certs\") pod \"kube-controller-manager-ci-4459-2-2-n-105ad3c88f\" (UID: \"1f78856ae7c4bfed0e5cd748be12d1fa\") " pod="kube-system/kube-controller-manager-ci-4459-2-2-n-105ad3c88f"
Jan 23 00:08:04.750656 kubelet[2401]: I0123 00:08:04.750303 2401 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1f78856ae7c4bfed0e5cd748be12d1fa-k8s-certs\") pod \"kube-controller-manager-ci-4459-2-2-n-105ad3c88f\" (UID: \"1f78856ae7c4bfed0e5cd748be12d1fa\") " pod="kube-system/kube-controller-manager-ci-4459-2-2-n-105ad3c88f"
Jan 23 00:08:04.750656 kubelet[2401]: I0123 00:08:04.750331 2401 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1f78856ae7c4bfed0e5cd748be12d1fa-kubeconfig\") pod \"kube-controller-manager-ci-4459-2-2-n-105ad3c88f\" (UID: \"1f78856ae7c4bfed0e5cd748be12d1fa\") " pod="kube-system/kube-controller-manager-ci-4459-2-2-n-105ad3c88f"
Jan 23 00:08:04.750656 kubelet[2401]: I0123 00:08:04.750359 2401 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e0cfe3c9b8d257530672feb004f7b876-ca-certs\") pod \"kube-apiserver-ci-4459-2-2-n-105ad3c88f\" (UID: \"e0cfe3c9b8d257530672feb004f7b876\") " pod="kube-system/kube-apiserver-ci-4459-2-2-n-105ad3c88f"
Jan 23 00:08:04.750656 kubelet[2401]: I0123 00:08:04.750386 2401 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e0cfe3c9b8d257530672feb004f7b876-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459-2-2-n-105ad3c88f\" (UID: \"e0cfe3c9b8d257530672feb004f7b876\") " pod="kube-system/kube-apiserver-ci-4459-2-2-n-105ad3c88f"
Jan 23 00:08:04.751121 kubelet[2401]: E0123 00:08:04.750597 2401 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://188.245.94.123:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-2-n-105ad3c88f?timeout=10s\": dial tcp 188.245.94.123:6443: connect: connection refused" interval="400ms"
Jan 23 00:08:04.937813 kubelet[2401]: I0123 00:08:04.937732 2401 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-2-n-105ad3c88f"
Jan 23 00:08:04.938265 kubelet[2401]: E0123 00:08:04.938223 2401 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://188.245.94.123:6443/api/v1/nodes\": dial tcp 188.245.94.123:6443: connect: connection refused" node="ci-4459-2-2-n-105ad3c88f"
Jan 23 00:08:05.004202 containerd[1545]: time="2026-01-23T00:08:05.003958992Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459-2-2-n-105ad3c88f,Uid:e0cfe3c9b8d257530672feb004f7b876,Namespace:kube-system,Attempt:0,}"
Jan 23 00:08:05.010658 containerd[1545]: time="2026-01-23T00:08:05.010326779Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459-2-2-n-105ad3c88f,Uid:1f78856ae7c4bfed0e5cd748be12d1fa,Namespace:kube-system,Attempt:0,}"
Jan 23 00:08:05.015021 containerd[1545]: time="2026-01-23T00:08:05.014898233Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459-2-2-n-105ad3c88f,Uid:e579d9ece7327074ab7ec917159f9048,Namespace:kube-system,Attempt:0,}"
Jan 23 00:08:05.048213 containerd[1545]: time="2026-01-23T00:08:05.048174610Z" level=info msg="connecting to shim 38232f309be9beaa75ed3caf85fc776aca83c44ca7ffe2ec6252d4f64a57b462" address="unix:///run/containerd/s/56dcfce34ce9ece7c3702655efae84b7215032ae686bcf035085fcd647cc0388" namespace=k8s.io protocol=ttrpc version=3
Jan 23 00:08:05.059032 containerd[1545]: time="2026-01-23T00:08:05.058517993Z" level=info msg="connecting to shim e3bcd1e18fdd4dfa0428261502716a933b7d63e868273f74837a6ce05655f183" address="unix:///run/containerd/s/05427b624b87960483a7b84c1d4d200af64466c890e5e2996448b6fae24f7d74" namespace=k8s.io protocol=ttrpc version=3
Jan 23 00:08:05.068124 containerd[1545]: time="2026-01-23T00:08:05.068048873Z" level=info msg="connecting to shim a6dbc0e162577cfdc40b228298307ea6dee6c54935369e21ec9b443842267fc5" address="unix:///run/containerd/s/edb0ba4b75b7e1cf250330144df889027f6de3776590a95832e813adad21683f" namespace=k8s.io protocol=ttrpc version=3
Jan 23 00:08:05.099276 systemd[1]: Started cri-containerd-38232f309be9beaa75ed3caf85fc776aca83c44ca7ffe2ec6252d4f64a57b462.scope - libcontainer container 38232f309be9beaa75ed3caf85fc776aca83c44ca7ffe2ec6252d4f64a57b462.
Jan 23 00:08:05.106425 systemd[1]: Started cri-containerd-e3bcd1e18fdd4dfa0428261502716a933b7d63e868273f74837a6ce05655f183.scope - libcontainer container e3bcd1e18fdd4dfa0428261502716a933b7d63e868273f74837a6ce05655f183.
Jan 23 00:08:05.129282 systemd[1]: Started cri-containerd-a6dbc0e162577cfdc40b228298307ea6dee6c54935369e21ec9b443842267fc5.scope - libcontainer container a6dbc0e162577cfdc40b228298307ea6dee6c54935369e21ec9b443842267fc5.
Jan 23 00:08:05.151945 kubelet[2401]: E0123 00:08:05.151799 2401 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://188.245.94.123:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-2-n-105ad3c88f?timeout=10s\": dial tcp 188.245.94.123:6443: connect: connection refused" interval="800ms"
Jan 23 00:08:05.187247 containerd[1545]: time="2026-01-23T00:08:05.187203849Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459-2-2-n-105ad3c88f,Uid:1f78856ae7c4bfed0e5cd748be12d1fa,Namespace:kube-system,Attempt:0,} returns sandbox id \"e3bcd1e18fdd4dfa0428261502716a933b7d63e868273f74837a6ce05655f183\""
Jan 23 00:08:05.189113 containerd[1545]: time="2026-01-23T00:08:05.189065864Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459-2-2-n-105ad3c88f,Uid:e0cfe3c9b8d257530672feb004f7b876,Namespace:kube-system,Attempt:0,} returns sandbox id \"38232f309be9beaa75ed3caf85fc776aca83c44ca7ffe2ec6252d4f64a57b462\""
Jan 23 00:08:05.191920 containerd[1545]: time="2026-01-23T00:08:05.191882307Z" level=info msg="CreateContainer within sandbox \"e3bcd1e18fdd4dfa0428261502716a933b7d63e868273f74837a6ce05655f183\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Jan 23 00:08:05.193918 containerd[1545]: time="2026-01-23T00:08:05.193871245Z" level=info msg="CreateContainer within sandbox \"38232f309be9beaa75ed3caf85fc776aca83c44ca7ffe2ec6252d4f64a57b462\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Jan 23 00:08:05.202727 containerd[1545]: time="2026-01-23T00:08:05.202654823Z" level=info msg="Container 67884217fdce7d4d57c0179593ee273ddb7aac6a03384b4b882a72b5199b66d8: CDI devices from CRI Config.CDIDevices: []"
Jan 23 00:08:05.216947 containerd[1545]: time="2026-01-23T00:08:05.216826879Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459-2-2-n-105ad3c88f,Uid:e579d9ece7327074ab7ec917159f9048,Namespace:kube-system,Attempt:0,} returns sandbox id \"a6dbc0e162577cfdc40b228298307ea6dee6c54935369e21ec9b443842267fc5\""
Jan 23 00:08:05.221169 containerd[1545]: time="2026-01-23T00:08:05.220497586Z" level=info msg="CreateContainer within sandbox \"a6dbc0e162577cfdc40b228298307ea6dee6c54935369e21ec9b443842267fc5\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Jan 23 00:08:05.221169 containerd[1545]: time="2026-01-23T00:08:05.221017442Z" level=info msg="CreateContainer within sandbox \"e3bcd1e18fdd4dfa0428261502716a933b7d63e868273f74837a6ce05655f183\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"67884217fdce7d4d57c0179593ee273ddb7aac6a03384b4b882a72b5199b66d8\""
Jan 23 00:08:05.221742 containerd[1545]: time="2026-01-23T00:08:05.221685821Z" level=info msg="StartContainer for \"67884217fdce7d4d57c0179593ee273ddb7aac6a03384b4b882a72b5199b66d8\""
Jan 23 00:08:05.223476 containerd[1545]: time="2026-01-23T00:08:05.223025301Z" level=info msg="connecting to shim 67884217fdce7d4d57c0179593ee273ddb7aac6a03384b4b882a72b5199b66d8" address="unix:///run/containerd/s/05427b624b87960483a7b84c1d4d200af64466c890e5e2996448b6fae24f7d74" protocol=ttrpc version=3
Jan 23 00:08:05.230644 containerd[1545]: time="2026-01-23T00:08:05.230603643Z" level=info msg="Container b63b0d8ceab8dfa14a02fc910055e0f340b36d4d6de22a4164c845562423551a: CDI devices from CRI Config.CDIDevices: []"
Jan 23 00:08:05.233505 containerd[1545]: time="2026-01-23T00:08:05.233271801Z" level=info msg="Container d4f115c3159485a2fc584bb80ff1b8bde970c0e0edc33c90aa3730b4e51d7ec7: CDI devices from CRI Config.CDIDevices: []"
Jan 23 00:08:05.241875 containerd[1545]: time="2026-01-23T00:08:05.241831532Z" level=info msg="CreateContainer within sandbox \"38232f309be9beaa75ed3caf85fc776aca83c44ca7ffe2ec6252d4f64a57b462\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"b63b0d8ceab8dfa14a02fc910055e0f340b36d4d6de22a4164c845562423551a\""
Jan 23 00:08:05.243827 containerd[1545]: time="2026-01-23T00:08:05.243782150Z" level=info msg="StartContainer for \"b63b0d8ceab8dfa14a02fc910055e0f340b36d4d6de22a4164c845562423551a\""
Jan 23 00:08:05.244468 containerd[1545]: time="2026-01-23T00:08:05.244430529Z" level=info msg="CreateContainer within sandbox \"a6dbc0e162577cfdc40b228298307ea6dee6c54935369e21ec9b443842267fc5\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"d4f115c3159485a2fc584bb80ff1b8bde970c0e0edc33c90aa3730b4e51d7ec7\""
Jan 23 00:08:05.244929 containerd[1545]: time="2026-01-23T00:08:05.244868422Z" level=info msg="connecting to shim b63b0d8ceab8dfa14a02fc910055e0f340b36d4d6de22a4164c845562423551a" address="unix:///run/containerd/s/56dcfce34ce9ece7c3702655efae84b7215032ae686bcf035085fcd647cc0388" protocol=ttrpc version=3
Jan 23 00:08:05.245439 containerd[1545]: time="2026-01-23T00:08:05.245416198Z" level=info msg="StartContainer for \"d4f115c3159485a2fc584bb80ff1b8bde970c0e0edc33c90aa3730b4e51d7ec7\""
Jan 23 00:08:05.247301 systemd[1]: Started cri-containerd-67884217fdce7d4d57c0179593ee273ddb7aac6a03384b4b882a72b5199b66d8.scope - libcontainer container 67884217fdce7d4d57c0179593ee273ddb7aac6a03384b4b882a72b5199b66d8.
Jan 23 00:08:05.249238 containerd[1545]: time="2026-01-23T00:08:05.248953541Z" level=info msg="connecting to shim d4f115c3159485a2fc584bb80ff1b8bde970c0e0edc33c90aa3730b4e51d7ec7" address="unix:///run/containerd/s/edb0ba4b75b7e1cf250330144df889027f6de3776590a95832e813adad21683f" protocol=ttrpc version=3
Jan 23 00:08:05.272449 systemd[1]: Started cri-containerd-b63b0d8ceab8dfa14a02fc910055e0f340b36d4d6de22a4164c845562423551a.scope - libcontainer container b63b0d8ceab8dfa14a02fc910055e0f340b36d4d6de22a4164c845562423551a.
Jan 23 00:08:05.281409 systemd[1]: Started cri-containerd-d4f115c3159485a2fc584bb80ff1b8bde970c0e0edc33c90aa3730b4e51d7ec7.scope - libcontainer container d4f115c3159485a2fc584bb80ff1b8bde970c0e0edc33c90aa3730b4e51d7ec7.
Jan 23 00:08:05.347920 kubelet[2401]: I0123 00:08:05.347891 2401 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-2-n-105ad3c88f"
Jan 23 00:08:05.349157 containerd[1545]: time="2026-01-23T00:08:05.349006397Z" level=info msg="StartContainer for \"67884217fdce7d4d57c0179593ee273ddb7aac6a03384b4b882a72b5199b66d8\" returns successfully"
Jan 23 00:08:05.349981 kubelet[2401]: E0123 00:08:05.349952 2401 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://188.245.94.123:6443/api/v1/nodes\": dial tcp 188.245.94.123:6443: connect: connection refused" node="ci-4459-2-2-n-105ad3c88f"
Jan 23 00:08:05.353666 containerd[1545]: time="2026-01-23T00:08:05.353636773Z" level=info msg="StartContainer for \"b63b0d8ceab8dfa14a02fc910055e0f340b36d4d6de22a4164c845562423551a\" returns successfully"
Jan 23 00:08:05.373322 containerd[1545]: time="2026-01-23T00:08:05.373280990Z" level=info msg="StartContainer for \"d4f115c3159485a2fc584bb80ff1b8bde970c0e0edc33c90aa3730b4e51d7ec7\" returns successfully"
Jan 23 00:08:05.422283 kubelet[2401]: W0123 00:08:05.422223 2401 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://188.245.94.123:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459-2-2-n-105ad3c88f&limit=500&resourceVersion=0": dial tcp 188.245.94.123:6443: connect: connection refused
Jan 23 00:08:05.423038 kubelet[2401]: E0123 00:08:05.422997 2401 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://188.245.94.123:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459-2-2-n-105ad3c88f&limit=500&resourceVersion=0\": dial tcp 188.245.94.123:6443: connect: connection refused" logger="UnhandledError"
Jan 23 00:08:05.583553 kubelet[2401]: E0123 00:08:05.583424 2401 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-2-n-105ad3c88f\" not found" node="ci-4459-2-2-n-105ad3c88f"
Jan 23 00:08:05.586358 kubelet[2401]: E0123 00:08:05.586281 2401 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-2-n-105ad3c88f\" not found" node="ci-4459-2-2-n-105ad3c88f"
Jan 23 00:08:05.589680 kubelet[2401]: E0123 00:08:05.589648 2401 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-2-n-105ad3c88f\" not found" node="ci-4459-2-2-n-105ad3c88f"
Jan 23 00:08:06.154422 kubelet[2401]: I0123 00:08:06.154268 2401 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-2-n-105ad3c88f"
Jan 23 00:08:06.591537 kubelet[2401]: E0123 00:08:06.591432 2401 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-2-n-105ad3c88f\" not found" node="ci-4459-2-2-n-105ad3c88f"
Jan 23 00:08:06.592149 kubelet[2401]: E0123 00:08:06.591837 2401 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-2-n-105ad3c88f\" not found" node="ci-4459-2-2-n-105ad3c88f"
Jan 23 00:08:07.244106 kubelet[2401]: E0123 00:08:07.242905 2401 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4459-2-2-n-105ad3c88f\" not found" node="ci-4459-2-2-n-105ad3c88f"
Jan 23 00:08:07.248117 kubelet[2401]: I0123 00:08:07.247523 2401 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459-2-2-n-105ad3c88f"
Jan 23 00:08:07.248117 kubelet[2401]: E0123 00:08:07.247562 2401 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4459-2-2-n-105ad3c88f\": node \"ci-4459-2-2-n-105ad3c88f\" not found"
Jan 23 00:08:07.305630 kubelet[2401]: E0123 00:08:07.305588 2401 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459-2-2-n-105ad3c88f\" not found"
Jan 23 00:08:07.405836 kubelet[2401]: E0123 00:08:07.405792 2401 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459-2-2-n-105ad3c88f\" not found"
Jan 23 00:08:07.506550 kubelet[2401]: E0123 00:08:07.506426 2401 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459-2-2-n-105ad3c88f\" not found"
Jan 23 00:08:07.607224 kubelet[2401]: E0123 00:08:07.607188 2401 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459-2-2-n-105ad3c88f\" not found"
Jan 23 00:08:07.647787 kubelet[2401]: I0123 00:08:07.647561 2401 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459-2-2-n-105ad3c88f"
Jan 23 00:08:07.658712 kubelet[2401]: E0123 00:08:07.658681 2401 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459-2-2-n-105ad3c88f\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4459-2-2-n-105ad3c88f"
Jan 23 00:08:07.659071 kubelet[2401]: I0123 00:08:07.658854 2401 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459-2-2-n-105ad3c88f"
Jan 23 00:08:07.662141 kubelet[2401]: E0123 00:08:07.662116 2401 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459-2-2-n-105ad3c88f\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4459-2-2-n-105ad3c88f"
Jan 23 00:08:07.662398 kubelet[2401]: I0123 00:08:07.662213 2401 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459-2-2-n-105ad3c88f"
Jan 23 00:08:07.664068 kubelet[2401]: E0123 00:08:07.664047 2401 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459-2-2-n-105ad3c88f\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4459-2-2-n-105ad3c88f"
Jan 23 00:08:08.529922 kubelet[2401]: I0123 00:08:08.529883 2401 apiserver.go:52] "Watching apiserver"
Jan 23 00:08:08.549487 kubelet[2401]: I0123 00:08:08.549446 2401 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Jan 23 00:08:09.425336 systemd[1]: Reload requested from client PID 2671 ('systemctl') (unit session-7.scope)...
Jan 23 00:08:09.425355 systemd[1]: Reloading...
Jan 23 00:08:09.579121 zram_generator::config[2721]: No configuration found.
Jan 23 00:08:09.791403 systemd[1]: Reloading finished in 365 ms.
Jan 23 00:08:09.822110 kubelet[2401]: I0123 00:08:09.822011 2401 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 23 00:08:09.822369 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 23 00:08:09.845846 systemd[1]: kubelet.service: Deactivated successfully.
Jan 23 00:08:09.848202 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 23 00:08:09.848266 systemd[1]: kubelet.service: Consumed 956ms CPU time, 128.1M memory peak.
Jan 23 00:08:09.854699 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 23 00:08:10.023863 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 23 00:08:10.031613 (kubelet)[2760]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 23 00:08:10.081786 kubelet[2760]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 23 00:08:10.081786 kubelet[2760]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jan 23 00:08:10.081786 kubelet[2760]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 23 00:08:10.082146 kubelet[2760]: I0123 00:08:10.081775 2760 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 23 00:08:10.093150 kubelet[2760]: I0123 00:08:10.092672 2760 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Jan 23 00:08:10.093150 kubelet[2760]: I0123 00:08:10.092702 2760 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 23 00:08:10.093150 kubelet[2760]: I0123 00:08:10.092964 2760 server.go:954] "Client rotation is on, will bootstrap in background"
Jan 23 00:08:10.094470 kubelet[2760]: I0123 00:08:10.094438 2760 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jan 23 00:08:10.097205 kubelet[2760]: I0123 00:08:10.097118 2760 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 23 00:08:10.102771 kubelet[2760]: I0123 00:08:10.102725 2760 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jan 23 00:08:10.107120 kubelet[2760]: I0123 00:08:10.106998 2760 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 23 00:08:10.107947 kubelet[2760]: I0123 00:08:10.107871 2760 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 23 00:08:10.108192 kubelet[2760]: I0123 00:08:10.107916 2760 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459-2-2-n-105ad3c88f","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 23 00:08:10.108192 kubelet[2760]: I0123 00:08:10.108101 2760 topology_manager.go:138] "Creating topology manager with none policy"
Jan 23 00:08:10.108192 kubelet[2760]: I0123 00:08:10.108111 2760 container_manager_linux.go:304] "Creating device plugin manager"
Jan 23 00:08:10.108192 kubelet[2760]: I0123 00:08:10.108168 2760 state_mem.go:36] "Initialized new in-memory state store"
Jan 23 00:08:10.108515 kubelet[2760]: I0123 00:08:10.108317 2760 kubelet.go:446] "Attempting to sync node with API server"
Jan 23 00:08:10.108515 kubelet[2760]: I0123 00:08:10.108332 2760 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 23 00:08:10.108515 kubelet[2760]: I0123 00:08:10.108354 2760 kubelet.go:352] "Adding apiserver pod source"
Jan 23 00:08:10.108515 kubelet[2760]: I0123 00:08:10.108380 2760 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 23 00:08:10.120507 kubelet[2760]: I0123 00:08:10.120467 2760 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1"
Jan 23 00:08:10.121261 kubelet[2760]: I0123 00:08:10.121065 2760 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 23 00:08:10.121719 kubelet[2760]: I0123 00:08:10.121682 2760 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jan 23 00:08:10.121769 kubelet[2760]: I0123 00:08:10.121724 2760 server.go:1287] "Started kubelet"
Jan 23 00:08:10.126500 kubelet[2760]: I0123 00:08:10.125502 2760 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Jan 23 00:08:10.127630 kubelet[2760]: I0123 00:08:10.127577 2760 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 23 00:08:10.127880 kubelet[2760]: I0123 00:08:10.127863 2760 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 23 00:08:10.129794 kubelet[2760]: I0123 00:08:10.128478 2760 server.go:479] "Adding debug handlers to kubelet server"
Jan 23 00:08:10.135103 kubelet[2760]: I0123 00:08:10.134375 2760 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 23 00:08:10.142036 kubelet[2760]: I0123 00:08:10.142003 2760 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jan 23 00:08:10.147122 kubelet[2760]: I0123 00:08:10.147065 2760 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jan 23 00:08:10.147420 kubelet[2760]: E0123 00:08:10.147366 2760 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459-2-2-n-105ad3c88f\" not found"
Jan 23 00:08:10.147883 kubelet[2760]: I0123 00:08:10.147852 2760 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jan 23 00:08:10.149648 kubelet[2760]: I0123 00:08:10.147967 2760 reconciler.go:26] "Reconciler: start to sync state"
Jan 23 00:08:10.153091 kubelet[2760]: I0123 00:08:10.152309 2760 factory.go:221] Registration of the systemd container factory successfully
Jan 23 00:08:10.153331 kubelet[2760]: I0123 00:08:10.153306 2760 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 23 00:08:10.157789 kubelet[2760]: E0123 00:08:10.157765 2760 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 23 00:08:10.168215 kubelet[2760]: I0123 00:08:10.168185 2760 factory.go:221] Registration of the containerd container factory successfully
Jan 23 00:08:10.169573 kubelet[2760]: I0123 00:08:10.169542 2760 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 23 00:08:10.172569 kubelet[2760]: I0123 00:08:10.172539 2760 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 23 00:08:10.172569 kubelet[2760]: I0123 00:08:10.172564 2760 status_manager.go:227] "Starting to sync pod status with apiserver"
Jan 23 00:08:10.172653 kubelet[2760]: I0123 00:08:10.172583 2760 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jan 23 00:08:10.172653 kubelet[2760]: I0123 00:08:10.172589 2760 kubelet.go:2382] "Starting kubelet main sync loop"
Jan 23 00:08:10.172653 kubelet[2760]: E0123 00:08:10.172629 2760 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 23 00:08:10.217551 kubelet[2760]: I0123 00:08:10.217375 2760 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jan 23 00:08:10.217551 kubelet[2760]: I0123 00:08:10.217399 2760 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jan 23 00:08:10.217551 kubelet[2760]: I0123 00:08:10.217421 2760 state_mem.go:36] "Initialized new in-memory state store"
Jan 23 00:08:10.217823 kubelet[2760]: I0123 00:08:10.217590 2760 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jan 23 00:08:10.217823 kubelet[2760]: I0123 00:08:10.217601 2760 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jan 23 00:08:10.217823 kubelet[2760]: I0123 00:08:10.217619 2760 policy_none.go:49] "None policy: Start"
Jan 23 00:08:10.217823 kubelet[2760]: I0123 00:08:10.217627 2760 memory_manager.go:186] "Starting memorymanager" policy="None"
Jan 23 00:08:10.217823 kubelet[2760]: I0123 00:08:10.217635 2760 state_mem.go:35] "Initializing new in-memory state store"
Jan 23 00:08:10.217823 kubelet[2760]: I0123 00:08:10.217727 2760 state_mem.go:75] "Updated machine memory state"
Jan 23 00:08:10.222748 kubelet[2760]: I0123 00:08:10.222067 2760 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 23 00:08:10.222748 kubelet[2760]: I0123 00:08:10.222350 2760 eviction_manager.go:189] "Eviction manager: starting control loop"
Jan 23 00:08:10.222748 kubelet[2760]: I0123 00:08:10.222366 2760 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 23 00:08:10.222748 kubelet[2760]: I0123 00:08:10.222655 2760 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 23 00:08:10.226023 kubelet[2760]: E0123 00:08:10.225985 2760 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jan 23 00:08:10.274631 kubelet[2760]: I0123 00:08:10.274562 2760 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459-2-2-n-105ad3c88f"
Jan 23 00:08:10.276679 kubelet[2760]: I0123 00:08:10.275517 2760 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459-2-2-n-105ad3c88f"
Jan 23 00:08:10.276679 kubelet[2760]: I0123 00:08:10.276135 2760 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459-2-2-n-105ad3c88f"
Jan 23 00:08:10.327921 kubelet[2760]: I0123 00:08:10.327272 2760 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-2-n-105ad3c88f"
Jan 23 00:08:10.340420 kubelet[2760]: I0123 00:08:10.340304 2760 kubelet_node_status.go:124] "Node was previously registered" node="ci-4459-2-2-n-105ad3c88f"
Jan 23 00:08:10.340625 kubelet[2760]: I0123 00:08:10.340612 2760 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459-2-2-n-105ad3c88f"
Jan 23 00:08:10.348934 kubelet[2760]: I0123 00:08:10.348871 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e0cfe3c9b8d257530672feb004f7b876-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459-2-2-n-105ad3c88f\" (UID: \"e0cfe3c9b8d257530672feb004f7b876\") " pod="kube-system/kube-apiserver-ci-4459-2-2-n-105ad3c88f"
Jan 23 00:08:10.348934 kubelet[2760]: I0123 00:08:10.348915 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1f78856ae7c4bfed0e5cd748be12d1fa-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459-2-2-n-105ad3c88f\" (UID: \"1f78856ae7c4bfed0e5cd748be12d1fa\") " pod="kube-system/kube-controller-manager-ci-4459-2-2-n-105ad3c88f"
Jan 23 00:08:10.348934 kubelet[2760]: I0123 00:08:10.348934 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1f78856ae7c4bfed0e5cd748be12d1fa-kubeconfig\") pod \"kube-controller-manager-ci-4459-2-2-n-105ad3c88f\" (UID: \"1f78856ae7c4bfed0e5cd748be12d1fa\") " pod="kube-system/kube-controller-manager-ci-4459-2-2-n-105ad3c88f"
Jan 23 00:08:10.349150 kubelet[2760]: I0123 00:08:10.348951 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e579d9ece7327074ab7ec917159f9048-kubeconfig\") pod \"kube-scheduler-ci-4459-2-2-n-105ad3c88f\" (UID: \"e579d9ece7327074ab7ec917159f9048\") " pod="kube-system/kube-scheduler-ci-4459-2-2-n-105ad3c88f"
Jan 23 00:08:10.349150 kubelet[2760]: I0123 00:08:10.348968 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e0cfe3c9b8d257530672feb004f7b876-ca-certs\") pod \"kube-apiserver-ci-4459-2-2-n-105ad3c88f\" (UID: \"e0cfe3c9b8d257530672feb004f7b876\") " pod="kube-system/kube-apiserver-ci-4459-2-2-n-105ad3c88f"
Jan 23 00:08:10.349150 kubelet[2760]: I0123 00:08:10.348983 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e0cfe3c9b8d257530672feb004f7b876-k8s-certs\") pod \"kube-apiserver-ci-4459-2-2-n-105ad3c88f\" (UID: \"e0cfe3c9b8d257530672feb004f7b876\") " pod="kube-system/kube-apiserver-ci-4459-2-2-n-105ad3c88f"
Jan 23 00:08:10.349150 kubelet[2760]: I0123 00:08:10.348997 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1f78856ae7c4bfed0e5cd748be12d1fa-ca-certs\") pod \"kube-controller-manager-ci-4459-2-2-n-105ad3c88f\" (UID: \"1f78856ae7c4bfed0e5cd748be12d1fa\") " pod="kube-system/kube-controller-manager-ci-4459-2-2-n-105ad3c88f"
Jan 23 00:08:10.349150 kubelet[2760]: I0123 00:08:10.349011 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1f78856ae7c4bfed0e5cd748be12d1fa-flexvolume-dir\") pod \"kube-controller-manager-ci-4459-2-2-n-105ad3c88f\" (UID: \"1f78856ae7c4bfed0e5cd748be12d1fa\") " pod="kube-system/kube-controller-manager-ci-4459-2-2-n-105ad3c88f"
Jan 23 00:08:10.349346 kubelet[2760]: I0123 00:08:10.349025 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1f78856ae7c4bfed0e5cd748be12d1fa-k8s-certs\") pod \"kube-controller-manager-ci-4459-2-2-n-105ad3c88f\" (UID: \"1f78856ae7c4bfed0e5cd748be12d1fa\") " pod="kube-system/kube-controller-manager-ci-4459-2-2-n-105ad3c88f"
Jan 23 00:08:10.425921 sudo[2794]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Jan 23 00:08:10.426242 sudo[2794]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Jan 23 00:08:10.763823 sudo[2794]: pam_unix(sudo:session): session closed for user root
Jan 23 00:08:11.116317 kubelet[2760]: I0123 00:08:11.115936 2760 apiserver.go:52] "Watching apiserver"
Jan 23 00:08:11.149517 kubelet[2760]: I0123 00:08:11.149289 2760 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Jan 23 00:08:11.176787 kubelet[2760]: I0123 00:08:11.175485 2760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4459-2-2-n-105ad3c88f" podStartSLOduration=1.175468253 podStartE2EDuration="1.175468253s" podCreationTimestamp="2026-01-23 00:08:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 00:08:11.169672562 +0000 UTC m=+1.130042970" watchObservedRunningTime="2026-01-23 00:08:11.175468253 +0000 UTC m=+1.135838661"
Jan 23 00:08:11.187139 kubelet[2760]: I0123 00:08:11.186973 2760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4459-2-2-n-105ad3c88f" podStartSLOduration=1.186955666 podStartE2EDuration="1.186955666s" podCreationTimestamp="2026-01-23 00:08:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 00:08:11.185875422 +0000 UTC m=+1.146245830" watchObservedRunningTime="2026-01-23 00:08:11.186955666 +0000 UTC m=+1.147326114"
Jan 23 00:08:11.192025 kubelet[2760]: I0123 00:08:11.191998 2760 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459-2-2-n-105ad3c88f"
Jan 23 00:08:11.200869 kubelet[2760]: E0123 00:08:11.200815 2760 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459-2-2-n-105ad3c88f\" already exists" pod="kube-system/kube-apiserver-ci-4459-2-2-n-105ad3c88f"
Jan 23 00:08:11.217240 kubelet[2760]: I0123 00:08:11.216909 2760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4459-2-2-n-105ad3c88f" podStartSLOduration=1.216888915 podStartE2EDuration="1.216888915s" podCreationTimestamp="2026-01-23 00:08:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 00:08:11.204334578 +0000 UTC m=+1.164704986" watchObservedRunningTime="2026-01-23 00:08:11.216888915 +0000 UTC m=+1.177259323"
Jan 23 00:08:12.860220 sudo[1838]: pam_unix(sudo:session): session closed for user root
Jan 23 00:08:12.960274 sshd[1822]: Connection closed by 68.220.241.50 port 53712
Jan 23 00:08:12.962853 sshd-session[1819]: pam_unix(sshd:session): session closed for user core
Jan 23 00:08:12.968464 systemd[1]: sshd@6-188.245.94.123:22-68.220.241.50:53712.service: Deactivated successfully.
Jan 23 00:08:12.972887 systemd[1]: session-7.scope: Deactivated successfully.
Jan 23 00:08:12.973348 systemd[1]: session-7.scope: Consumed 8.136s CPU time, 261.6M memory peak.
Jan 23 00:08:12.975694 systemd-logind[1529]: Session 7 logged out. Waiting for processes to exit.
Jan 23 00:08:12.977650 systemd-logind[1529]: Removed session 7.
Jan 23 00:08:14.114069 kubelet[2760]: I0123 00:08:14.112836 2760 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jan 23 00:08:14.115175 containerd[1545]: time="2026-01-23T00:08:14.115003480Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jan 23 00:08:14.115650 kubelet[2760]: I0123 00:08:14.115566 2760 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jan 23 00:08:15.045121 systemd[1]: Created slice kubepods-besteffort-podf36d10a8_3839_498c_862e_2d40eea8480f.slice - libcontainer container kubepods-besteffort-podf36d10a8_3839_498c_862e_2d40eea8480f.slice.
Jan 23 00:08:15.082124 kubelet[2760]: I0123 00:08:15.080638 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjwls\" (UniqueName: \"kubernetes.io/projected/f36d10a8-3839-498c-862e-2d40eea8480f-kube-api-access-qjwls\") pod \"kube-proxy-rqmhs\" (UID: \"f36d10a8-3839-498c-862e-2d40eea8480f\") " pod="kube-system/kube-proxy-rqmhs"
Jan 23 00:08:15.082124 kubelet[2760]: I0123 00:08:15.080696 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tdd6\" (UniqueName: \"kubernetes.io/projected/701d00e6-1a2f-4263-ab42-5f03ef7ab716-kube-api-access-4tdd6\") pod \"cilium-ttlf5\" (UID: \"701d00e6-1a2f-4263-ab42-5f03ef7ab716\") " pod="kube-system/cilium-ttlf5"
Jan 23 00:08:15.082124 kubelet[2760]: I0123 00:08:15.080736 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f36d10a8-3839-498c-862e-2d40eea8480f-xtables-lock\") pod \"kube-proxy-rqmhs\" (UID: \"f36d10a8-3839-498c-862e-2d40eea8480f\") " pod="kube-system/kube-proxy-rqmhs"
Jan 23 00:08:15.082124 kubelet[2760]: I0123 00:08:15.080965 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/701d00e6-1a2f-4263-ab42-5f03ef7ab716-host-proc-sys-net\") pod \"cilium-ttlf5\" (UID: \"701d00e6-1a2f-4263-ab42-5f03ef7ab716\") " pod="kube-system/cilium-ttlf5"
Jan 23 00:08:15.082124 kubelet[2760]: I0123 00:08:15.081004 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/701d00e6-1a2f-4263-ab42-5f03ef7ab716-lib-modules\") pod \"cilium-ttlf5\" (UID: \"701d00e6-1a2f-4263-ab42-5f03ef7ab716\") " pod="kube-system/cilium-ttlf5"
Jan 23 00:08:15.082359 kubelet[2760]: I0123 00:08:15.081035 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/701d00e6-1a2f-4263-ab42-5f03ef7ab716-bpf-maps\") pod \"cilium-ttlf5\" (UID: \"701d00e6-1a2f-4263-ab42-5f03ef7ab716\") " pod="kube-system/cilium-ttlf5"
Jan 23 00:08:15.082359 kubelet[2760]: I0123 00:08:15.081061 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/701d00e6-1a2f-4263-ab42-5f03ef7ab716-cilium-run\") pod \"cilium-ttlf5\" (UID: \"701d00e6-1a2f-4263-ab42-5f03ef7ab716\") " pod="kube-system/cilium-ttlf5"
Jan 23 00:08:15.082359 kubelet[2760]: I0123 00:08:15.081122 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/701d00e6-1a2f-4263-ab42-5f03ef7ab716-cni-path\") pod \"cilium-ttlf5\" (UID: \"701d00e6-1a2f-4263-ab42-5f03ef7ab716\") " pod="kube-system/cilium-ttlf5"
Jan 23 00:08:15.082359 kubelet[2760]: I0123 00:08:15.081156 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f36d10a8-3839-498c-862e-2d40eea8480f-lib-modules\") pod \"kube-proxy-rqmhs\" (UID: \"f36d10a8-3839-498c-862e-2d40eea8480f\") " pod="kube-system/kube-proxy-rqmhs"
Jan 23 00:08:15.082359 kubelet[2760]: I0123 00:08:15.081184 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/701d00e6-1a2f-4263-ab42-5f03ef7ab716-cilium-config-path\") pod \"cilium-ttlf5\" (UID: \"701d00e6-1a2f-4263-ab42-5f03ef7ab716\") " pod="kube-system/cilium-ttlf5"
Jan 23 00:08:15.082359 kubelet[2760]: I0123 00:08:15.081209 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/701d00e6-1a2f-4263-ab42-5f03ef7ab716-host-proc-sys-kernel\") pod \"cilium-ttlf5\" (UID: \"701d00e6-1a2f-4263-ab42-5f03ef7ab716\") " pod="kube-system/cilium-ttlf5"
Jan 23 00:08:15.082533 kubelet[2760]: I0123 00:08:15.081235 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/701d00e6-1a2f-4263-ab42-5f03ef7ab716-hostproc\") pod \"cilium-ttlf5\" (UID: \"701d00e6-1a2f-4263-ab42-5f03ef7ab716\") " pod="kube-system/cilium-ttlf5"
Jan 23 00:08:15.082533 kubelet[2760]: I0123 00:08:15.081260 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/701d00e6-1a2f-4263-ab42-5f03ef7ab716-xtables-lock\") pod \"cilium-ttlf5\" (UID: \"701d00e6-1a2f-4263-ab42-5f03ef7ab716\") " pod="kube-system/cilium-ttlf5"
Jan 23 00:08:15.082533 kubelet[2760]: I0123 00:08:15.081288 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/701d00e6-1a2f-4263-ab42-5f03ef7ab716-clustermesh-secrets\") pod \"cilium-ttlf5\" (UID: \"701d00e6-1a2f-4263-ab42-5f03ef7ab716\") " pod="kube-system/cilium-ttlf5"
Jan 23 00:08:15.082533 kubelet[2760]: I0123 00:08:15.081313 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/701d00e6-1a2f-4263-ab42-5f03ef7ab716-etc-cni-netd\") pod \"cilium-ttlf5\" (UID: \"701d00e6-1a2f-4263-ab42-5f03ef7ab716\") " pod="kube-system/cilium-ttlf5"
Jan 23 00:08:15.082533 kubelet[2760]: I0123 00:08:15.081340 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/701d00e6-1a2f-4263-ab42-5f03ef7ab716-hubble-tls\") pod \"cilium-ttlf5\" (UID: \"701d00e6-1a2f-4263-ab42-5f03ef7ab716\") " pod="kube-system/cilium-ttlf5"
Jan 23 00:08:15.082533 kubelet[2760]: I0123 00:08:15.081366 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f36d10a8-3839-498c-862e-2d40eea8480f-kube-proxy\") pod \"kube-proxy-rqmhs\" (UID: \"f36d10a8-3839-498c-862e-2d40eea8480f\") " pod="kube-system/kube-proxy-rqmhs"
Jan 23 00:08:15.082673 kubelet[2760]: I0123 00:08:15.081391 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/701d00e6-1a2f-4263-ab42-5f03ef7ab716-cilium-cgroup\") pod \"cilium-ttlf5\" (UID: \"701d00e6-1a2f-4263-ab42-5f03ef7ab716\") " pod="kube-system/cilium-ttlf5"
Jan 23 00:08:15.090152 systemd[1]: Created slice kubepods-burstable-pod701d00e6_1a2f_4263_ab42_5f03ef7ab716.slice - libcontainer container kubepods-burstable-pod701d00e6_1a2f_4263_ab42_5f03ef7ab716.slice.
Jan 23 00:08:15.102409 kubelet[2760]: W0123 00:08:15.102360 2760 reflector.go:569] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-4459-2-2-n-105ad3c88f" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4459-2-2-n-105ad3c88f' and this object
Jan 23 00:08:15.102555 kubelet[2760]: E0123 00:08:15.102415 2760 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cilium-config\" is forbidden: User \"system:node:ci-4459-2-2-n-105ad3c88f\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4459-2-2-n-105ad3c88f' and this object" logger="UnhandledError"
Jan 23 00:08:15.102555 kubelet[2760]: W0123 00:08:15.102464 2760 reflector.go:569] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-4459-2-2-n-105ad3c88f" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4459-2-2-n-105ad3c88f' and this object
Jan 23 00:08:15.102555 kubelet[2760]: E0123 00:08:15.102493 2760 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-clustermesh\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:ci-4459-2-2-n-105ad3c88f\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4459-2-2-n-105ad3c88f' and this object" logger="UnhandledError"
Jan 23 00:08:15.102555 kubelet[2760]: W0123 00:08:15.102534 2760 reflector.go:569] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-4459-2-2-n-105ad3c88f" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4459-2-2-n-105ad3c88f' and this object
Jan 23 00:08:15.102663 kubelet[2760]: E0123 00:08:15.102544 2760 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"hubble-server-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"hubble-server-certs\" is forbidden: User \"system:node:ci-4459-2-2-n-105ad3c88f\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4459-2-2-n-105ad3c88f' and this object" logger="UnhandledError"
Jan 23 00:08:15.256562 systemd[1]: Created slice kubepods-besteffort-pod3f5ce586_f31a_4287_8fbe_8804465503f3.slice - libcontainer container kubepods-besteffort-pod3f5ce586_f31a_4287_8fbe_8804465503f3.slice.
Jan 23 00:08:15.282516 kubelet[2760]: I0123 00:08:15.282397 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3f5ce586-f31a-4287-8fbe-8804465503f3-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-jmk8s\" (UID: \"3f5ce586-f31a-4287-8fbe-8804465503f3\") " pod="kube-system/cilium-operator-6c4d7847fc-jmk8s"
Jan 23 00:08:15.283240 kubelet[2760]: I0123 00:08:15.283144 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xc6p\" (UniqueName: \"kubernetes.io/projected/3f5ce586-f31a-4287-8fbe-8804465503f3-kube-api-access-5xc6p\") pod \"cilium-operator-6c4d7847fc-jmk8s\" (UID: \"3f5ce586-f31a-4287-8fbe-8804465503f3\") " pod="kube-system/cilium-operator-6c4d7847fc-jmk8s"
Jan 23 00:08:15.357743 containerd[1545]: time="2026-01-23T00:08:15.357372059Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rqmhs,Uid:f36d10a8-3839-498c-862e-2d40eea8480f,Namespace:kube-system,Attempt:0,}"
Jan 23 00:08:15.387498 containerd[1545]: time="2026-01-23T00:08:15.387407592Z" level=info msg="connecting to shim 5d0f8f8d60196a689c58bfc5f22f6697514475b6f155b6ecc7f9906846f8a21f" address="unix:///run/containerd/s/c0b68ba52863915e6505d1128bacf3eb26489c4ca3acd7cc6ecea7676088c459" namespace=k8s.io protocol=ttrpc version=3
Jan 23 00:08:15.422310 systemd[1]: Started cri-containerd-5d0f8f8d60196a689c58bfc5f22f6697514475b6f155b6ecc7f9906846f8a21f.scope - libcontainer container 5d0f8f8d60196a689c58bfc5f22f6697514475b6f155b6ecc7f9906846f8a21f.
Jan 23 00:08:15.452093 containerd[1545]: time="2026-01-23T00:08:15.452043057Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rqmhs,Uid:f36d10a8-3839-498c-862e-2d40eea8480f,Namespace:kube-system,Attempt:0,} returns sandbox id \"5d0f8f8d60196a689c58bfc5f22f6697514475b6f155b6ecc7f9906846f8a21f\""
Jan 23 00:08:15.455752 containerd[1545]: time="2026-01-23T00:08:15.455701392Z" level=info msg="CreateContainer within sandbox \"5d0f8f8d60196a689c58bfc5f22f6697514475b6f155b6ecc7f9906846f8a21f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jan 23 00:08:15.465483 containerd[1545]: time="2026-01-23T00:08:15.465443711Z" level=info msg="Container 028b067e6648aff4502b2b85903fb1a36c737d9e1b495e5e2c87212926a1d0a8: CDI devices from CRI Config.CDIDevices: []"
Jan 23 00:08:15.474599 containerd[1545]: time="2026-01-23T00:08:15.474551626Z" level=info msg="CreateContainer within sandbox \"5d0f8f8d60196a689c58bfc5f22f6697514475b6f155b6ecc7f9906846f8a21f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"028b067e6648aff4502b2b85903fb1a36c737d9e1b495e5e2c87212926a1d0a8\""
Jan 23 00:08:15.475715 containerd[1545]: time="2026-01-23T00:08:15.475353002Z" level=info msg="StartContainer for \"028b067e6648aff4502b2b85903fb1a36c737d9e1b495e5e2c87212926a1d0a8\""
Jan 23 00:08:15.477549 containerd[1545]: time="2026-01-23T00:08:15.477459828Z" level=info msg="connecting to shim 028b067e6648aff4502b2b85903fb1a36c737d9e1b495e5e2c87212926a1d0a8" address="unix:///run/containerd/s/c0b68ba52863915e6505d1128bacf3eb26489c4ca3acd7cc6ecea7676088c459" protocol=ttrpc version=3
Jan 23 00:08:15.499323 systemd[1]: Started cri-containerd-028b067e6648aff4502b2b85903fb1a36c737d9e1b495e5e2c87212926a1d0a8.scope - libcontainer container 028b067e6648aff4502b2b85903fb1a36c737d9e1b495e5e2c87212926a1d0a8.
Jan 23 00:08:15.587834 containerd[1545]: time="2026-01-23T00:08:15.587738395Z" level=info msg="StartContainer for \"028b067e6648aff4502b2b85903fb1a36c737d9e1b495e5e2c87212926a1d0a8\" returns successfully"
Jan 23 00:08:16.182942 kubelet[2760]: E0123 00:08:16.182505 2760 secret.go:189] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition
Jan 23 00:08:16.182942 kubelet[2760]: E0123 00:08:16.182496 2760 configmap.go:193] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition
Jan 23 00:08:16.182942 kubelet[2760]: E0123 00:08:16.182605 2760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/701d00e6-1a2f-4263-ab42-5f03ef7ab716-clustermesh-secrets podName:701d00e6-1a2f-4263-ab42-5f03ef7ab716 nodeName:}" failed. No retries permitted until 2026-01-23 00:08:16.682581875 +0000 UTC m=+6.642952283 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/701d00e6-1a2f-4263-ab42-5f03ef7ab716-clustermesh-secrets") pod "cilium-ttlf5" (UID: "701d00e6-1a2f-4263-ab42-5f03ef7ab716") : failed to sync secret cache: timed out waiting for the condition
Jan 23 00:08:16.182942 kubelet[2760]: E0123 00:08:16.182627 2760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/701d00e6-1a2f-4263-ab42-5f03ef7ab716-cilium-config-path podName:701d00e6-1a2f-4263-ab42-5f03ef7ab716 nodeName:}" failed. No retries permitted until 2026-01-23 00:08:16.682617238 +0000 UTC m=+6.642987646 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/701d00e6-1a2f-4263-ab42-5f03ef7ab716-cilium-config-path") pod "cilium-ttlf5" (UID: "701d00e6-1a2f-4263-ab42-5f03ef7ab716") : failed to sync configmap cache: timed out waiting for the condition
Jan 23 00:08:16.220813 kubelet[2760]: I0123 00:08:16.220628 2760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-rqmhs" podStartSLOduration=1.220602494 podStartE2EDuration="1.220602494s" podCreationTimestamp="2026-01-23 00:08:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 00:08:16.220362918 +0000 UTC m=+6.180733406" watchObservedRunningTime="2026-01-23 00:08:16.220602494 +0000 UTC m=+6.180972942"
Jan 23 00:08:16.384780 kubelet[2760]: E0123 00:08:16.384628 2760 configmap.go:193] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition
Jan 23 00:08:16.384780 kubelet[2760]: E0123 00:08:16.384761 2760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3f5ce586-f31a-4287-8fbe-8804465503f3-cilium-config-path podName:3f5ce586-f31a-4287-8fbe-8804465503f3 nodeName:}" failed. No retries permitted until 2026-01-23 00:08:16.884733587 +0000 UTC m=+6.845103995 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/3f5ce586-f31a-4287-8fbe-8804465503f3-cilium-config-path") pod "cilium-operator-6c4d7847fc-jmk8s" (UID: "3f5ce586-f31a-4287-8fbe-8804465503f3") : failed to sync configmap cache: timed out waiting for the condition
Jan 23 00:08:16.897805 containerd[1545]: time="2026-01-23T00:08:16.897750146Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ttlf5,Uid:701d00e6-1a2f-4263-ab42-5f03ef7ab716,Namespace:kube-system,Attempt:0,}"
Jan 23 00:08:16.920643 containerd[1545]: time="2026-01-23T00:08:16.920261833Z" level=info msg="connecting to shim 1bbf383ef46e230cb52a351acd38c2c8a4af2a0d829d4bbf32a77c6900a98606" address="unix:///run/containerd/s/68e458c59253676ffbda15f8584bb931db182cc024c2632d492da525c4279f57" namespace=k8s.io protocol=ttrpc version=3
Jan 23 00:08:16.945308 systemd[1]: Started cri-containerd-1bbf383ef46e230cb52a351acd38c2c8a4af2a0d829d4bbf32a77c6900a98606.scope - libcontainer container 1bbf383ef46e230cb52a351acd38c2c8a4af2a0d829d4bbf32a77c6900a98606.
Jan 23 00:08:16.972030 containerd[1545]: time="2026-01-23T00:08:16.971970380Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ttlf5,Uid:701d00e6-1a2f-4263-ab42-5f03ef7ab716,Namespace:kube-system,Attempt:0,} returns sandbox id \"1bbf383ef46e230cb52a351acd38c2c8a4af2a0d829d4bbf32a77c6900a98606\""
Jan 23 00:08:16.974398 containerd[1545]: time="2026-01-23T00:08:16.974359902Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Jan 23 00:08:17.062136 containerd[1545]: time="2026-01-23T00:08:17.062024298Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-jmk8s,Uid:3f5ce586-f31a-4287-8fbe-8804465503f3,Namespace:kube-system,Attempt:0,}"
Jan 23 00:08:17.086720 containerd[1545]: time="2026-01-23T00:08:17.086621042Z" level=info msg="connecting to shim 8116d929c61066cd6f907f891ca0f9a54ea6559c60dece2e54edefb3da08da1a" address="unix:///run/containerd/s/edb8e4028e56785cc7f24310582fc0c48bd6e4776ca79bb7635792e3dde4a5c3" namespace=k8s.io protocol=ttrpc version=3
Jan 23 00:08:17.115396 systemd[1]: Started cri-containerd-8116d929c61066cd6f907f891ca0f9a54ea6559c60dece2e54edefb3da08da1a.scope - libcontainer container 8116d929c61066cd6f907f891ca0f9a54ea6559c60dece2e54edefb3da08da1a.
Jan 23 00:08:17.165762 containerd[1545]: time="2026-01-23T00:08:17.165686702Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-jmk8s,Uid:3f5ce586-f31a-4287-8fbe-8804465503f3,Namespace:kube-system,Attempt:0,} returns sandbox id \"8116d929c61066cd6f907f891ca0f9a54ea6559c60dece2e54edefb3da08da1a\""
Jan 23 00:08:20.598063 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1251967026.mount: Deactivated successfully.
Jan 23 00:08:22.033298 containerd[1545]: time="2026-01-23T00:08:22.033232332Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 00:08:22.034429 containerd[1545]: time="2026-01-23T00:08:22.034390279Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710"
Jan 23 00:08:22.035121 containerd[1545]: time="2026-01-23T00:08:22.034948991Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 00:08:22.036763 containerd[1545]: time="2026-01-23T00:08:22.036641169Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 5.062241824s"
Jan 23 00:08:22.036763 containerd[1545]: time="2026-01-23T00:08:22.036677171Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\""
Jan 23 00:08:22.039724 containerd[1545]: time="2026-01-23T00:08:22.039681904Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Jan 23 00:08:22.040806 containerd[1545]: time="2026-01-23T00:08:22.040772087Z" level=info msg="CreateContainer within sandbox \"1bbf383ef46e230cb52a351acd38c2c8a4af2a0d829d4bbf32a77c6900a98606\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 23 00:08:22.053698 containerd[1545]: time="2026-01-23T00:08:22.052677815Z" level=info msg="Container d7a4ccc5053684a4f0fdfd37ba2746f369dc6c192fa5949b44ff348a036662a5: CDI devices from CRI Config.CDIDevices: []"
Jan 23 00:08:22.062964 containerd[1545]: time="2026-01-23T00:08:22.062917087Z" level=info msg="CreateContainer within sandbox \"1bbf383ef46e230cb52a351acd38c2c8a4af2a0d829d4bbf32a77c6900a98606\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d7a4ccc5053684a4f0fdfd37ba2746f369dc6c192fa5949b44ff348a036662a5\""
Jan 23 00:08:22.064406 containerd[1545]: time="2026-01-23T00:08:22.064379252Z" level=info msg="StartContainer for \"d7a4ccc5053684a4f0fdfd37ba2746f369dc6c192fa5949b44ff348a036662a5\""
Jan 23 00:08:22.065785 containerd[1545]: time="2026-01-23T00:08:22.065754811Z" level=info msg="connecting to shim d7a4ccc5053684a4f0fdfd37ba2746f369dc6c192fa5949b44ff348a036662a5" address="unix:///run/containerd/s/68e458c59253676ffbda15f8584bb931db182cc024c2632d492da525c4279f57" protocol=ttrpc version=3
Jan 23 00:08:22.096332 systemd[1]: Started cri-containerd-d7a4ccc5053684a4f0fdfd37ba2746f369dc6c192fa5949b44ff348a036662a5.scope - libcontainer container d7a4ccc5053684a4f0fdfd37ba2746f369dc6c192fa5949b44ff348a036662a5.
Jan 23 00:08:22.132952 containerd[1545]: time="2026-01-23T00:08:22.132855009Z" level=info msg="StartContainer for \"d7a4ccc5053684a4f0fdfd37ba2746f369dc6c192fa5949b44ff348a036662a5\" returns successfully"
Jan 23 00:08:22.150057 systemd[1]: cri-containerd-d7a4ccc5053684a4f0fdfd37ba2746f369dc6c192fa5949b44ff348a036662a5.scope: Deactivated successfully.
Jan 23 00:08:22.160478 containerd[1545]: time="2026-01-23T00:08:22.160262273Z" level=info msg="received container exit event container_id:\"d7a4ccc5053684a4f0fdfd37ba2746f369dc6c192fa5949b44ff348a036662a5\" id:\"d7a4ccc5053684a4f0fdfd37ba2746f369dc6c192fa5949b44ff348a036662a5\" pid:3172 exited_at:{seconds:1769126902 nanos:159007920}"
Jan 23 00:08:22.185838 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d7a4ccc5053684a4f0fdfd37ba2746f369dc6c192fa5949b44ff348a036662a5-rootfs.mount: Deactivated successfully.
Jan 23 00:08:23.239692 containerd[1545]: time="2026-01-23T00:08:23.239628733Z" level=info msg="CreateContainer within sandbox \"1bbf383ef46e230cb52a351acd38c2c8a4af2a0d829d4bbf32a77c6900a98606\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 23 00:08:23.266855 containerd[1545]: time="2026-01-23T00:08:23.266817063Z" level=info msg="Container 509811dd54cf9b3ba243ac64e08b6b8b93e317ede7c42c353ac583c74119d733: CDI devices from CRI Config.CDIDevices: []"
Jan 23 00:08:23.269397 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2887353581.mount: Deactivated successfully.
Jan 23 00:08:23.276161 containerd[1545]: time="2026-01-23T00:08:23.276098866Z" level=info msg="CreateContainer within sandbox \"1bbf383ef46e230cb52a351acd38c2c8a4af2a0d829d4bbf32a77c6900a98606\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"509811dd54cf9b3ba243ac64e08b6b8b93e317ede7c42c353ac583c74119d733\""
Jan 23 00:08:23.277279 containerd[1545]: time="2026-01-23T00:08:23.277214809Z" level=info msg="StartContainer for \"509811dd54cf9b3ba243ac64e08b6b8b93e317ede7c42c353ac583c74119d733\""
Jan 23 00:08:23.279845 containerd[1545]: time="2026-01-23T00:08:23.279784393Z" level=info msg="connecting to shim 509811dd54cf9b3ba243ac64e08b6b8b93e317ede7c42c353ac583c74119d733" address="unix:///run/containerd/s/68e458c59253676ffbda15f8584bb931db182cc024c2632d492da525c4279f57" protocol=ttrpc version=3
Jan 23 00:08:23.303325 systemd[1]: Started cri-containerd-509811dd54cf9b3ba243ac64e08b6b8b93e317ede7c42c353ac583c74119d733.scope - libcontainer container 509811dd54cf9b3ba243ac64e08b6b8b93e317ede7c42c353ac583c74119d733.
Jan 23 00:08:23.339892 containerd[1545]: time="2026-01-23T00:08:23.339857775Z" level=info msg="StartContainer for \"509811dd54cf9b3ba243ac64e08b6b8b93e317ede7c42c353ac583c74119d733\" returns successfully"
Jan 23 00:08:23.358738 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 23 00:08:23.358957 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 23 00:08:23.361198 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Jan 23 00:08:23.363519 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 23 00:08:23.365032 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jan 23 00:08:23.367833 systemd[1]: cri-containerd-509811dd54cf9b3ba243ac64e08b6b8b93e317ede7c42c353ac583c74119d733.scope: Deactivated successfully.
Jan 23 00:08:23.368732 containerd[1545]: time="2026-01-23T00:08:23.368695759Z" level=info msg="received container exit event container_id:\"509811dd54cf9b3ba243ac64e08b6b8b93e317ede7c42c353ac583c74119d733\" id:\"509811dd54cf9b3ba243ac64e08b6b8b93e317ede7c42c353ac583c74119d733\" pid:3215 exited_at:{seconds:1769126903 nanos:367442568}"
Jan 23 00:08:23.396584 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 23 00:08:24.006796 containerd[1545]: time="2026-01-23T00:08:24.006734629Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 00:08:24.007897 containerd[1545]: time="2026-01-23T00:08:24.007856010Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306"
Jan 23 00:08:24.008782 containerd[1545]: time="2026-01-23T00:08:24.008476084Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 00:08:24.010777 containerd[1545]: time="2026-01-23T00:08:24.010656404Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.970942137s"
Jan 23 00:08:24.010777 containerd[1545]: time="2026-01-23T00:08:24.010693166Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
Jan 23 00:08:24.014728 containerd[1545]: time="2026-01-23T00:08:24.014703386Z" level=info msg="CreateContainer within sandbox \"8116d929c61066cd6f907f891ca0f9a54ea6559c60dece2e54edefb3da08da1a\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Jan 23 00:08:24.024120 containerd[1545]: time="2026-01-23T00:08:24.023182171Z" level=info msg="Container 2dcb66d5818d277444299f76b6f80912511f46bbbbacadcbf39469c55dc503dc: CDI devices from CRI Config.CDIDevices: []"
Jan 23 00:08:24.037071 containerd[1545]: time="2026-01-23T00:08:24.036401416Z" level=info msg="CreateContainer within sandbox \"8116d929c61066cd6f907f891ca0f9a54ea6559c60dece2e54edefb3da08da1a\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"2dcb66d5818d277444299f76b6f80912511f46bbbbacadcbf39469c55dc503dc\""
Jan 23 00:08:24.038403 containerd[1545]: time="2026-01-23T00:08:24.038341602Z" level=info msg="StartContainer for \"2dcb66d5818d277444299f76b6f80912511f46bbbbacadcbf39469c55dc503dc\""
Jan 23 00:08:24.042830 containerd[1545]: time="2026-01-23T00:08:24.042787046Z" level=info msg="connecting to shim 2dcb66d5818d277444299f76b6f80912511f46bbbbacadcbf39469c55dc503dc" address="unix:///run/containerd/s/edb8e4028e56785cc7f24310582fc0c48bd6e4776ca79bb7635792e3dde4a5c3" protocol=ttrpc version=3
Jan 23 00:08:24.069437 systemd[1]: Started cri-containerd-2dcb66d5818d277444299f76b6f80912511f46bbbbacadcbf39469c55dc503dc.scope - libcontainer container 2dcb66d5818d277444299f76b6f80912511f46bbbbacadcbf39469c55dc503dc.
Jan 23 00:08:24.101746 containerd[1545]: time="2026-01-23T00:08:24.101615633Z" level=info msg="StartContainer for \"2dcb66d5818d277444299f76b6f80912511f46bbbbacadcbf39469c55dc503dc\" returns successfully"
Jan 23 00:08:24.245009 containerd[1545]: time="2026-01-23T00:08:24.244907572Z" level=info msg="CreateContainer within sandbox \"1bbf383ef46e230cb52a351acd38c2c8a4af2a0d829d4bbf32a77c6900a98606\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 23 00:08:24.265381 containerd[1545]: time="2026-01-23T00:08:24.264200710Z" level=info msg="Container a296809a47dd54e86e7c24677c4a8fdc315c4e392689fe74f4384d135a838fd1: CDI devices from CRI Config.CDIDevices: []"
Jan 23 00:08:24.269434 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-509811dd54cf9b3ba243ac64e08b6b8b93e317ede7c42c353ac583c74119d733-rootfs.mount: Deactivated successfully.
Jan 23 00:08:24.280309 containerd[1545]: time="2026-01-23T00:08:24.280254110Z" level=info msg="CreateContainer within sandbox \"1bbf383ef46e230cb52a351acd38c2c8a4af2a0d829d4bbf32a77c6900a98606\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"a296809a47dd54e86e7c24677c4a8fdc315c4e392689fe74f4384d135a838fd1\""
Jan 23 00:08:24.280945 containerd[1545]: time="2026-01-23T00:08:24.280830502Z" level=info msg="StartContainer for \"a296809a47dd54e86e7c24677c4a8fdc315c4e392689fe74f4384d135a838fd1\""
Jan 23 00:08:24.282853 containerd[1545]: time="2026-01-23T00:08:24.282825211Z" level=info msg="connecting to shim a296809a47dd54e86e7c24677c4a8fdc315c4e392689fe74f4384d135a838fd1" address="unix:///run/containerd/s/68e458c59253676ffbda15f8584bb931db182cc024c2632d492da525c4279f57" protocol=ttrpc version=3
Jan 23 00:08:24.322291 systemd[1]: Started cri-containerd-a296809a47dd54e86e7c24677c4a8fdc315c4e392689fe74f4384d135a838fd1.scope - libcontainer container a296809a47dd54e86e7c24677c4a8fdc315c4e392689fe74f4384d135a838fd1.
Jan 23 00:08:24.331672 kubelet[2760]: I0123 00:08:24.331433 2760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-jmk8s" podStartSLOduration=2.4856731930000002 podStartE2EDuration="9.331412916s" podCreationTimestamp="2026-01-23 00:08:15 +0000 UTC" firstStartedPulling="2026-01-23 00:08:17.167610029 +0000 UTC m=+7.127980437" lastFinishedPulling="2026-01-23 00:08:24.013349752 +0000 UTC m=+13.973720160" observedRunningTime="2026-01-23 00:08:24.274515035 +0000 UTC m=+14.234885403" watchObservedRunningTime="2026-01-23 00:08:24.331412916 +0000 UTC m=+14.291783324"
Jan 23 00:08:24.428412 containerd[1545]: time="2026-01-23T00:08:24.428369754Z" level=info msg="StartContainer for \"a296809a47dd54e86e7c24677c4a8fdc315c4e392689fe74f4384d135a838fd1\" returns successfully"
Jan 23 00:08:24.445010 systemd[1]: cri-containerd-a296809a47dd54e86e7c24677c4a8fdc315c4e392689fe74f4384d135a838fd1.scope: Deactivated successfully.
Jan 23 00:08:24.448266 containerd[1545]: time="2026-01-23T00:08:24.448186721Z" level=info msg="received container exit event container_id:\"a296809a47dd54e86e7c24677c4a8fdc315c4e392689fe74f4384d135a838fd1\" id:\"a296809a47dd54e86e7c24677c4a8fdc315c4e392689fe74f4384d135a838fd1\" pid:3308 exited_at:{seconds:1769126904 nanos:447887544}"
Jan 23 00:08:24.485572 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a296809a47dd54e86e7c24677c4a8fdc315c4e392689fe74f4384d135a838fd1-rootfs.mount: Deactivated successfully.
Jan 23 00:08:25.252063 containerd[1545]: time="2026-01-23T00:08:25.251874888Z" level=info msg="CreateContainer within sandbox \"1bbf383ef46e230cb52a351acd38c2c8a4af2a0d829d4bbf32a77c6900a98606\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 23 00:08:25.265368 containerd[1545]: time="2026-01-23T00:08:25.265317767Z" level=info msg="Container 59ce2086b9178abfd6975bc4edcc8b58be672525536435e6e376fcd517bc468b: CDI devices from CRI Config.CDIDevices: []"
Jan 23 00:08:25.279106 containerd[1545]: time="2026-01-23T00:08:25.278483190Z" level=info msg="CreateContainer within sandbox \"1bbf383ef46e230cb52a351acd38c2c8a4af2a0d829d4bbf32a77c6900a98606\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"59ce2086b9178abfd6975bc4edcc8b58be672525536435e6e376fcd517bc468b\""
Jan 23 00:08:25.281091 containerd[1545]: time="2026-01-23T00:08:25.280962203Z" level=info msg="StartContainer for \"59ce2086b9178abfd6975bc4edcc8b58be672525536435e6e376fcd517bc468b\""
Jan 23 00:08:25.281883 containerd[1545]: time="2026-01-23T00:08:25.281804288Z" level=info msg="connecting to shim 59ce2086b9178abfd6975bc4edcc8b58be672525536435e6e376fcd517bc468b" address="unix:///run/containerd/s/68e458c59253676ffbda15f8584bb931db182cc024c2632d492da525c4279f57" protocol=ttrpc version=3
Jan 23 00:08:25.306299 systemd[1]: Started cri-containerd-59ce2086b9178abfd6975bc4edcc8b58be672525536435e6e376fcd517bc468b.scope - libcontainer container 59ce2086b9178abfd6975bc4edcc8b58be672525536435e6e376fcd517bc468b.
Jan 23 00:08:25.334242 systemd[1]: cri-containerd-59ce2086b9178abfd6975bc4edcc8b58be672525536435e6e376fcd517bc468b.scope: Deactivated successfully.
Jan 23 00:08:25.338186 containerd[1545]: time="2026-01-23T00:08:25.338137818Z" level=info msg="received container exit event container_id:\"59ce2086b9178abfd6975bc4edcc8b58be672525536435e6e376fcd517bc468b\" id:\"59ce2086b9178abfd6975bc4edcc8b58be672525536435e6e376fcd517bc468b\" pid:3352 exited_at:{seconds:1769126905 nanos:337928087}"
Jan 23 00:08:25.338824 containerd[1545]: time="2026-01-23T00:08:25.338800614Z" level=info msg="StartContainer for \"59ce2086b9178abfd6975bc4edcc8b58be672525536435e6e376fcd517bc468b\" returns successfully"
Jan 23 00:08:25.360402 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-59ce2086b9178abfd6975bc4edcc8b58be672525536435e6e376fcd517bc468b-rootfs.mount: Deactivated successfully.
Jan 23 00:08:26.261365 containerd[1545]: time="2026-01-23T00:08:26.261303521Z" level=info msg="CreateContainer within sandbox \"1bbf383ef46e230cb52a351acd38c2c8a4af2a0d829d4bbf32a77c6900a98606\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 23 00:08:26.279110 containerd[1545]: time="2026-01-23T00:08:26.277577249Z" level=info msg="Container 17b35f5f89e99d752eb336f60cb9421f3bd52f7f4cc6048851b3dca359903d5a: CDI devices from CRI Config.CDIDevices: []"
Jan 23 00:08:26.290844 containerd[1545]: time="2026-01-23T00:08:26.290652810Z" level=info msg="CreateContainer within sandbox \"1bbf383ef46e230cb52a351acd38c2c8a4af2a0d829d4bbf32a77c6900a98606\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"17b35f5f89e99d752eb336f60cb9421f3bd52f7f4cc6048851b3dca359903d5a\""
Jan 23 00:08:26.292538 containerd[1545]: time="2026-01-23T00:08:26.292487546Z" level=info msg="StartContainer for \"17b35f5f89e99d752eb336f60cb9421f3bd52f7f4cc6048851b3dca359903d5a\""
Jan 23 00:08:26.293771 containerd[1545]: time="2026-01-23T00:08:26.293741211Z" level=info msg="connecting to shim 17b35f5f89e99d752eb336f60cb9421f3bd52f7f4cc6048851b3dca359903d5a" address="unix:///run/containerd/s/68e458c59253676ffbda15f8584bb931db182cc024c2632d492da525c4279f57" protocol=ttrpc version=3
Jan 23 00:08:26.318355 systemd[1]: Started cri-containerd-17b35f5f89e99d752eb336f60cb9421f3bd52f7f4cc6048851b3dca359903d5a.scope - libcontainer container 17b35f5f89e99d752eb336f60cb9421f3bd52f7f4cc6048851b3dca359903d5a.
Jan 23 00:08:26.375464 containerd[1545]: time="2026-01-23T00:08:26.375376303Z" level=info msg="StartContainer for \"17b35f5f89e99d752eb336f60cb9421f3bd52f7f4cc6048851b3dca359903d5a\" returns successfully"
Jan 23 00:08:26.458826 kubelet[2760]: I0123 00:08:26.458740 2760 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Jan 23 00:08:26.497705 systemd[1]: Created slice kubepods-burstable-pod284e13f5_421f_493a_927c_d3a3f64bf631.slice - libcontainer container kubepods-burstable-pod284e13f5_421f_493a_927c_d3a3f64bf631.slice.
Jan 23 00:08:26.505479 systemd[1]: Created slice kubepods-burstable-pod51a5701f_3875_4ced_babf_d431662e1261.slice - libcontainer container kubepods-burstable-pod51a5701f_3875_4ced_babf_d431662e1261.slice.
Jan 23 00:08:26.557782 kubelet[2760]: I0123 00:08:26.557648 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6j5t7\" (UniqueName: \"kubernetes.io/projected/51a5701f-3875-4ced-babf-d431662e1261-kube-api-access-6j5t7\") pod \"coredns-668d6bf9bc-84q6v\" (UID: \"51a5701f-3875-4ced-babf-d431662e1261\") " pod="kube-system/coredns-668d6bf9bc-84q6v"
Jan 23 00:08:26.557782 kubelet[2760]: I0123 00:08:26.557714 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/284e13f5-421f-493a-927c-d3a3f64bf631-config-volume\") pod \"coredns-668d6bf9bc-nlc5b\" (UID: \"284e13f5-421f-493a-927c-d3a3f64bf631\") " pod="kube-system/coredns-668d6bf9bc-nlc5b"
Jan 23 00:08:26.557782 kubelet[2760]: I0123 00:08:26.557743 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzcfg\" (UniqueName: \"kubernetes.io/projected/284e13f5-421f-493a-927c-d3a3f64bf631-kube-api-access-rzcfg\") pod \"coredns-668d6bf9bc-nlc5b\" (UID: \"284e13f5-421f-493a-927c-d3a3f64bf631\") " pod="kube-system/coredns-668d6bf9bc-nlc5b"
Jan 23 00:08:26.557782 kubelet[2760]: I0123 00:08:26.557767 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/51a5701f-3875-4ced-babf-d431662e1261-config-volume\") pod \"coredns-668d6bf9bc-84q6v\" (UID: \"51a5701f-3875-4ced-babf-d431662e1261\") " pod="kube-system/coredns-668d6bf9bc-84q6v"
Jan 23 00:08:26.804153 containerd[1545]: time="2026-01-23T00:08:26.804020508Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-nlc5b,Uid:284e13f5-421f-493a-927c-d3a3f64bf631,Namespace:kube-system,Attempt:0,}"
Jan 23 00:08:26.811935 containerd[1545]: time="2026-01-23T00:08:26.811837115Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-84q6v,Uid:51a5701f-3875-4ced-babf-d431662e1261,Namespace:kube-system,Attempt:0,}"
Jan 23 00:08:28.513979 systemd-networkd[1418]: cilium_host: Link UP
Jan 23 00:08:28.514554 systemd-networkd[1418]: cilium_net: Link UP
Jan 23 00:08:28.514863 systemd-networkd[1418]: cilium_net: Gained carrier
Jan 23 00:08:28.519017 systemd-networkd[1418]: cilium_host: Gained carrier
Jan 23 00:08:28.631341 systemd-networkd[1418]: cilium_vxlan: Link UP
Jan 23 00:08:28.631353 systemd-networkd[1418]: cilium_vxlan: Gained carrier
Jan 23 00:08:28.857701 systemd-networkd[1418]: cilium_net: Gained IPv6LL
Jan 23 00:08:28.901277 kernel: NET: Registered PF_ALG protocol family
Jan 23 00:08:29.474311 systemd-networkd[1418]: cilium_host: Gained IPv6LL
Jan 23 00:08:29.577006 systemd-networkd[1418]: lxc_health: Link UP
Jan 23 00:08:29.587450 systemd-networkd[1418]: lxc_health: Gained carrier
Jan 23 00:08:29.858318 systemd-networkd[1418]: lxcc41ae414a065: Link UP
Jan 23 00:08:29.867572 systemd-networkd[1418]: lxc86f9945ddfdc: Link UP
Jan 23 00:08:29.874099 kernel: eth0: renamed from tmpec4ef
Jan 23 00:08:29.875112 kernel: eth0: renamed from tmpb36d0
Jan 23 00:08:29.875232 systemd-networkd[1418]: lxcc41ae414a065: Gained carrier
Jan 23 00:08:29.878943 systemd-networkd[1418]: lxc86f9945ddfdc: Gained carrier
Jan 23 00:08:30.564236 systemd-networkd[1418]: cilium_vxlan: Gained IPv6LL
Jan 23 00:08:30.922140 kubelet[2760]: I0123 00:08:30.920705 2760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-ttlf5" podStartSLOduration=10.856424049 podStartE2EDuration="15.9206892s" podCreationTimestamp="2026-01-23 00:08:15 +0000 UTC" firstStartedPulling="2026-01-23 00:08:16.973350114 +0000 UTC m=+6.933720522" lastFinishedPulling="2026-01-23 00:08:22.037615265 +0000 UTC m=+11.997985673" observedRunningTime="2026-01-23 00:08:27.297073519 +0000 UTC m=+17.257443927" watchObservedRunningTime="2026-01-23 00:08:30.9206892 +0000 UTC m=+20.881059608"
Jan 23 00:08:31.075698 systemd-networkd[1418]: lxc86f9945ddfdc: Gained IPv6LL
Jan 23 00:08:31.329420 systemd-networkd[1418]: lxc_health: Gained IPv6LL
Jan 23 00:08:31.842156 systemd-networkd[1418]: lxcc41ae414a065: Gained IPv6LL
Jan 23 00:08:33.666754 containerd[1545]: time="2026-01-23T00:08:33.666225032Z" level=info msg="connecting to shim b36d004a05847838c6fee710d4387858473a98d7219695f2fb223122254b50ed" address="unix:///run/containerd/s/b2a976307db8e6bc988d888fe262ed5676686c569cf09b1c5e97946aae6e453c" namespace=k8s.io protocol=ttrpc version=3
Jan 23 00:08:33.689289 containerd[1545]: time="2026-01-23T00:08:33.689241958Z" level=info msg="connecting to shim ec4efbe22e681bc4e2a2a9df23db7ecdfa463c9057578adec6e674e650864bc1" address="unix:///run/containerd/s/0561573acb2ff748ca6c7a1cbb43b0f740110a4c238bf9b74528b1addcf34b21" namespace=k8s.io protocol=ttrpc version=3
Jan 23 00:08:33.709283 systemd[1]: Started cri-containerd-b36d004a05847838c6fee710d4387858473a98d7219695f2fb223122254b50ed.scope - libcontainer container b36d004a05847838c6fee710d4387858473a98d7219695f2fb223122254b50ed.
Jan 23 00:08:33.722286 systemd[1]: Started cri-containerd-ec4efbe22e681bc4e2a2a9df23db7ecdfa463c9057578adec6e674e650864bc1.scope - libcontainer container ec4efbe22e681bc4e2a2a9df23db7ecdfa463c9057578adec6e674e650864bc1.
Jan 23 00:08:33.770442 containerd[1545]: time="2026-01-23T00:08:33.770396863Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-84q6v,Uid:51a5701f-3875-4ced-babf-d431662e1261,Namespace:kube-system,Attempt:0,} returns sandbox id \"b36d004a05847838c6fee710d4387858473a98d7219695f2fb223122254b50ed\"" Jan 23 00:08:33.776935 containerd[1545]: time="2026-01-23T00:08:33.776828824Z" level=info msg="CreateContainer within sandbox \"b36d004a05847838c6fee710d4387858473a98d7219695f2fb223122254b50ed\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 23 00:08:33.795689 containerd[1545]: time="2026-01-23T00:08:33.795452638Z" level=info msg="Container 188562637e5bbf2e7991f9a24f65fd84d92fffc4b6f93294b81fc8faeb4a0a37: CDI devices from CRI Config.CDIDevices: []" Jan 23 00:08:33.799613 containerd[1545]: time="2026-01-23T00:08:33.799462493Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-nlc5b,Uid:284e13f5-421f-493a-927c-d3a3f64bf631,Namespace:kube-system,Attempt:0,} returns sandbox id \"ec4efbe22e681bc4e2a2a9df23db7ecdfa463c9057578adec6e674e650864bc1\"" Jan 23 00:08:33.804033 containerd[1545]: time="2026-01-23T00:08:33.803949249Z" level=info msg="CreateContainer within sandbox \"ec4efbe22e681bc4e2a2a9df23db7ecdfa463c9057578adec6e674e650864bc1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 23 00:08:33.806603 containerd[1545]: time="2026-01-23T00:08:33.806554483Z" level=info msg="CreateContainer within sandbox \"b36d004a05847838c6fee710d4387858473a98d7219695f2fb223122254b50ed\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"188562637e5bbf2e7991f9a24f65fd84d92fffc4b6f93294b81fc8faeb4a0a37\"" Jan 23 00:08:33.812401 containerd[1545]: time="2026-01-23T00:08:33.812368217Z" level=info msg="Container 030e6824cf6043b3f0795b66339a831eca00f87161e9b9e2fe75f279acb96e21: CDI devices from CRI Config.CDIDevices: []" Jan 23 00:08:33.825110 containerd[1545]: 
time="2026-01-23T00:08:33.824539589Z" level=info msg="StartContainer for \"188562637e5bbf2e7991f9a24f65fd84d92fffc4b6f93294b81fc8faeb4a0a37\"" Jan 23 00:08:33.825992 containerd[1545]: time="2026-01-23T00:08:33.825937490Z" level=info msg="connecting to shim 188562637e5bbf2e7991f9a24f65fd84d92fffc4b6f93294b81fc8faeb4a0a37" address="unix:///run/containerd/s/b2a976307db8e6bc988d888fe262ed5676686c569cf09b1c5e97946aae6e453c" protocol=ttrpc version=3 Jan 23 00:08:33.834352 containerd[1545]: time="2026-01-23T00:08:33.834230972Z" level=info msg="CreateContainer within sandbox \"ec4efbe22e681bc4e2a2a9df23db7ecdfa463c9057578adec6e674e650864bc1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"030e6824cf6043b3f0795b66339a831eca00f87161e9b9e2fe75f279acb96e21\"" Jan 23 00:08:33.835295 containerd[1545]: time="2026-01-23T00:08:33.834943523Z" level=info msg="StartContainer for \"030e6824cf6043b3f0795b66339a831eca00f87161e9b9e2fe75f279acb96e21\"" Jan 23 00:08:33.836167 containerd[1545]: time="2026-01-23T00:08:33.836138935Z" level=info msg="connecting to shim 030e6824cf6043b3f0795b66339a831eca00f87161e9b9e2fe75f279acb96e21" address="unix:///run/containerd/s/0561573acb2ff748ca6c7a1cbb43b0f740110a4c238bf9b74528b1addcf34b21" protocol=ttrpc version=3 Jan 23 00:08:33.853268 systemd[1]: Started cri-containerd-188562637e5bbf2e7991f9a24f65fd84d92fffc4b6f93294b81fc8faeb4a0a37.scope - libcontainer container 188562637e5bbf2e7991f9a24f65fd84d92fffc4b6f93294b81fc8faeb4a0a37. Jan 23 00:08:33.861297 systemd[1]: Started cri-containerd-030e6824cf6043b3f0795b66339a831eca00f87161e9b9e2fe75f279acb96e21.scope - libcontainer container 030e6824cf6043b3f0795b66339a831eca00f87161e9b9e2fe75f279acb96e21. 
Jan 23 00:08:33.909187 containerd[1545]: time="2026-01-23T00:08:33.908307848Z" level=info msg="StartContainer for \"188562637e5bbf2e7991f9a24f65fd84d92fffc4b6f93294b81fc8faeb4a0a37\" returns successfully" Jan 23 00:08:33.915339 containerd[1545]: time="2026-01-23T00:08:33.915307514Z" level=info msg="StartContainer for \"030e6824cf6043b3f0795b66339a831eca00f87161e9b9e2fe75f279acb96e21\" returns successfully" Jan 23 00:08:34.316846 kubelet[2760]: I0123 00:08:34.316766 2760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-84q6v" podStartSLOduration=19.316747118 podStartE2EDuration="19.316747118s" podCreationTimestamp="2026-01-23 00:08:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 00:08:34.312517578 +0000 UTC m=+24.272888066" watchObservedRunningTime="2026-01-23 00:08:34.316747118 +0000 UTC m=+24.277117526" Jan 23 00:08:34.332099 kubelet[2760]: I0123 00:08:34.332019 2760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-nlc5b" podStartSLOduration=19.331998889 podStartE2EDuration="19.331998889s" podCreationTimestamp="2026-01-23 00:08:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 00:08:34.331549869 +0000 UTC m=+24.291920277" watchObservedRunningTime="2026-01-23 00:08:34.331998889 +0000 UTC m=+24.292369377" Jan 23 00:08:34.652164 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2792487068.mount: Deactivated successfully. Jan 23 00:08:43.178772 kubelet[2760]: I0123 00:08:43.178415 2760 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 23 00:10:29.301274 systemd[1]: Started sshd@7-188.245.94.123:22-68.220.241.50:37482.service - OpenSSH per-connection server daemon (68.220.241.50:37482). 
Jan 23 00:10:29.939144 sshd[4081]: Accepted publickey for core from 68.220.241.50 port 37482 ssh2: RSA SHA256:wScRSXm5JHKrAeSxAplDhSGBmu9+62e7CgH0oSNisYE Jan 23 00:10:29.940685 sshd-session[4081]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 00:10:29.946804 systemd-logind[1529]: New session 8 of user core. Jan 23 00:10:29.951405 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 23 00:10:30.479851 sshd[4084]: Connection closed by 68.220.241.50 port 37482 Jan 23 00:10:30.481562 sshd-session[4081]: pam_unix(sshd:session): session closed for user core Jan 23 00:10:30.486574 systemd[1]: sshd@7-188.245.94.123:22-68.220.241.50:37482.service: Deactivated successfully. Jan 23 00:10:30.489748 systemd[1]: session-8.scope: Deactivated successfully. Jan 23 00:10:30.491586 systemd-logind[1529]: Session 8 logged out. Waiting for processes to exit. Jan 23 00:10:30.493593 systemd-logind[1529]: Removed session 8. Jan 23 00:10:35.593200 systemd[1]: Started sshd@8-188.245.94.123:22-68.220.241.50:44684.service - OpenSSH per-connection server daemon (68.220.241.50:44684). Jan 23 00:10:36.214688 sshd[4098]: Accepted publickey for core from 68.220.241.50 port 44684 ssh2: RSA SHA256:wScRSXm5JHKrAeSxAplDhSGBmu9+62e7CgH0oSNisYE Jan 23 00:10:36.217125 sshd-session[4098]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 00:10:36.222744 systemd-logind[1529]: New session 9 of user core. Jan 23 00:10:36.237331 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 23 00:10:36.718121 sshd[4101]: Connection closed by 68.220.241.50 port 44684 Jan 23 00:10:36.718960 sshd-session[4098]: pam_unix(sshd:session): session closed for user core Jan 23 00:10:36.724996 systemd-logind[1529]: Session 9 logged out. Waiting for processes to exit. Jan 23 00:10:36.725847 systemd[1]: sshd@8-188.245.94.123:22-68.220.241.50:44684.service: Deactivated successfully. 
Jan 23 00:10:36.729386 systemd[1]: session-9.scope: Deactivated successfully. Jan 23 00:10:36.732181 systemd-logind[1529]: Removed session 9. Jan 23 00:10:41.832071 systemd[1]: Started sshd@9-188.245.94.123:22-68.220.241.50:44688.service - OpenSSH per-connection server daemon (68.220.241.50:44688). Jan 23 00:10:42.472611 sshd[4114]: Accepted publickey for core from 68.220.241.50 port 44688 ssh2: RSA SHA256:wScRSXm5JHKrAeSxAplDhSGBmu9+62e7CgH0oSNisYE Jan 23 00:10:42.475370 sshd-session[4114]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 00:10:42.483041 systemd-logind[1529]: New session 10 of user core. Jan 23 00:10:42.487341 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 23 00:10:42.984101 sshd[4117]: Connection closed by 68.220.241.50 port 44688 Jan 23 00:10:42.985001 sshd-session[4114]: pam_unix(sshd:session): session closed for user core Jan 23 00:10:42.990968 systemd-logind[1529]: Session 10 logged out. Waiting for processes to exit. Jan 23 00:10:42.991864 systemd[1]: sshd@9-188.245.94.123:22-68.220.241.50:44688.service: Deactivated successfully. Jan 23 00:10:42.995043 systemd[1]: session-10.scope: Deactivated successfully. Jan 23 00:10:42.998398 systemd-logind[1529]: Removed session 10. Jan 23 00:10:43.093710 systemd[1]: Started sshd@10-188.245.94.123:22-68.220.241.50:46338.service - OpenSSH per-connection server daemon (68.220.241.50:46338). Jan 23 00:10:43.710948 sshd[4129]: Accepted publickey for core from 68.220.241.50 port 46338 ssh2: RSA SHA256:wScRSXm5JHKrAeSxAplDhSGBmu9+62e7CgH0oSNisYE Jan 23 00:10:43.711969 sshd-session[4129]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 00:10:43.719230 systemd-logind[1529]: New session 11 of user core. Jan 23 00:10:43.730337 systemd[1]: Started session-11.scope - Session 11 of User core. 
Jan 23 00:10:44.246371 sshd[4132]: Connection closed by 68.220.241.50 port 46338 Jan 23 00:10:44.247335 sshd-session[4129]: pam_unix(sshd:session): session closed for user core Jan 23 00:10:44.252523 systemd[1]: sshd@10-188.245.94.123:22-68.220.241.50:46338.service: Deactivated successfully. Jan 23 00:10:44.257326 systemd[1]: session-11.scope: Deactivated successfully. Jan 23 00:10:44.258870 systemd-logind[1529]: Session 11 logged out. Waiting for processes to exit. Jan 23 00:10:44.261137 systemd-logind[1529]: Removed session 11. Jan 23 00:10:44.356458 systemd[1]: Started sshd@11-188.245.94.123:22-68.220.241.50:46354.service - OpenSSH per-connection server daemon (68.220.241.50:46354). Jan 23 00:10:44.977496 sshd[4142]: Accepted publickey for core from 68.220.241.50 port 46354 ssh2: RSA SHA256:wScRSXm5JHKrAeSxAplDhSGBmu9+62e7CgH0oSNisYE Jan 23 00:10:44.979593 sshd-session[4142]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 00:10:44.984818 systemd-logind[1529]: New session 12 of user core. Jan 23 00:10:44.993425 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 23 00:10:45.480455 sshd[4145]: Connection closed by 68.220.241.50 port 46354 Jan 23 00:10:45.481269 sshd-session[4142]: pam_unix(sshd:session): session closed for user core Jan 23 00:10:45.488858 systemd[1]: sshd@11-188.245.94.123:22-68.220.241.50:46354.service: Deactivated successfully. Jan 23 00:10:45.493847 systemd[1]: session-12.scope: Deactivated successfully. Jan 23 00:10:45.495264 systemd-logind[1529]: Session 12 logged out. Waiting for processes to exit. Jan 23 00:10:45.497905 systemd-logind[1529]: Removed session 12. Jan 23 00:10:50.595309 systemd[1]: Started sshd@12-188.245.94.123:22-68.220.241.50:46358.service - OpenSSH per-connection server daemon (68.220.241.50:46358). 
Jan 23 00:10:51.234166 sshd[4158]: Accepted publickey for core from 68.220.241.50 port 46358 ssh2: RSA SHA256:wScRSXm5JHKrAeSxAplDhSGBmu9+62e7CgH0oSNisYE Jan 23 00:10:51.235511 sshd-session[4158]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 00:10:51.242065 systemd-logind[1529]: New session 13 of user core. Jan 23 00:10:51.248368 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 23 00:10:51.740856 sshd[4161]: Connection closed by 68.220.241.50 port 46358 Jan 23 00:10:51.741403 sshd-session[4158]: pam_unix(sshd:session): session closed for user core Jan 23 00:10:51.748299 systemd[1]: sshd@12-188.245.94.123:22-68.220.241.50:46358.service: Deactivated successfully. Jan 23 00:10:51.752474 systemd[1]: session-13.scope: Deactivated successfully. Jan 23 00:10:51.754607 systemd-logind[1529]: Session 13 logged out. Waiting for processes to exit. Jan 23 00:10:51.757182 systemd-logind[1529]: Removed session 13. Jan 23 00:10:56.860752 systemd[1]: Started sshd@13-188.245.94.123:22-68.220.241.50:44916.service - OpenSSH per-connection server daemon (68.220.241.50:44916). Jan 23 00:10:57.509671 sshd[4172]: Accepted publickey for core from 68.220.241.50 port 44916 ssh2: RSA SHA256:wScRSXm5JHKrAeSxAplDhSGBmu9+62e7CgH0oSNisYE Jan 23 00:10:57.511796 sshd-session[4172]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 00:10:57.517288 systemd-logind[1529]: New session 14 of user core. Jan 23 00:10:57.526325 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 23 00:10:58.033132 sshd[4175]: Connection closed by 68.220.241.50 port 44916 Jan 23 00:10:58.032969 sshd-session[4172]: pam_unix(sshd:session): session closed for user core Jan 23 00:10:58.037718 systemd[1]: sshd@13-188.245.94.123:22-68.220.241.50:44916.service: Deactivated successfully. Jan 23 00:10:58.039964 systemd[1]: session-14.scope: Deactivated successfully. 
Jan 23 00:10:58.042318 systemd-logind[1529]: Session 14 logged out. Waiting for processes to exit. Jan 23 00:10:58.044374 systemd-logind[1529]: Removed session 14. Jan 23 00:10:58.144924 systemd[1]: Started sshd@14-188.245.94.123:22-68.220.241.50:44930.service - OpenSSH per-connection server daemon (68.220.241.50:44930). Jan 23 00:10:58.780121 sshd[4187]: Accepted publickey for core from 68.220.241.50 port 44930 ssh2: RSA SHA256:wScRSXm5JHKrAeSxAplDhSGBmu9+62e7CgH0oSNisYE Jan 23 00:10:58.782186 sshd-session[4187]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 00:10:58.787151 systemd-logind[1529]: New session 15 of user core. Jan 23 00:10:58.794362 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 23 00:10:59.361395 sshd[4190]: Connection closed by 68.220.241.50 port 44930 Jan 23 00:10:59.362430 sshd-session[4187]: pam_unix(sshd:session): session closed for user core Jan 23 00:10:59.369399 systemd[1]: sshd@14-188.245.94.123:22-68.220.241.50:44930.service: Deactivated successfully. Jan 23 00:10:59.373298 systemd[1]: session-15.scope: Deactivated successfully. Jan 23 00:10:59.375353 systemd-logind[1529]: Session 15 logged out. Waiting for processes to exit. Jan 23 00:10:59.378248 systemd-logind[1529]: Removed session 15. Jan 23 00:10:59.471378 systemd[1]: Started sshd@15-188.245.94.123:22-68.220.241.50:44936.service - OpenSSH per-connection server daemon (68.220.241.50:44936). Jan 23 00:11:00.093723 sshd[4200]: Accepted publickey for core from 68.220.241.50 port 44936 ssh2: RSA SHA256:wScRSXm5JHKrAeSxAplDhSGBmu9+62e7CgH0oSNisYE Jan 23 00:11:00.095891 sshd-session[4200]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 00:11:00.101144 systemd-logind[1529]: New session 16 of user core. Jan 23 00:11:00.106309 systemd[1]: Started session-16.scope - Session 16 of User core. 
Jan 23 00:11:01.230469 sshd[4203]: Connection closed by 68.220.241.50 port 44936 Jan 23 00:11:01.232527 sshd-session[4200]: pam_unix(sshd:session): session closed for user core Jan 23 00:11:01.238809 systemd[1]: sshd@15-188.245.94.123:22-68.220.241.50:44936.service: Deactivated successfully. Jan 23 00:11:01.241917 systemd[1]: session-16.scope: Deactivated successfully. Jan 23 00:11:01.243157 systemd-logind[1529]: Session 16 logged out. Waiting for processes to exit. Jan 23 00:11:01.245397 systemd-logind[1529]: Removed session 16. Jan 23 00:11:01.344812 systemd[1]: Started sshd@16-188.245.94.123:22-68.220.241.50:44952.service - OpenSSH per-connection server daemon (68.220.241.50:44952). Jan 23 00:11:01.979473 sshd[4220]: Accepted publickey for core from 68.220.241.50 port 44952 ssh2: RSA SHA256:wScRSXm5JHKrAeSxAplDhSGBmu9+62e7CgH0oSNisYE Jan 23 00:11:01.982226 sshd-session[4220]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 00:11:01.989342 systemd-logind[1529]: New session 17 of user core. Jan 23 00:11:01.993355 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 23 00:11:02.610400 sshd[4223]: Connection closed by 68.220.241.50 port 44952 Jan 23 00:11:02.609780 sshd-session[4220]: pam_unix(sshd:session): session closed for user core Jan 23 00:11:02.616600 systemd[1]: sshd@16-188.245.94.123:22-68.220.241.50:44952.service: Deactivated successfully. Jan 23 00:11:02.621670 systemd[1]: session-17.scope: Deactivated successfully. Jan 23 00:11:02.624150 systemd-logind[1529]: Session 17 logged out. Waiting for processes to exit. Jan 23 00:11:02.625917 systemd-logind[1529]: Removed session 17. Jan 23 00:11:02.720025 systemd[1]: Started sshd@17-188.245.94.123:22-68.220.241.50:37484.service - OpenSSH per-connection server daemon (68.220.241.50:37484). 
Jan 23 00:11:03.355713 sshd[4233]: Accepted publickey for core from 68.220.241.50 port 37484 ssh2: RSA SHA256:wScRSXm5JHKrAeSxAplDhSGBmu9+62e7CgH0oSNisYE Jan 23 00:11:03.358074 sshd-session[4233]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 00:11:03.363663 systemd-logind[1529]: New session 18 of user core. Jan 23 00:11:03.373329 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 23 00:11:03.865383 sshd[4236]: Connection closed by 68.220.241.50 port 37484 Jan 23 00:11:03.866595 sshd-session[4233]: pam_unix(sshd:session): session closed for user core Jan 23 00:11:03.872907 systemd-logind[1529]: Session 18 logged out. Waiting for processes to exit. Jan 23 00:11:03.874193 systemd[1]: sshd@17-188.245.94.123:22-68.220.241.50:37484.service: Deactivated successfully. Jan 23 00:11:03.876568 systemd[1]: session-18.scope: Deactivated successfully. Jan 23 00:11:03.878589 systemd-logind[1529]: Removed session 18. Jan 23 00:11:08.994523 systemd[1]: Started sshd@18-188.245.94.123:22-68.220.241.50:37498.service - OpenSSH per-connection server daemon (68.220.241.50:37498). Jan 23 00:11:09.660040 sshd[4250]: Accepted publickey for core from 68.220.241.50 port 37498 ssh2: RSA SHA256:wScRSXm5JHKrAeSxAplDhSGBmu9+62e7CgH0oSNisYE Jan 23 00:11:09.662225 sshd-session[4250]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 00:11:09.668312 systemd-logind[1529]: New session 19 of user core. Jan 23 00:11:09.676468 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 23 00:11:10.185774 sshd[4253]: Connection closed by 68.220.241.50 port 37498 Jan 23 00:11:10.185284 sshd-session[4250]: pam_unix(sshd:session): session closed for user core Jan 23 00:11:10.191032 systemd[1]: sshd@18-188.245.94.123:22-68.220.241.50:37498.service: Deactivated successfully. Jan 23 00:11:10.195798 systemd[1]: session-19.scope: Deactivated successfully. 
Jan 23 00:11:10.197654 systemd-logind[1529]: Session 19 logged out. Waiting for processes to exit. Jan 23 00:11:10.199466 systemd-logind[1529]: Removed session 19. Jan 23 00:11:15.294008 systemd[1]: Started sshd@19-188.245.94.123:22-68.220.241.50:34720.service - OpenSSH per-connection server daemon (68.220.241.50:34720). Jan 23 00:11:15.927139 sshd[4267]: Accepted publickey for core from 68.220.241.50 port 34720 ssh2: RSA SHA256:wScRSXm5JHKrAeSxAplDhSGBmu9+62e7CgH0oSNisYE Jan 23 00:11:15.928533 sshd-session[4267]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 00:11:15.933998 systemd-logind[1529]: New session 20 of user core. Jan 23 00:11:15.938508 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 23 00:11:16.440978 sshd[4272]: Connection closed by 68.220.241.50 port 34720 Jan 23 00:11:16.441759 sshd-session[4267]: pam_unix(sshd:session): session closed for user core Jan 23 00:11:16.447644 systemd[1]: sshd@19-188.245.94.123:22-68.220.241.50:34720.service: Deactivated successfully. Jan 23 00:11:16.447690 systemd-logind[1529]: Session 20 logged out. Waiting for processes to exit. Jan 23 00:11:16.452734 systemd[1]: session-20.scope: Deactivated successfully. Jan 23 00:11:16.456856 systemd-logind[1529]: Removed session 20. Jan 23 00:11:21.557484 systemd[1]: Started sshd@20-188.245.94.123:22-68.220.241.50:34722.service - OpenSSH per-connection server daemon (68.220.241.50:34722). Jan 23 00:11:22.192750 sshd[4284]: Accepted publickey for core from 68.220.241.50 port 34722 ssh2: RSA SHA256:wScRSXm5JHKrAeSxAplDhSGBmu9+62e7CgH0oSNisYE Jan 23 00:11:22.195240 sshd-session[4284]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 00:11:22.201369 systemd-logind[1529]: New session 21 of user core. Jan 23 00:11:22.216486 systemd[1]: Started session-21.scope - Session 21 of User core. 
Jan 23 00:11:22.694112 sshd[4287]: Connection closed by 68.220.241.50 port 34722 Jan 23 00:11:22.695254 sshd-session[4284]: pam_unix(sshd:session): session closed for user core Jan 23 00:11:22.702806 systemd[1]: sshd@20-188.245.94.123:22-68.220.241.50:34722.service: Deactivated successfully. Jan 23 00:11:22.705513 systemd[1]: session-21.scope: Deactivated successfully. Jan 23 00:11:22.706295 systemd-logind[1529]: Session 21 logged out. Waiting for processes to exit. Jan 23 00:11:22.707717 systemd-logind[1529]: Removed session 21. Jan 23 00:11:22.804482 systemd[1]: Started sshd@21-188.245.94.123:22-68.220.241.50:60566.service - OpenSSH per-connection server daemon (68.220.241.50:60566). Jan 23 00:11:23.446826 sshd[4299]: Accepted publickey for core from 68.220.241.50 port 60566 ssh2: RSA SHA256:wScRSXm5JHKrAeSxAplDhSGBmu9+62e7CgH0oSNisYE Jan 23 00:11:23.449206 sshd-session[4299]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 00:11:23.455362 systemd-logind[1529]: New session 22 of user core. Jan 23 00:11:23.465414 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 23 00:11:25.820244 containerd[1545]: time="2026-01-23T00:11:25.819988283Z" level=info msg="StopContainer for \"2dcb66d5818d277444299f76b6f80912511f46bbbbacadcbf39469c55dc503dc\" with timeout 30 (s)" Jan 23 00:11:25.823418 containerd[1545]: time="2026-01-23T00:11:25.823307400Z" level=info msg="Stop container \"2dcb66d5818d277444299f76b6f80912511f46bbbbacadcbf39469c55dc503dc\" with signal terminated" Jan 23 00:11:25.840841 systemd[1]: cri-containerd-2dcb66d5818d277444299f76b6f80912511f46bbbbacadcbf39469c55dc503dc.scope: Deactivated successfully. 
Jan 23 00:11:25.843286 containerd[1545]: time="2026-01-23T00:11:25.843234623Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 23 00:11:25.844958 containerd[1545]: time="2026-01-23T00:11:25.844920141Z" level=info msg="received container exit event container_id:\"2dcb66d5818d277444299f76b6f80912511f46bbbbacadcbf39469c55dc503dc\" id:\"2dcb66d5818d277444299f76b6f80912511f46bbbbacadcbf39469c55dc503dc\" pid:3277 exited_at:{seconds:1769127085 nanos:844625942}" Jan 23 00:11:25.851848 containerd[1545]: time="2026-01-23T00:11:25.851711416Z" level=info msg="StopContainer for \"17b35f5f89e99d752eb336f60cb9421f3bd52f7f4cc6048851b3dca359903d5a\" with timeout 2 (s)" Jan 23 00:11:25.852776 containerd[1545]: time="2026-01-23T00:11:25.852744455Z" level=info msg="Stop container \"17b35f5f89e99d752eb336f60cb9421f3bd52f7f4cc6048851b3dca359903d5a\" with signal terminated" Jan 23 00:11:25.864586 systemd-networkd[1418]: lxc_health: Link DOWN Jan 23 00:11:25.864594 systemd-networkd[1418]: lxc_health: Lost carrier Jan 23 00:11:25.893425 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2dcb66d5818d277444299f76b6f80912511f46bbbbacadcbf39469c55dc503dc-rootfs.mount: Deactivated successfully. Jan 23 00:11:25.896496 systemd[1]: cri-containerd-17b35f5f89e99d752eb336f60cb9421f3bd52f7f4cc6048851b3dca359903d5a.scope: Deactivated successfully. 
Jan 23 00:11:25.897066 containerd[1545]: time="2026-01-23T00:11:25.897034257Z" level=info msg="received container exit event container_id:\"17b35f5f89e99d752eb336f60cb9421f3bd52f7f4cc6048851b3dca359903d5a\" id:\"17b35f5f89e99d752eb336f60cb9421f3bd52f7f4cc6048851b3dca359903d5a\" pid:3390 exited_at:{seconds:1769127085 nanos:895705539}" Jan 23 00:11:25.897265 systemd[1]: cri-containerd-17b35f5f89e99d752eb336f60cb9421f3bd52f7f4cc6048851b3dca359903d5a.scope: Consumed 6.981s CPU time, 127.1M memory peak, 128K read from disk, 12.9M written to disk. Jan 23 00:11:25.920628 containerd[1545]: time="2026-01-23T00:11:25.920562198Z" level=info msg="StopContainer for \"2dcb66d5818d277444299f76b6f80912511f46bbbbacadcbf39469c55dc503dc\" returns successfully" Jan 23 00:11:25.921913 containerd[1545]: time="2026-01-23T00:11:25.921676317Z" level=info msg="StopPodSandbox for \"8116d929c61066cd6f907f891ca0f9a54ea6559c60dece2e54edefb3da08da1a\"" Jan 23 00:11:25.921913 containerd[1545]: time="2026-01-23T00:11:25.921740597Z" level=info msg="Container to stop \"2dcb66d5818d277444299f76b6f80912511f46bbbbacadcbf39469c55dc503dc\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 23 00:11:25.924133 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-17b35f5f89e99d752eb336f60cb9421f3bd52f7f4cc6048851b3dca359903d5a-rootfs.mount: Deactivated successfully. Jan 23 00:11:25.933099 systemd[1]: cri-containerd-8116d929c61066cd6f907f891ca0f9a54ea6559c60dece2e54edefb3da08da1a.scope: Deactivated successfully. 
Jan 23 00:11:25.936309 containerd[1545]: time="2026-01-23T00:11:25.936173984Z" level=info msg="received sandbox exit event container_id:\"8116d929c61066cd6f907f891ca0f9a54ea6559c60dece2e54edefb3da08da1a\" id:\"8116d929c61066cd6f907f891ca0f9a54ea6559c60dece2e54edefb3da08da1a\" exit_status:137 exited_at:{seconds:1769127085 nanos:935752705}" monitor_name=podsandbox Jan 23 00:11:25.939676 containerd[1545]: time="2026-01-23T00:11:25.939640181Z" level=info msg="StopContainer for \"17b35f5f89e99d752eb336f60cb9421f3bd52f7f4cc6048851b3dca359903d5a\" returns successfully" Jan 23 00:11:25.940911 containerd[1545]: time="2026-01-23T00:11:25.940310421Z" level=info msg="StopPodSandbox for \"1bbf383ef46e230cb52a351acd38c2c8a4af2a0d829d4bbf32a77c6900a98606\"" Jan 23 00:11:25.940911 containerd[1545]: time="2026-01-23T00:11:25.940372501Z" level=info msg="Container to stop \"d7a4ccc5053684a4f0fdfd37ba2746f369dc6c192fa5949b44ff348a036662a5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 23 00:11:25.940911 containerd[1545]: time="2026-01-23T00:11:25.940383581Z" level=info msg="Container to stop \"17b35f5f89e99d752eb336f60cb9421f3bd52f7f4cc6048851b3dca359903d5a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 23 00:11:25.940911 containerd[1545]: time="2026-01-23T00:11:25.940392661Z" level=info msg="Container to stop \"509811dd54cf9b3ba243ac64e08b6b8b93e317ede7c42c353ac583c74119d733\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 23 00:11:25.940911 containerd[1545]: time="2026-01-23T00:11:25.940401661Z" level=info msg="Container to stop \"a296809a47dd54e86e7c24677c4a8fdc315c4e392689fe74f4384d135a838fd1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 23 00:11:25.940911 containerd[1545]: time="2026-01-23T00:11:25.940409861Z" level=info msg="Container to stop \"59ce2086b9178abfd6975bc4edcc8b58be672525536435e6e376fcd517bc468b\" must be in running or unknown state, current 
state \"CONTAINER_EXITED\"" Jan 23 00:11:25.955027 systemd[1]: cri-containerd-1bbf383ef46e230cb52a351acd38c2c8a4af2a0d829d4bbf32a77c6900a98606.scope: Deactivated successfully. Jan 23 00:11:25.957276 containerd[1545]: time="2026-01-23T00:11:25.957222687Z" level=info msg="received sandbox exit event container_id:\"1bbf383ef46e230cb52a351acd38c2c8a4af2a0d829d4bbf32a77c6900a98606\" id:\"1bbf383ef46e230cb52a351acd38c2c8a4af2a0d829d4bbf32a77c6900a98606\" exit_status:137 exited_at:{seconds:1769127085 nanos:956665327}" monitor_name=podsandbox Jan 23 00:11:25.976046 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8116d929c61066cd6f907f891ca0f9a54ea6559c60dece2e54edefb3da08da1a-rootfs.mount: Deactivated successfully. Jan 23 00:11:25.984158 containerd[1545]: time="2026-01-23T00:11:25.984066464Z" level=info msg="shim disconnected" id=8116d929c61066cd6f907f891ca0f9a54ea6559c60dece2e54edefb3da08da1a namespace=k8s.io Jan 23 00:11:25.984158 containerd[1545]: time="2026-01-23T00:11:25.984123504Z" level=warning msg="cleaning up after shim disconnected" id=8116d929c61066cd6f907f891ca0f9a54ea6559c60dece2e54edefb3da08da1a namespace=k8s.io Jan 23 00:11:25.984158 containerd[1545]: time="2026-01-23T00:11:25.984153704Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 23 00:11:25.991183 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1bbf383ef46e230cb52a351acd38c2c8a4af2a0d829d4bbf32a77c6900a98606-rootfs.mount: Deactivated successfully. 
Jan 23 00:11:25.996915 containerd[1545]: time="2026-01-23T00:11:25.996749293Z" level=info msg="shim disconnected" id=1bbf383ef46e230cb52a351acd38c2c8a4af2a0d829d4bbf32a77c6900a98606 namespace=k8s.io Jan 23 00:11:25.996915 containerd[1545]: time="2026-01-23T00:11:25.996788533Z" level=warning msg="cleaning up after shim disconnected" id=1bbf383ef46e230cb52a351acd38c2c8a4af2a0d829d4bbf32a77c6900a98606 namespace=k8s.io Jan 23 00:11:25.996915 containerd[1545]: time="2026-01-23T00:11:25.996821653Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 23 00:11:26.010291 containerd[1545]: time="2026-01-23T00:11:26.010208203Z" level=info msg="received sandbox container exit event sandbox_id:\"8116d929c61066cd6f907f891ca0f9a54ea6559c60dece2e54edefb3da08da1a\" exit_status:137 exited_at:{seconds:1769127085 nanos:935752705}" monitor_name=criService Jan 23 00:11:26.010520 containerd[1545]: time="2026-01-23T00:11:26.010435683Z" level=info msg="TearDown network for sandbox \"8116d929c61066cd6f907f891ca0f9a54ea6559c60dece2e54edefb3da08da1a\" successfully" Jan 23 00:11:26.010520 containerd[1545]: time="2026-01-23T00:11:26.010464203Z" level=info msg="StopPodSandbox for \"8116d929c61066cd6f907f891ca0f9a54ea6559c60dece2e54edefb3da08da1a\" returns successfully" Jan 23 00:11:26.012572 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8116d929c61066cd6f907f891ca0f9a54ea6559c60dece2e54edefb3da08da1a-shm.mount: Deactivated successfully. 
Jan 23 00:11:26.029155 containerd[1545]: time="2026-01-23T00:11:26.028557230Z" level=info msg="received sandbox container exit event sandbox_id:\"1bbf383ef46e230cb52a351acd38c2c8a4af2a0d829d4bbf32a77c6900a98606\" exit_status:137 exited_at:{seconds:1769127085 nanos:956665327}" monitor_name=criService Jan 23 00:11:26.029155 containerd[1545]: time="2026-01-23T00:11:26.028668590Z" level=info msg="TearDown network for sandbox \"1bbf383ef46e230cb52a351acd38c2c8a4af2a0d829d4bbf32a77c6900a98606\" successfully" Jan 23 00:11:26.029155 containerd[1545]: time="2026-01-23T00:11:26.028691870Z" level=info msg="StopPodSandbox for \"1bbf383ef46e230cb52a351acd38c2c8a4af2a0d829d4bbf32a77c6900a98606\" returns successfully" Jan 23 00:11:26.184281 kubelet[2760]: I0123 00:11:26.183475 2760 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/701d00e6-1a2f-4263-ab42-5f03ef7ab716-cni-path\") pod \"701d00e6-1a2f-4263-ab42-5f03ef7ab716\" (UID: \"701d00e6-1a2f-4263-ab42-5f03ef7ab716\") " Jan 23 00:11:26.184281 kubelet[2760]: I0123 00:11:26.183521 2760 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/701d00e6-1a2f-4263-ab42-5f03ef7ab716-clustermesh-secrets\") pod \"701d00e6-1a2f-4263-ab42-5f03ef7ab716\" (UID: \"701d00e6-1a2f-4263-ab42-5f03ef7ab716\") " Jan 23 00:11:26.184281 kubelet[2760]: I0123 00:11:26.183547 2760 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3f5ce586-f31a-4287-8fbe-8804465503f3-cilium-config-path\") pod \"3f5ce586-f31a-4287-8fbe-8804465503f3\" (UID: \"3f5ce586-f31a-4287-8fbe-8804465503f3\") " Jan 23 00:11:26.184281 kubelet[2760]: I0123 00:11:26.183563 2760 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/701d00e6-1a2f-4263-ab42-5f03ef7ab716-etc-cni-netd\") pod \"701d00e6-1a2f-4263-ab42-5f03ef7ab716\" (UID: \"701d00e6-1a2f-4263-ab42-5f03ef7ab716\") " Jan 23 00:11:26.184281 kubelet[2760]: I0123 00:11:26.183577 2760 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/701d00e6-1a2f-4263-ab42-5f03ef7ab716-cilium-run\") pod \"701d00e6-1a2f-4263-ab42-5f03ef7ab716\" (UID: \"701d00e6-1a2f-4263-ab42-5f03ef7ab716\") " Jan 23 00:11:26.184281 kubelet[2760]: I0123 00:11:26.183594 2760 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/701d00e6-1a2f-4263-ab42-5f03ef7ab716-cilium-config-path\") pod \"701d00e6-1a2f-4263-ab42-5f03ef7ab716\" (UID: \"701d00e6-1a2f-4263-ab42-5f03ef7ab716\") " Jan 23 00:11:26.184843 kubelet[2760]: I0123 00:11:26.183609 2760 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/701d00e6-1a2f-4263-ab42-5f03ef7ab716-cilium-cgroup\") pod \"701d00e6-1a2f-4263-ab42-5f03ef7ab716\" (UID: \"701d00e6-1a2f-4263-ab42-5f03ef7ab716\") " Jan 23 00:11:26.184843 kubelet[2760]: I0123 00:11:26.183649 2760 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/701d00e6-1a2f-4263-ab42-5f03ef7ab716-host-proc-sys-net\") pod \"701d00e6-1a2f-4263-ab42-5f03ef7ab716\" (UID: \"701d00e6-1a2f-4263-ab42-5f03ef7ab716\") " Jan 23 00:11:26.184843 kubelet[2760]: I0123 00:11:26.183665 2760 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/701d00e6-1a2f-4263-ab42-5f03ef7ab716-lib-modules\") pod \"701d00e6-1a2f-4263-ab42-5f03ef7ab716\" (UID: \"701d00e6-1a2f-4263-ab42-5f03ef7ab716\") " Jan 23 00:11:26.184843 kubelet[2760]: I0123 00:11:26.183681 2760 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/701d00e6-1a2f-4263-ab42-5f03ef7ab716-host-proc-sys-kernel\") pod \"701d00e6-1a2f-4263-ab42-5f03ef7ab716\" (UID: \"701d00e6-1a2f-4263-ab42-5f03ef7ab716\") " Jan 23 00:11:26.184843 kubelet[2760]: I0123 00:11:26.183698 2760 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/701d00e6-1a2f-4263-ab42-5f03ef7ab716-hostproc\") pod \"701d00e6-1a2f-4263-ab42-5f03ef7ab716\" (UID: \"701d00e6-1a2f-4263-ab42-5f03ef7ab716\") " Jan 23 00:11:26.184843 kubelet[2760]: I0123 00:11:26.183715 2760 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5xc6p\" (UniqueName: \"kubernetes.io/projected/3f5ce586-f31a-4287-8fbe-8804465503f3-kube-api-access-5xc6p\") pod \"3f5ce586-f31a-4287-8fbe-8804465503f3\" (UID: \"3f5ce586-f31a-4287-8fbe-8804465503f3\") " Jan 23 00:11:26.184971 kubelet[2760]: I0123 00:11:26.183731 2760 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4tdd6\" (UniqueName: \"kubernetes.io/projected/701d00e6-1a2f-4263-ab42-5f03ef7ab716-kube-api-access-4tdd6\") pod \"701d00e6-1a2f-4263-ab42-5f03ef7ab716\" (UID: \"701d00e6-1a2f-4263-ab42-5f03ef7ab716\") " Jan 23 00:11:26.184971 kubelet[2760]: I0123 00:11:26.183752 2760 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/701d00e6-1a2f-4263-ab42-5f03ef7ab716-hubble-tls\") pod \"701d00e6-1a2f-4263-ab42-5f03ef7ab716\" (UID: \"701d00e6-1a2f-4263-ab42-5f03ef7ab716\") " Jan 23 00:11:26.184971 kubelet[2760]: I0123 00:11:26.183768 2760 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/701d00e6-1a2f-4263-ab42-5f03ef7ab716-bpf-maps\") pod \"701d00e6-1a2f-4263-ab42-5f03ef7ab716\" 
(UID: \"701d00e6-1a2f-4263-ab42-5f03ef7ab716\") " Jan 23 00:11:26.184971 kubelet[2760]: I0123 00:11:26.183783 2760 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/701d00e6-1a2f-4263-ab42-5f03ef7ab716-xtables-lock\") pod \"701d00e6-1a2f-4263-ab42-5f03ef7ab716\" (UID: \"701d00e6-1a2f-4263-ab42-5f03ef7ab716\") " Jan 23 00:11:26.184971 kubelet[2760]: I0123 00:11:26.183852 2760 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/701d00e6-1a2f-4263-ab42-5f03ef7ab716-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "701d00e6-1a2f-4263-ab42-5f03ef7ab716" (UID: "701d00e6-1a2f-4263-ab42-5f03ef7ab716"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 00:11:26.184971 kubelet[2760]: I0123 00:11:26.183884 2760 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/701d00e6-1a2f-4263-ab42-5f03ef7ab716-cni-path" (OuterVolumeSpecName: "cni-path") pod "701d00e6-1a2f-4263-ab42-5f03ef7ab716" (UID: "701d00e6-1a2f-4263-ab42-5f03ef7ab716"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 00:11:26.186230 kubelet[2760]: I0123 00:11:26.185171 2760 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/701d00e6-1a2f-4263-ab42-5f03ef7ab716-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "701d00e6-1a2f-4263-ab42-5f03ef7ab716" (UID: "701d00e6-1a2f-4263-ab42-5f03ef7ab716"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 00:11:26.187201 kubelet[2760]: I0123 00:11:26.187149 2760 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/701d00e6-1a2f-4263-ab42-5f03ef7ab716-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "701d00e6-1a2f-4263-ab42-5f03ef7ab716" (UID: "701d00e6-1a2f-4263-ab42-5f03ef7ab716"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 00:11:26.187387 kubelet[2760]: I0123 00:11:26.187361 2760 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/701d00e6-1a2f-4263-ab42-5f03ef7ab716-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "701d00e6-1a2f-4263-ab42-5f03ef7ab716" (UID: "701d00e6-1a2f-4263-ab42-5f03ef7ab716"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 00:11:26.187608 kubelet[2760]: I0123 00:11:26.187550 2760 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/701d00e6-1a2f-4263-ab42-5f03ef7ab716-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "701d00e6-1a2f-4263-ab42-5f03ef7ab716" (UID: "701d00e6-1a2f-4263-ab42-5f03ef7ab716"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 00:11:26.188823 kubelet[2760]: I0123 00:11:26.188802 2760 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/701d00e6-1a2f-4263-ab42-5f03ef7ab716-hostproc" (OuterVolumeSpecName: "hostproc") pod "701d00e6-1a2f-4263-ab42-5f03ef7ab716" (UID: "701d00e6-1a2f-4263-ab42-5f03ef7ab716"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 00:11:26.189283 kubelet[2760]: I0123 00:11:26.189223 2760 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/701d00e6-1a2f-4263-ab42-5f03ef7ab716-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "701d00e6-1a2f-4263-ab42-5f03ef7ab716" (UID: "701d00e6-1a2f-4263-ab42-5f03ef7ab716"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 00:11:26.192325 kubelet[2760]: I0123 00:11:26.189560 2760 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/701d00e6-1a2f-4263-ab42-5f03ef7ab716-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "701d00e6-1a2f-4263-ab42-5f03ef7ab716" (UID: "701d00e6-1a2f-4263-ab42-5f03ef7ab716"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 00:11:26.195419 kubelet[2760]: I0123 00:11:26.194208 2760 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/701d00e6-1a2f-4263-ab42-5f03ef7ab716-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "701d00e6-1a2f-4263-ab42-5f03ef7ab716" (UID: "701d00e6-1a2f-4263-ab42-5f03ef7ab716"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 00:11:26.196758 kubelet[2760]: I0123 00:11:26.196700 2760 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/701d00e6-1a2f-4263-ab42-5f03ef7ab716-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "701d00e6-1a2f-4263-ab42-5f03ef7ab716" (UID: "701d00e6-1a2f-4263-ab42-5f03ef7ab716"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 00:11:26.197837 kubelet[2760]: I0123 00:11:26.197803 2760 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f5ce586-f31a-4287-8fbe-8804465503f3-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "3f5ce586-f31a-4287-8fbe-8804465503f3" (UID: "3f5ce586-f31a-4287-8fbe-8804465503f3"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 00:11:26.201668 kubelet[2760]: I0123 00:11:26.201299 2760 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f5ce586-f31a-4287-8fbe-8804465503f3-kube-api-access-5xc6p" (OuterVolumeSpecName: "kube-api-access-5xc6p") pod "3f5ce586-f31a-4287-8fbe-8804465503f3" (UID: "3f5ce586-f31a-4287-8fbe-8804465503f3"). InnerVolumeSpecName "kube-api-access-5xc6p". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 00:11:26.201846 kubelet[2760]: I0123 00:11:26.199041 2760 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/701d00e6-1a2f-4263-ab42-5f03ef7ab716-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "701d00e6-1a2f-4263-ab42-5f03ef7ab716" (UID: "701d00e6-1a2f-4263-ab42-5f03ef7ab716"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 00:11:26.202021 kubelet[2760]: I0123 00:11:26.201999 2760 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/701d00e6-1a2f-4263-ab42-5f03ef7ab716-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "701d00e6-1a2f-4263-ab42-5f03ef7ab716" (UID: "701d00e6-1a2f-4263-ab42-5f03ef7ab716"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 00:11:26.202395 kubelet[2760]: I0123 00:11:26.202366 2760 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/701d00e6-1a2f-4263-ab42-5f03ef7ab716-kube-api-access-4tdd6" (OuterVolumeSpecName: "kube-api-access-4tdd6") pod "701d00e6-1a2f-4263-ab42-5f03ef7ab716" (UID: "701d00e6-1a2f-4263-ab42-5f03ef7ab716"). InnerVolumeSpecName "kube-api-access-4tdd6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 00:11:26.284926 kubelet[2760]: I0123 00:11:26.284863 2760 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/701d00e6-1a2f-4263-ab42-5f03ef7ab716-hostproc\") on node \"ci-4459-2-2-n-105ad3c88f\" DevicePath \"\"" Jan 23 00:11:26.285406 kubelet[2760]: I0123 00:11:26.285228 2760 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5xc6p\" (UniqueName: \"kubernetes.io/projected/3f5ce586-f31a-4287-8fbe-8804465503f3-kube-api-access-5xc6p\") on node \"ci-4459-2-2-n-105ad3c88f\" DevicePath \"\"" Jan 23 00:11:26.285406 kubelet[2760]: I0123 00:11:26.285348 2760 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4tdd6\" (UniqueName: \"kubernetes.io/projected/701d00e6-1a2f-4263-ab42-5f03ef7ab716-kube-api-access-4tdd6\") on node \"ci-4459-2-2-n-105ad3c88f\" DevicePath \"\"" Jan 23 00:11:26.285625 kubelet[2760]: I0123 00:11:26.285380 2760 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/701d00e6-1a2f-4263-ab42-5f03ef7ab716-xtables-lock\") on node \"ci-4459-2-2-n-105ad3c88f\" DevicePath \"\"" Jan 23 00:11:26.285625 kubelet[2760]: I0123 00:11:26.285504 2760 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/701d00e6-1a2f-4263-ab42-5f03ef7ab716-hubble-tls\") on node \"ci-4459-2-2-n-105ad3c88f\" DevicePath \"\"" Jan 23 00:11:26.285625 kubelet[2760]: I0123 
00:11:26.285525 2760 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/701d00e6-1a2f-4263-ab42-5f03ef7ab716-bpf-maps\") on node \"ci-4459-2-2-n-105ad3c88f\" DevicePath \"\"" Jan 23 00:11:26.285625 kubelet[2760]: I0123 00:11:26.285545 2760 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/701d00e6-1a2f-4263-ab42-5f03ef7ab716-cni-path\") on node \"ci-4459-2-2-n-105ad3c88f\" DevicePath \"\"" Jan 23 00:11:26.285925 kubelet[2760]: I0123 00:11:26.285768 2760 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/701d00e6-1a2f-4263-ab42-5f03ef7ab716-clustermesh-secrets\") on node \"ci-4459-2-2-n-105ad3c88f\" DevicePath \"\"" Jan 23 00:11:26.285925 kubelet[2760]: I0123 00:11:26.285788 2760 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/701d00e6-1a2f-4263-ab42-5f03ef7ab716-etc-cni-netd\") on node \"ci-4459-2-2-n-105ad3c88f\" DevicePath \"\"" Jan 23 00:11:26.285925 kubelet[2760]: I0123 00:11:26.285801 2760 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3f5ce586-f31a-4287-8fbe-8804465503f3-cilium-config-path\") on node \"ci-4459-2-2-n-105ad3c88f\" DevicePath \"\"" Jan 23 00:11:26.285925 kubelet[2760]: I0123 00:11:26.285811 2760 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/701d00e6-1a2f-4263-ab42-5f03ef7ab716-cilium-run\") on node \"ci-4459-2-2-n-105ad3c88f\" DevicePath \"\"" Jan 23 00:11:26.285925 kubelet[2760]: I0123 00:11:26.285837 2760 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/701d00e6-1a2f-4263-ab42-5f03ef7ab716-cilium-config-path\") on node \"ci-4459-2-2-n-105ad3c88f\" DevicePath \"\"" Jan 23 00:11:26.285925 kubelet[2760]: I0123 
00:11:26.285851 2760 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/701d00e6-1a2f-4263-ab42-5f03ef7ab716-cilium-cgroup\") on node \"ci-4459-2-2-n-105ad3c88f\" DevicePath \"\"" Jan 23 00:11:26.285925 kubelet[2760]: I0123 00:11:26.285863 2760 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/701d00e6-1a2f-4263-ab42-5f03ef7ab716-host-proc-sys-net\") on node \"ci-4459-2-2-n-105ad3c88f\" DevicePath \"\"" Jan 23 00:11:26.285925 kubelet[2760]: I0123 00:11:26.285879 2760 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/701d00e6-1a2f-4263-ab42-5f03ef7ab716-lib-modules\") on node \"ci-4459-2-2-n-105ad3c88f\" DevicePath \"\"" Jan 23 00:11:26.286221 kubelet[2760]: I0123 00:11:26.285892 2760 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/701d00e6-1a2f-4263-ab42-5f03ef7ab716-host-proc-sys-kernel\") on node \"ci-4459-2-2-n-105ad3c88f\" DevicePath \"\"" Jan 23 00:11:26.758309 kubelet[2760]: I0123 00:11:26.758233 2760 scope.go:117] "RemoveContainer" containerID="2dcb66d5818d277444299f76b6f80912511f46bbbbacadcbf39469c55dc503dc" Jan 23 00:11:26.763119 containerd[1545]: time="2026-01-23T00:11:26.762201499Z" level=info msg="RemoveContainer for \"2dcb66d5818d277444299f76b6f80912511f46bbbbacadcbf39469c55dc503dc\"" Jan 23 00:11:26.766389 systemd[1]: Removed slice kubepods-besteffort-pod3f5ce586_f31a_4287_8fbe_8804465503f3.slice - libcontainer container kubepods-besteffort-pod3f5ce586_f31a_4287_8fbe_8804465503f3.slice. 
Jan 23 00:11:26.771179 containerd[1545]: time="2026-01-23T00:11:26.771138773Z" level=info msg="RemoveContainer for \"2dcb66d5818d277444299f76b6f80912511f46bbbbacadcbf39469c55dc503dc\" returns successfully" Jan 23 00:11:26.772011 kubelet[2760]: I0123 00:11:26.771979 2760 scope.go:117] "RemoveContainer" containerID="2dcb66d5818d277444299f76b6f80912511f46bbbbacadcbf39469c55dc503dc" Jan 23 00:11:26.772609 containerd[1545]: time="2026-01-23T00:11:26.772566972Z" level=error msg="ContainerStatus for \"2dcb66d5818d277444299f76b6f80912511f46bbbbacadcbf39469c55dc503dc\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2dcb66d5818d277444299f76b6f80912511f46bbbbacadcbf39469c55dc503dc\": not found" Jan 23 00:11:26.772987 kubelet[2760]: E0123 00:11:26.772892 2760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2dcb66d5818d277444299f76b6f80912511f46bbbbacadcbf39469c55dc503dc\": not found" containerID="2dcb66d5818d277444299f76b6f80912511f46bbbbacadcbf39469c55dc503dc" Jan 23 00:11:26.773150 kubelet[2760]: I0123 00:11:26.772925 2760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2dcb66d5818d277444299f76b6f80912511f46bbbbacadcbf39469c55dc503dc"} err="failed to get container status \"2dcb66d5818d277444299f76b6f80912511f46bbbbacadcbf39469c55dc503dc\": rpc error: code = NotFound desc = an error occurred when try to find container \"2dcb66d5818d277444299f76b6f80912511f46bbbbacadcbf39469c55dc503dc\": not found" Jan 23 00:11:26.773867 kubelet[2760]: I0123 00:11:26.773846 2760 scope.go:117] "RemoveContainer" containerID="17b35f5f89e99d752eb336f60cb9421f3bd52f7f4cc6048851b3dca359903d5a" Jan 23 00:11:26.783811 systemd[1]: Removed slice kubepods-burstable-pod701d00e6_1a2f_4263_ab42_5f03ef7ab716.slice - libcontainer container kubepods-burstable-pod701d00e6_1a2f_4263_ab42_5f03ef7ab716.slice. 
Jan 23 00:11:26.783926 systemd[1]: kubepods-burstable-pod701d00e6_1a2f_4263_ab42_5f03ef7ab716.slice: Consumed 7.083s CPU time, 127.5M memory peak, 128K read from disk, 12.9M written to disk. Jan 23 00:11:26.787337 containerd[1545]: time="2026-01-23T00:11:26.786321162Z" level=info msg="RemoveContainer for \"17b35f5f89e99d752eb336f60cb9421f3bd52f7f4cc6048851b3dca359903d5a\"" Jan 23 00:11:26.791777 containerd[1545]: time="2026-01-23T00:11:26.791744358Z" level=info msg="RemoveContainer for \"17b35f5f89e99d752eb336f60cb9421f3bd52f7f4cc6048851b3dca359903d5a\" returns successfully" Jan 23 00:11:26.793769 kubelet[2760]: I0123 00:11:26.793734 2760 scope.go:117] "RemoveContainer" containerID="59ce2086b9178abfd6975bc4edcc8b58be672525536435e6e376fcd517bc468b" Jan 23 00:11:26.796680 containerd[1545]: time="2026-01-23T00:11:26.796650075Z" level=info msg="RemoveContainer for \"59ce2086b9178abfd6975bc4edcc8b58be672525536435e6e376fcd517bc468b\"" Jan 23 00:11:26.810280 containerd[1545]: time="2026-01-23T00:11:26.810200865Z" level=info msg="RemoveContainer for \"59ce2086b9178abfd6975bc4edcc8b58be672525536435e6e376fcd517bc468b\" returns successfully" Jan 23 00:11:26.810644 kubelet[2760]: I0123 00:11:26.810624 2760 scope.go:117] "RemoveContainer" containerID="a296809a47dd54e86e7c24677c4a8fdc315c4e392689fe74f4384d135a838fd1" Jan 23 00:11:26.815548 containerd[1545]: time="2026-01-23T00:11:26.815518141Z" level=info msg="RemoveContainer for \"a296809a47dd54e86e7c24677c4a8fdc315c4e392689fe74f4384d135a838fd1\"" Jan 23 00:11:26.820458 containerd[1545]: time="2026-01-23T00:11:26.820388017Z" level=info msg="RemoveContainer for \"a296809a47dd54e86e7c24677c4a8fdc315c4e392689fe74f4384d135a838fd1\" returns successfully" Jan 23 00:11:26.821109 kubelet[2760]: I0123 00:11:26.820955 2760 scope.go:117] "RemoveContainer" containerID="509811dd54cf9b3ba243ac64e08b6b8b93e317ede7c42c353ac583c74119d733" Jan 23 00:11:26.823291 containerd[1545]: time="2026-01-23T00:11:26.823219415Z" level=info 
msg="RemoveContainer for \"509811dd54cf9b3ba243ac64e08b6b8b93e317ede7c42c353ac583c74119d733\"" Jan 23 00:11:26.828554 containerd[1545]: time="2026-01-23T00:11:26.828516811Z" level=info msg="RemoveContainer for \"509811dd54cf9b3ba243ac64e08b6b8b93e317ede7c42c353ac583c74119d733\" returns successfully" Jan 23 00:11:26.828962 kubelet[2760]: I0123 00:11:26.828903 2760 scope.go:117] "RemoveContainer" containerID="d7a4ccc5053684a4f0fdfd37ba2746f369dc6c192fa5949b44ff348a036662a5" Jan 23 00:11:26.831103 containerd[1545]: time="2026-01-23T00:11:26.831027010Z" level=info msg="RemoveContainer for \"d7a4ccc5053684a4f0fdfd37ba2746f369dc6c192fa5949b44ff348a036662a5\"" Jan 23 00:11:26.834607 containerd[1545]: time="2026-01-23T00:11:26.834555647Z" level=info msg="RemoveContainer for \"d7a4ccc5053684a4f0fdfd37ba2746f369dc6c192fa5949b44ff348a036662a5\" returns successfully" Jan 23 00:11:26.834823 kubelet[2760]: I0123 00:11:26.834742 2760 scope.go:117] "RemoveContainer" containerID="17b35f5f89e99d752eb336f60cb9421f3bd52f7f4cc6048851b3dca359903d5a" Jan 23 00:11:26.835069 containerd[1545]: time="2026-01-23T00:11:26.835012447Z" level=error msg="ContainerStatus for \"17b35f5f89e99d752eb336f60cb9421f3bd52f7f4cc6048851b3dca359903d5a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"17b35f5f89e99d752eb336f60cb9421f3bd52f7f4cc6048851b3dca359903d5a\": not found" Jan 23 00:11:26.835217 kubelet[2760]: E0123 00:11:26.835188 2760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"17b35f5f89e99d752eb336f60cb9421f3bd52f7f4cc6048851b3dca359903d5a\": not found" containerID="17b35f5f89e99d752eb336f60cb9421f3bd52f7f4cc6048851b3dca359903d5a" Jan 23 00:11:26.835304 kubelet[2760]: I0123 00:11:26.835218 2760 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"17b35f5f89e99d752eb336f60cb9421f3bd52f7f4cc6048851b3dca359903d5a"} err="failed to get container status \"17b35f5f89e99d752eb336f60cb9421f3bd52f7f4cc6048851b3dca359903d5a\": rpc error: code = NotFound desc = an error occurred when try to find container \"17b35f5f89e99d752eb336f60cb9421f3bd52f7f4cc6048851b3dca359903d5a\": not found" Jan 23 00:11:26.835304 kubelet[2760]: I0123 00:11:26.835255 2760 scope.go:117] "RemoveContainer" containerID="59ce2086b9178abfd6975bc4edcc8b58be672525536435e6e376fcd517bc468b" Jan 23 00:11:26.835526 containerd[1545]: time="2026-01-23T00:11:26.835445766Z" level=error msg="ContainerStatus for \"59ce2086b9178abfd6975bc4edcc8b58be672525536435e6e376fcd517bc468b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"59ce2086b9178abfd6975bc4edcc8b58be672525536435e6e376fcd517bc468b\": not found" Jan 23 00:11:26.835723 kubelet[2760]: E0123 00:11:26.835620 2760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"59ce2086b9178abfd6975bc4edcc8b58be672525536435e6e376fcd517bc468b\": not found" containerID="59ce2086b9178abfd6975bc4edcc8b58be672525536435e6e376fcd517bc468b" Jan 23 00:11:26.835723 kubelet[2760]: I0123 00:11:26.835649 2760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"59ce2086b9178abfd6975bc4edcc8b58be672525536435e6e376fcd517bc468b"} err="failed to get container status \"59ce2086b9178abfd6975bc4edcc8b58be672525536435e6e376fcd517bc468b\": rpc error: code = NotFound desc = an error occurred when try to find container \"59ce2086b9178abfd6975bc4edcc8b58be672525536435e6e376fcd517bc468b\": not found" Jan 23 00:11:26.835723 kubelet[2760]: I0123 00:11:26.835666 2760 scope.go:117] "RemoveContainer" containerID="a296809a47dd54e86e7c24677c4a8fdc315c4e392689fe74f4384d135a838fd1" Jan 23 00:11:26.836040 containerd[1545]: 
time="2026-01-23T00:11:26.835946406Z" level=error msg="ContainerStatus for \"a296809a47dd54e86e7c24677c4a8fdc315c4e392689fe74f4384d135a838fd1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a296809a47dd54e86e7c24677c4a8fdc315c4e392689fe74f4384d135a838fd1\": not found" Jan 23 00:11:26.836150 kubelet[2760]: E0123 00:11:26.836109 2760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a296809a47dd54e86e7c24677c4a8fdc315c4e392689fe74f4384d135a838fd1\": not found" containerID="a296809a47dd54e86e7c24677c4a8fdc315c4e392689fe74f4384d135a838fd1" Jan 23 00:11:26.836150 kubelet[2760]: I0123 00:11:26.836129 2760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a296809a47dd54e86e7c24677c4a8fdc315c4e392689fe74f4384d135a838fd1"} err="failed to get container status \"a296809a47dd54e86e7c24677c4a8fdc315c4e392689fe74f4384d135a838fd1\": rpc error: code = NotFound desc = an error occurred when try to find container \"a296809a47dd54e86e7c24677c4a8fdc315c4e392689fe74f4384d135a838fd1\": not found" Jan 23 00:11:26.836150 kubelet[2760]: I0123 00:11:26.836143 2760 scope.go:117] "RemoveContainer" containerID="509811dd54cf9b3ba243ac64e08b6b8b93e317ede7c42c353ac583c74119d733" Jan 23 00:11:26.836334 containerd[1545]: time="2026-01-23T00:11:26.836310006Z" level=error msg="ContainerStatus for \"509811dd54cf9b3ba243ac64e08b6b8b93e317ede7c42c353ac583c74119d733\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"509811dd54cf9b3ba243ac64e08b6b8b93e317ede7c42c353ac583c74119d733\": not found" Jan 23 00:11:26.836610 kubelet[2760]: E0123 00:11:26.836474 2760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"509811dd54cf9b3ba243ac64e08b6b8b93e317ede7c42c353ac583c74119d733\": not 
found" containerID="509811dd54cf9b3ba243ac64e08b6b8b93e317ede7c42c353ac583c74119d733" Jan 23 00:11:26.836610 kubelet[2760]: I0123 00:11:26.836502 2760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"509811dd54cf9b3ba243ac64e08b6b8b93e317ede7c42c353ac583c74119d733"} err="failed to get container status \"509811dd54cf9b3ba243ac64e08b6b8b93e317ede7c42c353ac583c74119d733\": rpc error: code = NotFound desc = an error occurred when try to find container \"509811dd54cf9b3ba243ac64e08b6b8b93e317ede7c42c353ac583c74119d733\": not found" Jan 23 00:11:26.836610 kubelet[2760]: I0123 00:11:26.836524 2760 scope.go:117] "RemoveContainer" containerID="d7a4ccc5053684a4f0fdfd37ba2746f369dc6c192fa5949b44ff348a036662a5" Jan 23 00:11:26.836748 containerd[1545]: time="2026-01-23T00:11:26.836690166Z" level=error msg="ContainerStatus for \"d7a4ccc5053684a4f0fdfd37ba2746f369dc6c192fa5949b44ff348a036662a5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d7a4ccc5053684a4f0fdfd37ba2746f369dc6c192fa5949b44ff348a036662a5\": not found" Jan 23 00:11:26.836921 kubelet[2760]: E0123 00:11:26.836871 2760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d7a4ccc5053684a4f0fdfd37ba2746f369dc6c192fa5949b44ff348a036662a5\": not found" containerID="d7a4ccc5053684a4f0fdfd37ba2746f369dc6c192fa5949b44ff348a036662a5" Jan 23 00:11:26.836921 kubelet[2760]: I0123 00:11:26.836901 2760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d7a4ccc5053684a4f0fdfd37ba2746f369dc6c192fa5949b44ff348a036662a5"} err="failed to get container status \"d7a4ccc5053684a4f0fdfd37ba2746f369dc6c192fa5949b44ff348a036662a5\": rpc error: code = NotFound desc = an error occurred when try to find container \"d7a4ccc5053684a4f0fdfd37ba2746f369dc6c192fa5949b44ff348a036662a5\": not found" Jan 23 
00:11:26.893437 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1bbf383ef46e230cb52a351acd38c2c8a4af2a0d829d4bbf32a77c6900a98606-shm.mount: Deactivated successfully. Jan 23 00:11:26.893574 systemd[1]: var-lib-kubelet-pods-701d00e6\x2d1a2f\x2d4263\x2dab42\x2d5f03ef7ab716-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 23 00:11:26.893654 systemd[1]: var-lib-kubelet-pods-701d00e6\x2d1a2f\x2d4263\x2dab42\x2d5f03ef7ab716-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 23 00:11:26.893719 systemd[1]: var-lib-kubelet-pods-3f5ce586\x2df31a\x2d4287\x2d8fbe\x2d8804465503f3-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d5xc6p.mount: Deactivated successfully. Jan 23 00:11:26.893781 systemd[1]: var-lib-kubelet-pods-701d00e6\x2d1a2f\x2d4263\x2dab42\x2d5f03ef7ab716-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4tdd6.mount: Deactivated successfully. Jan 23 00:11:27.842942 sshd[4302]: Connection closed by 68.220.241.50 port 60566 Jan 23 00:11:27.843997 sshd-session[4299]: pam_unix(sshd:session): session closed for user core Jan 23 00:11:27.849819 systemd[1]: sshd@21-188.245.94.123:22-68.220.241.50:60566.service: Deactivated successfully. Jan 23 00:11:27.852498 systemd[1]: session-22.scope: Deactivated successfully. Jan 23 00:11:27.852707 systemd[1]: session-22.scope: Consumed 1.395s CPU time, 23.6M memory peak. Jan 23 00:11:27.854151 systemd-logind[1529]: Session 22 logged out. Waiting for processes to exit. Jan 23 00:11:27.856430 systemd-logind[1529]: Removed session 22. Jan 23 00:11:27.955489 systemd[1]: Started sshd@22-188.245.94.123:22-68.220.241.50:60580.service - OpenSSH per-connection server daemon (68.220.241.50:60580). 
Jan 23 00:11:28.178947 kubelet[2760]: I0123 00:11:28.178901 2760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f5ce586-f31a-4287-8fbe-8804465503f3" path="/var/lib/kubelet/pods/3f5ce586-f31a-4287-8fbe-8804465503f3/volumes" Jan 23 00:11:28.179551 kubelet[2760]: I0123 00:11:28.179528 2760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="701d00e6-1a2f-4263-ab42-5f03ef7ab716" path="/var/lib/kubelet/pods/701d00e6-1a2f-4263-ab42-5f03ef7ab716/volumes" Jan 23 00:11:28.597193 sshd[4447]: Accepted publickey for core from 68.220.241.50 port 60580 ssh2: RSA SHA256:wScRSXm5JHKrAeSxAplDhSGBmu9+62e7CgH0oSNisYE Jan 23 00:11:28.598938 sshd-session[4447]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 00:11:28.607332 systemd-logind[1529]: New session 23 of user core. Jan 23 00:11:28.611265 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 23 00:11:29.978812 kubelet[2760]: I0123 00:11:29.978721 2760 memory_manager.go:355] "RemoveStaleState removing state" podUID="3f5ce586-f31a-4287-8fbe-8804465503f3" containerName="cilium-operator" Jan 23 00:11:29.978812 kubelet[2760]: I0123 00:11:29.978751 2760 memory_manager.go:355] "RemoveStaleState removing state" podUID="701d00e6-1a2f-4263-ab42-5f03ef7ab716" containerName="cilium-agent" Jan 23 00:11:29.987622 systemd[1]: Created slice kubepods-burstable-pod1314f41d_ec2e_4496_a5c0_340147c1614b.slice - libcontainer container kubepods-burstable-pod1314f41d_ec2e_4496_a5c0_340147c1614b.slice. Jan 23 00:11:30.055264 sshd[4450]: Connection closed by 68.220.241.50 port 60580 Jan 23 00:11:30.055882 sshd-session[4447]: pam_unix(sshd:session): session closed for user core Jan 23 00:11:30.061890 systemd[1]: sshd@22-188.245.94.123:22-68.220.241.50:60580.service: Deactivated successfully. Jan 23 00:11:30.066764 systemd[1]: session-23.scope: Deactivated successfully. Jan 23 00:11:30.068043 systemd-logind[1529]: Session 23 logged out. 
Waiting for processes to exit. Jan 23 00:11:30.070418 systemd-logind[1529]: Removed session 23. Jan 23 00:11:30.112415 kubelet[2760]: I0123 00:11:30.112342 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1314f41d-ec2e-4496-a5c0-340147c1614b-bpf-maps\") pod \"cilium-qg5ch\" (UID: \"1314f41d-ec2e-4496-a5c0-340147c1614b\") " pod="kube-system/cilium-qg5ch" Jan 23 00:11:30.112737 kubelet[2760]: I0123 00:11:30.112391 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1314f41d-ec2e-4496-a5c0-340147c1614b-hostproc\") pod \"cilium-qg5ch\" (UID: \"1314f41d-ec2e-4496-a5c0-340147c1614b\") " pod="kube-system/cilium-qg5ch" Jan 23 00:11:30.112737 kubelet[2760]: I0123 00:11:30.112623 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1314f41d-ec2e-4496-a5c0-340147c1614b-cilium-cgroup\") pod \"cilium-qg5ch\" (UID: \"1314f41d-ec2e-4496-a5c0-340147c1614b\") " pod="kube-system/cilium-qg5ch" Jan 23 00:11:30.112737 kubelet[2760]: I0123 00:11:30.112665 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1314f41d-ec2e-4496-a5c0-340147c1614b-clustermesh-secrets\") pod \"cilium-qg5ch\" (UID: \"1314f41d-ec2e-4496-a5c0-340147c1614b\") " pod="kube-system/cilium-qg5ch" Jan 23 00:11:30.112737 kubelet[2760]: I0123 00:11:30.112690 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1314f41d-ec2e-4496-a5c0-340147c1614b-cilium-config-path\") pod \"cilium-qg5ch\" (UID: \"1314f41d-ec2e-4496-a5c0-340147c1614b\") " pod="kube-system/cilium-qg5ch" Jan 23 00:11:30.112737 kubelet[2760]: 
I0123 00:11:30.112714 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wknfg\" (UniqueName: \"kubernetes.io/projected/1314f41d-ec2e-4496-a5c0-340147c1614b-kube-api-access-wknfg\") pod \"cilium-qg5ch\" (UID: \"1314f41d-ec2e-4496-a5c0-340147c1614b\") " pod="kube-system/cilium-qg5ch" Jan 23 00:11:30.113124 kubelet[2760]: I0123 00:11:30.112933 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1314f41d-ec2e-4496-a5c0-340147c1614b-lib-modules\") pod \"cilium-qg5ch\" (UID: \"1314f41d-ec2e-4496-a5c0-340147c1614b\") " pod="kube-system/cilium-qg5ch" Jan 23 00:11:30.113124 kubelet[2760]: I0123 00:11:30.112979 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1314f41d-ec2e-4496-a5c0-340147c1614b-hubble-tls\") pod \"cilium-qg5ch\" (UID: \"1314f41d-ec2e-4496-a5c0-340147c1614b\") " pod="kube-system/cilium-qg5ch" Jan 23 00:11:30.113124 kubelet[2760]: I0123 00:11:30.113008 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1314f41d-ec2e-4496-a5c0-340147c1614b-etc-cni-netd\") pod \"cilium-qg5ch\" (UID: \"1314f41d-ec2e-4496-a5c0-340147c1614b\") " pod="kube-system/cilium-qg5ch" Jan 23 00:11:30.113124 kubelet[2760]: I0123 00:11:30.113031 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/1314f41d-ec2e-4496-a5c0-340147c1614b-cilium-ipsec-secrets\") pod \"cilium-qg5ch\" (UID: \"1314f41d-ec2e-4496-a5c0-340147c1614b\") " pod="kube-system/cilium-qg5ch" Jan 23 00:11:30.113553 kubelet[2760]: I0123 00:11:30.113340 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1314f41d-ec2e-4496-a5c0-340147c1614b-host-proc-sys-net\") pod \"cilium-qg5ch\" (UID: \"1314f41d-ec2e-4496-a5c0-340147c1614b\") " pod="kube-system/cilium-qg5ch" Jan 23 00:11:30.113553 kubelet[2760]: I0123 00:11:30.113400 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1314f41d-ec2e-4496-a5c0-340147c1614b-cilium-run\") pod \"cilium-qg5ch\" (UID: \"1314f41d-ec2e-4496-a5c0-340147c1614b\") " pod="kube-system/cilium-qg5ch" Jan 23 00:11:30.113553 kubelet[2760]: I0123 00:11:30.113458 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1314f41d-ec2e-4496-a5c0-340147c1614b-xtables-lock\") pod \"cilium-qg5ch\" (UID: \"1314f41d-ec2e-4496-a5c0-340147c1614b\") " pod="kube-system/cilium-qg5ch" Jan 23 00:11:30.113553 kubelet[2760]: I0123 00:11:30.113501 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1314f41d-ec2e-4496-a5c0-340147c1614b-cni-path\") pod \"cilium-qg5ch\" (UID: \"1314f41d-ec2e-4496-a5c0-340147c1614b\") " pod="kube-system/cilium-qg5ch" Jan 23 00:11:30.113553 kubelet[2760]: I0123 00:11:30.113522 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1314f41d-ec2e-4496-a5c0-340147c1614b-host-proc-sys-kernel\") pod \"cilium-qg5ch\" (UID: \"1314f41d-ec2e-4496-a5c0-340147c1614b\") " pod="kube-system/cilium-qg5ch" Jan 23 00:11:30.172381 systemd[1]: Started sshd@23-188.245.94.123:22-68.220.241.50:60584.service - OpenSSH per-connection server daemon (68.220.241.50:60584). 
Jan 23 00:11:30.290199 kubelet[2760]: E0123 00:11:30.289717 2760 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 23 00:11:30.292941 containerd[1545]: time="2026-01-23T00:11:30.292898997Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qg5ch,Uid:1314f41d-ec2e-4496-a5c0-340147c1614b,Namespace:kube-system,Attempt:0,}" Jan 23 00:11:30.307947 containerd[1545]: time="2026-01-23T00:11:30.307896913Z" level=info msg="connecting to shim 471b8e1346a3c5afb00ee082000a7e34946bbba74579043416078f85bb5150fb" address="unix:///run/containerd/s/ed158fa4f9afcacf95b3b25274d0faeb8012e6c7256f5d8ec234351e53d13c18" namespace=k8s.io protocol=ttrpc version=3 Jan 23 00:11:30.331284 systemd[1]: Started cri-containerd-471b8e1346a3c5afb00ee082000a7e34946bbba74579043416078f85bb5150fb.scope - libcontainer container 471b8e1346a3c5afb00ee082000a7e34946bbba74579043416078f85bb5150fb. 
Jan 23 00:11:30.363664 containerd[1545]: time="2026-01-23T00:11:30.363589499Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qg5ch,Uid:1314f41d-ec2e-4496-a5c0-340147c1614b,Namespace:kube-system,Attempt:0,} returns sandbox id \"471b8e1346a3c5afb00ee082000a7e34946bbba74579043416078f85bb5150fb\"" Jan 23 00:11:30.366776 containerd[1545]: time="2026-01-23T00:11:30.366731378Z" level=info msg="CreateContainer within sandbox \"471b8e1346a3c5afb00ee082000a7e34946bbba74579043416078f85bb5150fb\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 23 00:11:30.373766 containerd[1545]: time="2026-01-23T00:11:30.373724536Z" level=info msg="Container 4eeb777c3823f51bf94d261bd27e665187e007eb36f98eb5d0ace91c8052fe49: CDI devices from CRI Config.CDIDevices: []" Jan 23 00:11:30.379965 containerd[1545]: time="2026-01-23T00:11:30.379789495Z" level=info msg="CreateContainer within sandbox \"471b8e1346a3c5afb00ee082000a7e34946bbba74579043416078f85bb5150fb\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"4eeb777c3823f51bf94d261bd27e665187e007eb36f98eb5d0ace91c8052fe49\"" Jan 23 00:11:30.380648 containerd[1545]: time="2026-01-23T00:11:30.380545895Z" level=info msg="StartContainer for \"4eeb777c3823f51bf94d261bd27e665187e007eb36f98eb5d0ace91c8052fe49\"" Jan 23 00:11:30.381584 containerd[1545]: time="2026-01-23T00:11:30.381474814Z" level=info msg="connecting to shim 4eeb777c3823f51bf94d261bd27e665187e007eb36f98eb5d0ace91c8052fe49" address="unix:///run/containerd/s/ed158fa4f9afcacf95b3b25274d0faeb8012e6c7256f5d8ec234351e53d13c18" protocol=ttrpc version=3 Jan 23 00:11:30.404283 systemd[1]: Started cri-containerd-4eeb777c3823f51bf94d261bd27e665187e007eb36f98eb5d0ace91c8052fe49.scope - libcontainer container 4eeb777c3823f51bf94d261bd27e665187e007eb36f98eb5d0ace91c8052fe49. 
Jan 23 00:11:30.440305 containerd[1545]: time="2026-01-23T00:11:30.440260359Z" level=info msg="StartContainer for \"4eeb777c3823f51bf94d261bd27e665187e007eb36f98eb5d0ace91c8052fe49\" returns successfully" Jan 23 00:11:30.451709 systemd[1]: cri-containerd-4eeb777c3823f51bf94d261bd27e665187e007eb36f98eb5d0ace91c8052fe49.scope: Deactivated successfully. Jan 23 00:11:30.457816 containerd[1545]: time="2026-01-23T00:11:30.455134796Z" level=info msg="received container exit event container_id:\"4eeb777c3823f51bf94d261bd27e665187e007eb36f98eb5d0ace91c8052fe49\" id:\"4eeb777c3823f51bf94d261bd27e665187e007eb36f98eb5d0ace91c8052fe49\" pid:4525 exited_at:{seconds:1769127090 nanos:454631436}" Jan 23 00:11:30.795921 containerd[1545]: time="2026-01-23T00:11:30.795748389Z" level=info msg="CreateContainer within sandbox \"471b8e1346a3c5afb00ee082000a7e34946bbba74579043416078f85bb5150fb\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 23 00:11:30.804526 containerd[1545]: time="2026-01-23T00:11:30.804456107Z" level=info msg="Container ba3c2783876ed67eb78cdeb136f28bda8b43de97975077e7ae6cbcdfcafbf327: CDI devices from CRI Config.CDIDevices: []" Jan 23 00:11:30.807119 sshd[4461]: Accepted publickey for core from 68.220.241.50 port 60584 ssh2: RSA SHA256:wScRSXm5JHKrAeSxAplDhSGBmu9+62e7CgH0oSNisYE Jan 23 00:11:30.808868 sshd-session[4461]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 00:11:30.811619 containerd[1545]: time="2026-01-23T00:11:30.811449465Z" level=info msg="CreateContainer within sandbox \"471b8e1346a3c5afb00ee082000a7e34946bbba74579043416078f85bb5150fb\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"ba3c2783876ed67eb78cdeb136f28bda8b43de97975077e7ae6cbcdfcafbf327\"" Jan 23 00:11:30.812718 containerd[1545]: time="2026-01-23T00:11:30.812673185Z" level=info msg="StartContainer for \"ba3c2783876ed67eb78cdeb136f28bda8b43de97975077e7ae6cbcdfcafbf327\"" Jan 23 
00:11:30.817358 containerd[1545]: time="2026-01-23T00:11:30.817096824Z" level=info msg="connecting to shim ba3c2783876ed67eb78cdeb136f28bda8b43de97975077e7ae6cbcdfcafbf327" address="unix:///run/containerd/s/ed158fa4f9afcacf95b3b25274d0faeb8012e6c7256f5d8ec234351e53d13c18" protocol=ttrpc version=3 Jan 23 00:11:30.820088 systemd-logind[1529]: New session 24 of user core. Jan 23 00:11:30.826298 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 23 00:11:30.843449 systemd[1]: Started cri-containerd-ba3c2783876ed67eb78cdeb136f28bda8b43de97975077e7ae6cbcdfcafbf327.scope - libcontainer container ba3c2783876ed67eb78cdeb136f28bda8b43de97975077e7ae6cbcdfcafbf327. Jan 23 00:11:30.884104 containerd[1545]: time="2026-01-23T00:11:30.882395327Z" level=info msg="StartContainer for \"ba3c2783876ed67eb78cdeb136f28bda8b43de97975077e7ae6cbcdfcafbf327\" returns successfully" Jan 23 00:11:30.899800 systemd[1]: cri-containerd-ba3c2783876ed67eb78cdeb136f28bda8b43de97975077e7ae6cbcdfcafbf327.scope: Deactivated successfully. Jan 23 00:11:30.905001 containerd[1545]: time="2026-01-23T00:11:30.904865321Z" level=info msg="received container exit event container_id:\"ba3c2783876ed67eb78cdeb136f28bda8b43de97975077e7ae6cbcdfcafbf327\" id:\"ba3c2783876ed67eb78cdeb136f28bda8b43de97975077e7ae6cbcdfcafbf327\" pid:4574 exited_at:{seconds:1769127090 nanos:904498082}" Jan 23 00:11:31.244131 sshd[4572]: Connection closed by 68.220.241.50 port 60584 Jan 23 00:11:31.244427 sshd-session[4461]: pam_unix(sshd:session): session closed for user core Jan 23 00:11:31.250509 systemd-logind[1529]: Session 24 logged out. Waiting for processes to exit. Jan 23 00:11:31.250859 systemd[1]: sshd@23-188.245.94.123:22-68.220.241.50:60584.service: Deactivated successfully. Jan 23 00:11:31.253673 systemd[1]: session-24.scope: Deactivated successfully. Jan 23 00:11:31.255733 systemd-logind[1529]: Removed session 24. 
Jan 23 00:11:31.356573 systemd[1]: Started sshd@24-188.245.94.123:22-68.220.241.50:60588.service - OpenSSH per-connection server daemon (68.220.241.50:60588). Jan 23 00:11:31.801424 containerd[1545]: time="2026-01-23T00:11:31.801375864Z" level=info msg="CreateContainer within sandbox \"471b8e1346a3c5afb00ee082000a7e34946bbba74579043416078f85bb5150fb\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 23 00:11:31.821181 containerd[1545]: time="2026-01-23T00:11:31.820735701Z" level=info msg="Container 8e8c044eb2d0180b777b2d9cd32864daa4c1e2d9f5ac851b009eafe2406b1f73: CDI devices from CRI Config.CDIDevices: []" Jan 23 00:11:31.833051 containerd[1545]: time="2026-01-23T00:11:31.832977220Z" level=info msg="CreateContainer within sandbox \"471b8e1346a3c5afb00ee082000a7e34946bbba74579043416078f85bb5150fb\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"8e8c044eb2d0180b777b2d9cd32864daa4c1e2d9f5ac851b009eafe2406b1f73\"" Jan 23 00:11:31.834789 containerd[1545]: time="2026-01-23T00:11:31.834649699Z" level=info msg="StartContainer for \"8e8c044eb2d0180b777b2d9cd32864daa4c1e2d9f5ac851b009eafe2406b1f73\"" Jan 23 00:11:31.838456 containerd[1545]: time="2026-01-23T00:11:31.838412259Z" level=info msg="connecting to shim 8e8c044eb2d0180b777b2d9cd32864daa4c1e2d9f5ac851b009eafe2406b1f73" address="unix:///run/containerd/s/ed158fa4f9afcacf95b3b25274d0faeb8012e6c7256f5d8ec234351e53d13c18" protocol=ttrpc version=3 Jan 23 00:11:31.860372 systemd[1]: Started cri-containerd-8e8c044eb2d0180b777b2d9cd32864daa4c1e2d9f5ac851b009eafe2406b1f73.scope - libcontainer container 8e8c044eb2d0180b777b2d9cd32864daa4c1e2d9f5ac851b009eafe2406b1f73. 
Jan 23 00:11:31.947049 containerd[1545]: time="2026-01-23T00:11:31.947001483Z" level=info msg="StartContainer for \"8e8c044eb2d0180b777b2d9cd32864daa4c1e2d9f5ac851b009eafe2406b1f73\" returns successfully" Jan 23 00:11:31.948160 systemd[1]: cri-containerd-8e8c044eb2d0180b777b2d9cd32864daa4c1e2d9f5ac851b009eafe2406b1f73.scope: Deactivated successfully. Jan 23 00:11:31.950575 containerd[1545]: time="2026-01-23T00:11:31.950524283Z" level=info msg="received container exit event container_id:\"8e8c044eb2d0180b777b2d9cd32864daa4c1e2d9f5ac851b009eafe2406b1f73\" id:\"8e8c044eb2d0180b777b2d9cd32864daa4c1e2d9f5ac851b009eafe2406b1f73\" pid:4629 exited_at:{seconds:1769127091 nanos:950160323}" Jan 23 00:11:31.973254 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8e8c044eb2d0180b777b2d9cd32864daa4c1e2d9f5ac851b009eafe2406b1f73-rootfs.mount: Deactivated successfully. Jan 23 00:11:31.998901 sshd[4613]: Accepted publickey for core from 68.220.241.50 port 60588 ssh2: RSA SHA256:wScRSXm5JHKrAeSxAplDhSGBmu9+62e7CgH0oSNisYE Jan 23 00:11:32.000990 sshd-session[4613]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 00:11:32.006635 systemd-logind[1529]: New session 25 of user core. Jan 23 00:11:32.012403 systemd[1]: Started session-25.scope - Session 25 of User core. 
Jan 23 00:11:32.811876 containerd[1545]: time="2026-01-23T00:11:32.811817691Z" level=info msg="CreateContainer within sandbox \"471b8e1346a3c5afb00ee082000a7e34946bbba74579043416078f85bb5150fb\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 23 00:11:32.825434 containerd[1545]: time="2026-01-23T00:11:32.825373131Z" level=info msg="Container 7bd3326c94b2fc52a0eb1946e7f83442544ff3ccc3081af89577b9a5a6dd7caa: CDI devices from CRI Config.CDIDevices: []" Jan 23 00:11:32.835943 containerd[1545]: time="2026-01-23T00:11:32.835903290Z" level=info msg="CreateContainer within sandbox \"471b8e1346a3c5afb00ee082000a7e34946bbba74579043416078f85bb5150fb\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"7bd3326c94b2fc52a0eb1946e7f83442544ff3ccc3081af89577b9a5a6dd7caa\"" Jan 23 00:11:32.836698 containerd[1545]: time="2026-01-23T00:11:32.836674970Z" level=info msg="StartContainer for \"7bd3326c94b2fc52a0eb1946e7f83442544ff3ccc3081af89577b9a5a6dd7caa\"" Jan 23 00:11:32.837783 containerd[1545]: time="2026-01-23T00:11:32.837749290Z" level=info msg="connecting to shim 7bd3326c94b2fc52a0eb1946e7f83442544ff3ccc3081af89577b9a5a6dd7caa" address="unix:///run/containerd/s/ed158fa4f9afcacf95b3b25274d0faeb8012e6c7256f5d8ec234351e53d13c18" protocol=ttrpc version=3 Jan 23 00:11:32.865530 systemd[1]: Started cri-containerd-7bd3326c94b2fc52a0eb1946e7f83442544ff3ccc3081af89577b9a5a6dd7caa.scope - libcontainer container 7bd3326c94b2fc52a0eb1946e7f83442544ff3ccc3081af89577b9a5a6dd7caa. Jan 23 00:11:32.908674 systemd[1]: cri-containerd-7bd3326c94b2fc52a0eb1946e7f83442544ff3ccc3081af89577b9a5a6dd7caa.scope: Deactivated successfully. 
Jan 23 00:11:32.910809 containerd[1545]: time="2026-01-23T00:11:32.910631448Z" level=info msg="received container exit event container_id:\"7bd3326c94b2fc52a0eb1946e7f83442544ff3ccc3081af89577b9a5a6dd7caa\" id:\"7bd3326c94b2fc52a0eb1946e7f83442544ff3ccc3081af89577b9a5a6dd7caa\" pid:4674 exited_at:{seconds:1769127092 nanos:908814808}" Jan 23 00:11:32.921606 containerd[1545]: time="2026-01-23T00:11:32.921430448Z" level=info msg="StartContainer for \"7bd3326c94b2fc52a0eb1946e7f83442544ff3ccc3081af89577b9a5a6dd7caa\" returns successfully" Jan 23 00:11:32.947278 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7bd3326c94b2fc52a0eb1946e7f83442544ff3ccc3081af89577b9a5a6dd7caa-rootfs.mount: Deactivated successfully. Jan 23 00:11:33.821522 containerd[1545]: time="2026-01-23T00:11:33.820562790Z" level=info msg="CreateContainer within sandbox \"471b8e1346a3c5afb00ee082000a7e34946bbba74579043416078f85bb5150fb\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 23 00:11:33.834752 containerd[1545]: time="2026-01-23T00:11:33.833424391Z" level=info msg="Container a87f7f075323ce80833d7c8b342f9f0059f172d2dca399edb3b4637eec5a5e7f: CDI devices from CRI Config.CDIDevices: []" Jan 23 00:11:33.840821 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2968970878.mount: Deactivated successfully. 
Jan 23 00:11:33.845165 containerd[1545]: time="2026-01-23T00:11:33.845129872Z" level=info msg="CreateContainer within sandbox \"471b8e1346a3c5afb00ee082000a7e34946bbba74579043416078f85bb5150fb\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"a87f7f075323ce80833d7c8b342f9f0059f172d2dca399edb3b4637eec5a5e7f\"" Jan 23 00:11:33.845970 containerd[1545]: time="2026-01-23T00:11:33.845647312Z" level=info msg="StartContainer for \"a87f7f075323ce80833d7c8b342f9f0059f172d2dca399edb3b4637eec5a5e7f\"" Jan 23 00:11:33.848426 containerd[1545]: time="2026-01-23T00:11:33.848289152Z" level=info msg="connecting to shim a87f7f075323ce80833d7c8b342f9f0059f172d2dca399edb3b4637eec5a5e7f" address="unix:///run/containerd/s/ed158fa4f9afcacf95b3b25274d0faeb8012e6c7256f5d8ec234351e53d13c18" protocol=ttrpc version=3 Jan 23 00:11:33.872302 systemd[1]: Started cri-containerd-a87f7f075323ce80833d7c8b342f9f0059f172d2dca399edb3b4637eec5a5e7f.scope - libcontainer container a87f7f075323ce80833d7c8b342f9f0059f172d2dca399edb3b4637eec5a5e7f. 
Jan 23 00:11:33.922844 containerd[1545]: time="2026-01-23T00:11:33.922789238Z" level=info msg="StartContainer for \"a87f7f075323ce80833d7c8b342f9f0059f172d2dca399edb3b4637eec5a5e7f\" returns successfully" Jan 23 00:11:34.055806 kubelet[2760]: I0123 00:11:34.054118 2760 setters.go:602] "Node became not ready" node="ci-4459-2-2-n-105ad3c88f" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T00:11:34Z","lastTransitionTime":"2026-01-23T00:11:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jan 23 00:11:34.247304 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Jan 23 00:11:34.857722 kubelet[2760]: I0123 00:11:34.857653 2760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-qg5ch" podStartSLOduration=5.857626324 podStartE2EDuration="5.857626324s" podCreationTimestamp="2026-01-23 00:11:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 00:11:34.856351084 +0000 UTC m=+204.816721492" watchObservedRunningTime="2026-01-23 00:11:34.857626324 +0000 UTC m=+204.817996732" Jan 23 00:11:37.183699 systemd-networkd[1418]: lxc_health: Link UP Jan 23 00:11:37.192486 systemd-networkd[1418]: lxc_health: Gained carrier Jan 23 00:11:38.785443 systemd-networkd[1418]: lxc_health: Gained IPv6LL Jan 23 00:11:43.259413 sshd[4655]: Connection closed by 68.220.241.50 port 60588 Jan 23 00:11:43.259289 sshd-session[4613]: pam_unix(sshd:session): session closed for user core Jan 23 00:11:43.266910 systemd[1]: sshd@24-188.245.94.123:22-68.220.241.50:60588.service: Deactivated successfully. Jan 23 00:11:43.271422 systemd[1]: session-25.scope: Deactivated successfully. Jan 23 00:11:43.275228 systemd-logind[1529]: Session 25 logged out. Waiting for processes to exit. 
Jan 23 00:11:43.277277 systemd-logind[1529]: Removed session 25. Jan 23 00:11:58.247296 systemd[1]: cri-containerd-67884217fdce7d4d57c0179593ee273ddb7aac6a03384b4b882a72b5199b66d8.scope: Deactivated successfully. Jan 23 00:11:58.247653 systemd[1]: cri-containerd-67884217fdce7d4d57c0179593ee273ddb7aac6a03384b4b882a72b5199b66d8.scope: Consumed 5.984s CPU time, 54.2M memory peak. Jan 23 00:11:58.252900 containerd[1545]: time="2026-01-23T00:11:58.252755453Z" level=info msg="received container exit event container_id:\"67884217fdce7d4d57c0179593ee273ddb7aac6a03384b4b882a72b5199b66d8\" id:\"67884217fdce7d4d57c0179593ee273ddb7aac6a03384b4b882a72b5199b66d8\" pid:2591 exit_status:1 exited_at:{seconds:1769127118 nanos:251037609}" Jan 23 00:11:58.282156 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-67884217fdce7d4d57c0179593ee273ddb7aac6a03384b4b882a72b5199b66d8-rootfs.mount: Deactivated successfully. Jan 23 00:11:58.310116 kubelet[2760]: E0123 00:11:58.308380 2760 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:55862->10.0.0.2:2379: read: connection timed out" Jan 23 00:11:58.316284 systemd[1]: cri-containerd-d4f115c3159485a2fc584bb80ff1b8bde970c0e0edc33c90aa3730b4e51d7ec7.scope: Deactivated successfully. Jan 23 00:11:58.317155 systemd[1]: cri-containerd-d4f115c3159485a2fc584bb80ff1b8bde970c0e0edc33c90aa3730b4e51d7ec7.scope: Consumed 4.217s CPU time, 20.8M memory peak. 
Jan 23 00:11:58.319783 containerd[1545]: time="2026-01-23T00:11:58.319424927Z" level=info msg="received container exit event container_id:\"d4f115c3159485a2fc584bb80ff1b8bde970c0e0edc33c90aa3730b4e51d7ec7\" id:\"d4f115c3159485a2fc584bb80ff1b8bde970c0e0edc33c90aa3730b4e51d7ec7\" pid:2628 exit_status:1 exited_at:{seconds:1769127118 nanos:318863446}" Jan 23 00:11:58.341303 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d4f115c3159485a2fc584bb80ff1b8bde970c0e0edc33c90aa3730b4e51d7ec7-rootfs.mount: Deactivated successfully. Jan 23 00:11:58.891626 kubelet[2760]: I0123 00:11:58.891326 2760 scope.go:117] "RemoveContainer" containerID="d4f115c3159485a2fc584bb80ff1b8bde970c0e0edc33c90aa3730b4e51d7ec7" Jan 23 00:11:58.894331 containerd[1545]: time="2026-01-23T00:11:58.894250979Z" level=info msg="CreateContainer within sandbox \"a6dbc0e162577cfdc40b228298307ea6dee6c54935369e21ec9b443842267fc5\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Jan 23 00:11:58.895536 kubelet[2760]: I0123 00:11:58.895331 2760 scope.go:117] "RemoveContainer" containerID="67884217fdce7d4d57c0179593ee273ddb7aac6a03384b4b882a72b5199b66d8" Jan 23 00:11:58.898046 containerd[1545]: time="2026-01-23T00:11:58.897967548Z" level=info msg="CreateContainer within sandbox \"e3bcd1e18fdd4dfa0428261502716a933b7d63e868273f74837a6ce05655f183\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Jan 23 00:11:58.907647 containerd[1545]: time="2026-01-23T00:11:58.907061089Z" level=info msg="Container b0cec263161593209e830b8de025bc4fb4bcd967f3c623e58fa782f0a5797be9: CDI devices from CRI Config.CDIDevices: []" Jan 23 00:11:58.913124 containerd[1545]: time="2026-01-23T00:11:58.912813022Z" level=info msg="Container 1aaa5fcebe5ec20d70a3817721a5b1fa681ab663bf732a7a2df40ff67b13de0c: CDI devices from CRI Config.CDIDevices: []" Jan 23 00:11:58.918399 containerd[1545]: time="2026-01-23T00:11:58.918359435Z" level=info msg="CreateContainer within sandbox 
\"a6dbc0e162577cfdc40b228298307ea6dee6c54935369e21ec9b443842267fc5\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"b0cec263161593209e830b8de025bc4fb4bcd967f3c623e58fa782f0a5797be9\"" Jan 23 00:11:58.919567 containerd[1545]: time="2026-01-23T00:11:58.919544798Z" level=info msg="StartContainer for \"b0cec263161593209e830b8de025bc4fb4bcd967f3c623e58fa782f0a5797be9\"" Jan 23 00:11:58.921155 containerd[1545]: time="2026-01-23T00:11:58.921125202Z" level=info msg="connecting to shim b0cec263161593209e830b8de025bc4fb4bcd967f3c623e58fa782f0a5797be9" address="unix:///run/containerd/s/edb0ba4b75b7e1cf250330144df889027f6de3776590a95832e813adad21683f" protocol=ttrpc version=3 Jan 23 00:11:58.926504 containerd[1545]: time="2026-01-23T00:11:58.926141013Z" level=info msg="CreateContainer within sandbox \"e3bcd1e18fdd4dfa0428261502716a933b7d63e868273f74837a6ce05655f183\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"1aaa5fcebe5ec20d70a3817721a5b1fa681ab663bf732a7a2df40ff67b13de0c\"" Jan 23 00:11:58.927017 containerd[1545]: time="2026-01-23T00:11:58.926989055Z" level=info msg="StartContainer for \"1aaa5fcebe5ec20d70a3817721a5b1fa681ab663bf732a7a2df40ff67b13de0c\"" Jan 23 00:11:58.931144 containerd[1545]: time="2026-01-23T00:11:58.931111425Z" level=info msg="connecting to shim 1aaa5fcebe5ec20d70a3817721a5b1fa681ab663bf732a7a2df40ff67b13de0c" address="unix:///run/containerd/s/05427b624b87960483a7b84c1d4d200af64466c890e5e2996448b6fae24f7d74" protocol=ttrpc version=3 Jan 23 00:11:58.947258 systemd[1]: Started cri-containerd-b0cec263161593209e830b8de025bc4fb4bcd967f3c623e58fa782f0a5797be9.scope - libcontainer container b0cec263161593209e830b8de025bc4fb4bcd967f3c623e58fa782f0a5797be9. Jan 23 00:11:58.957284 systemd[1]: Started cri-containerd-1aaa5fcebe5ec20d70a3817721a5b1fa681ab663bf732a7a2df40ff67b13de0c.scope - libcontainer container 1aaa5fcebe5ec20d70a3817721a5b1fa681ab663bf732a7a2df40ff67b13de0c. 
Jan 23 00:11:59.007137 containerd[1545]: time="2026-01-23T00:11:59.007098641Z" level=info msg="StartContainer for \"b0cec263161593209e830b8de025bc4fb4bcd967f3c623e58fa782f0a5797be9\" returns successfully" Jan 23 00:11:59.020096 containerd[1545]: time="2026-01-23T00:11:59.020039432Z" level=info msg="StartContainer for \"1aaa5fcebe5ec20d70a3817721a5b1fa681ab663bf732a7a2df40ff67b13de0c\" returns successfully" Jan 23 00:12:02.468586 kubelet[2760]: E0123 00:12:02.467993 2760 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:55656->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4459-2-2-n-105ad3c88f.188d33bc329bf120 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4459-2-2-n-105ad3c88f,UID:e0cfe3c9b8d257530672feb004f7b876,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4459-2-2-n-105ad3c88f,},FirstTimestamp:2026-01-23 00:11:52.022421792 +0000 UTC m=+221.982792240,LastTimestamp:2026-01-23 00:11:52.022421792 +0000 UTC m=+221.982792240,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459-2-2-n-105ad3c88f,}"