Sep 16 04:25:28.782590 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Sep 16 04:25:28.782617 kernel: Linux version 6.12.47-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Tue Sep 16 03:05:48 -00 2025 Sep 16 04:25:28.782628 kernel: KASLR enabled Sep 16 04:25:28.782634 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II Sep 16 04:25:28.782640 kernel: efi: SMBIOS 3.0=0x139ed0000 MEMATTR=0x1390bb018 ACPI 2.0=0x136760018 RNG=0x13676e918 MEMRESERVE=0x136b41218 Sep 16 04:25:28.782645 kernel: random: crng init done Sep 16 04:25:28.782652 kernel: secureboot: Secure boot disabled Sep 16 04:25:28.782658 kernel: ACPI: Early table checksum verification disabled Sep 16 04:25:28.782664 kernel: ACPI: RSDP 0x0000000136760018 000024 (v02 BOCHS ) Sep 16 04:25:28.782669 kernel: ACPI: XSDT 0x000000013676FE98 00006C (v01 BOCHS BXPC 00000001 01000013) Sep 16 04:25:28.782677 kernel: ACPI: FACP 0x000000013676FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Sep 16 04:25:28.782683 kernel: ACPI: DSDT 0x0000000136767518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 16 04:25:28.782689 kernel: ACPI: APIC 0x000000013676FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001) Sep 16 04:25:28.782695 kernel: ACPI: PPTT 0x000000013676FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 16 04:25:28.782702 kernel: ACPI: GTDT 0x000000013676D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 16 04:25:28.782709 kernel: ACPI: MCFG 0x000000013676FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 16 04:25:28.782716 kernel: ACPI: SPCR 0x000000013676E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 16 04:25:28.782722 kernel: ACPI: DBG2 0x000000013676E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Sep 16 04:25:28.782728 kernel: ACPI: IORT 0x000000013676E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Sep 16 04:25:28.782734 kernel: ACPI: BGRT 0x000000013676E798 000038 (v01 INTEL EDK2 00000002 01000013) Sep 16 04:25:28.782740 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600 Sep 16 04:25:28.782746 kernel: ACPI: Use ACPI SPCR as default console: No Sep 16 04:25:28.784791 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff] Sep 16 04:25:28.784805 kernel: NODE_DATA(0) allocated [mem 0x13967da00-0x139684fff] Sep 16 04:25:28.784811 kernel: Zone ranges: Sep 16 04:25:28.784818 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff] Sep 16 04:25:28.784829 kernel: DMA32 empty Sep 16 04:25:28.784835 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff] Sep 16 04:25:28.784841 kernel: Device empty Sep 16 04:25:28.784847 kernel: Movable zone start for each node Sep 16 04:25:28.784854 kernel: Early memory node ranges Sep 16 04:25:28.784860 kernel: node 0: [mem 0x0000000040000000-0x000000013666ffff] Sep 16 04:25:28.784866 kernel: node 0: [mem 0x0000000136670000-0x000000013667ffff] Sep 16 04:25:28.784872 kernel: node 0: [mem 0x0000000136680000-0x000000013676ffff] Sep 16 04:25:28.784878 kernel: node 0: [mem 0x0000000136770000-0x0000000136b3ffff] Sep 16 04:25:28.784884 kernel: node 0: [mem 0x0000000136b40000-0x0000000139e1ffff] Sep 16 04:25:28.784890 kernel: node 0: [mem 0x0000000139e20000-0x0000000139eaffff] Sep 16 04:25:28.784897 kernel: node 0: [mem 0x0000000139eb0000-0x0000000139ebffff] Sep 16 04:25:28.784904 kernel: node 0: [mem 0x0000000139ec0000-0x0000000139fdffff] Sep 16 04:25:28.784911 kernel: node 0: [mem 0x0000000139fe0000-0x0000000139ffffff] Sep 16 
04:25:28.784919 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff] Sep 16 04:25:28.784935 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges Sep 16 04:25:28.784942 kernel: cma: Reserved 16 MiB at 0x00000000ff000000 on node -1 Sep 16 04:25:28.784951 kernel: psci: probing for conduit method from ACPI. Sep 16 04:25:28.784957 kernel: psci: PSCIv1.1 detected in firmware. Sep 16 04:25:28.784964 kernel: psci: Using standard PSCI v0.2 function IDs Sep 16 04:25:28.784970 kernel: psci: Trusted OS migration not required Sep 16 04:25:28.784977 kernel: psci: SMC Calling Convention v1.1 Sep 16 04:25:28.784984 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Sep 16 04:25:28.784992 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168 Sep 16 04:25:28.785000 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096 Sep 16 04:25:28.785007 kernel: pcpu-alloc: [0] 0 [0] 1 Sep 16 04:25:28.785016 kernel: Detected PIPT I-cache on CPU0 Sep 16 04:25:28.785023 kernel: CPU features: detected: GIC system register CPU interface Sep 16 04:25:28.785032 kernel: CPU features: detected: Spectre-v4 Sep 16 04:25:28.785040 kernel: CPU features: detected: Spectre-BHB Sep 16 04:25:28.785049 kernel: CPU features: kernel page table isolation forced ON by KASLR Sep 16 04:25:28.785056 kernel: CPU features: detected: Kernel page table isolation (KPTI) Sep 16 04:25:28.785063 kernel: CPU features: detected: ARM erratum 1418040 Sep 16 04:25:28.785070 kernel: CPU features: detected: SSBS not fully self-synchronizing Sep 16 04:25:28.785078 kernel: alternatives: applying boot alternatives Sep 16 04:25:28.785086 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=eff5cc3c399cf6fc52e3071751a09276871b099078da6d1b1a498405d04a9313 Sep 16 04:25:28.785093 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Sep 16 04:25:28.785100 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Sep 16 04:25:28.785108 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 16 04:25:28.785114 kernel: Fallback order for Node 0: 0 Sep 16 04:25:28.785121 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1024000 Sep 16 04:25:28.785127 kernel: Policy zone: Normal Sep 16 04:25:28.785133 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 16 04:25:28.785140 kernel: software IO TLB: area num 2. Sep 16 04:25:28.785147 kernel: software IO TLB: mapped [mem 0x00000000fb000000-0x00000000ff000000] (64MB) Sep 16 04:25:28.785153 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Sep 16 04:25:28.785159 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 16 04:25:28.785167 kernel: rcu: RCU event tracing is enabled. Sep 16 04:25:28.785174 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Sep 16 04:25:28.785180 kernel: Trampoline variant of Tasks RCU enabled. Sep 16 04:25:28.785188 kernel: Tracing variant of Tasks RCU enabled. Sep 16 04:25:28.785195 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Sep 16 04:25:28.785202 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Sep 16 04:25:28.785208 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Sep 16 04:25:28.785215 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Sep 16 04:25:28.785221 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Sep 16 04:25:28.785228 kernel: GICv3: 256 SPIs implemented Sep 16 04:25:28.785234 kernel: GICv3: 0 Extended SPIs implemented Sep 16 04:25:28.785241 kernel: Root IRQ handler: gic_handle_irq Sep 16 04:25:28.785247 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Sep 16 04:25:28.785253 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0 Sep 16 04:25:28.785260 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Sep 16 04:25:28.785268 kernel: ITS [mem 0x08080000-0x0809ffff] Sep 16 04:25:28.785274 kernel: ITS@0x0000000008080000: allocated 8192 Devices @100100000 (indirect, esz 8, psz 64K, shr 1) Sep 16 04:25:28.785281 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @100110000 (flat, esz 8, psz 64K, shr 1) Sep 16 04:25:28.785288 kernel: GICv3: using LPI property table @0x0000000100120000 Sep 16 04:25:28.785294 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000100130000 Sep 16 04:25:28.785301 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Sep 16 04:25:28.785307 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 16 04:25:28.785314 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Sep 16 04:25:28.785320 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Sep 16 04:25:28.785327 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Sep 16 04:25:28.785333 kernel: Console: colour dummy device 80x25 Sep 16 04:25:28.785341 kernel: ACPI: Core revision 20240827 Sep 16 04:25:28.785349 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Sep 16 04:25:28.785355 kernel: pid_max: default: 32768 minimum: 301 Sep 16 04:25:28.785362 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Sep 16 04:25:28.785369 kernel: landlock: Up and running. Sep 16 04:25:28.785375 kernel: SELinux: Initializing. Sep 16 04:25:28.785382 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 16 04:25:28.785389 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 16 04:25:28.785395 kernel: rcu: Hierarchical SRCU implementation. Sep 16 04:25:28.785403 kernel: rcu: Max phase no-delay instances is 400. Sep 16 04:25:28.785410 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Sep 16 04:25:28.785417 kernel: Remapping and enabling EFI services. Sep 16 04:25:28.785424 kernel: smp: Bringing up secondary CPUs ... Sep 16 04:25:28.785430 kernel: Detected PIPT I-cache on CPU1 Sep 16 04:25:28.785437 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Sep 16 04:25:28.785444 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100140000 Sep 16 04:25:28.785451 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 16 04:25:28.785457 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Sep 16 04:25:28.785465 kernel: smp: Brought up 1 node, 2 CPUs Sep 16 04:25:28.785477 kernel: SMP: Total of 2 processors activated. 
Sep 16 04:25:28.785484 kernel: CPU: All CPU(s) started at EL1 Sep 16 04:25:28.785493 kernel: CPU features: detected: 32-bit EL0 Support Sep 16 04:25:28.785500 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Sep 16 04:25:28.785507 kernel: CPU features: detected: Common not Private translations Sep 16 04:25:28.785514 kernel: CPU features: detected: CRC32 instructions Sep 16 04:25:28.785521 kernel: CPU features: detected: Enhanced Virtualization Traps Sep 16 04:25:28.785530 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Sep 16 04:25:28.785537 kernel: CPU features: detected: LSE atomic instructions Sep 16 04:25:28.785543 kernel: CPU features: detected: Privileged Access Never Sep 16 04:25:28.785551 kernel: CPU features: detected: RAS Extension Support Sep 16 04:25:28.785558 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Sep 16 04:25:28.785565 kernel: alternatives: applying system-wide alternatives Sep 16 04:25:28.785572 kernel: CPU features: detected: Hardware dirty bit management on CPU0-1 Sep 16 04:25:28.785579 kernel: Memory: 3859556K/4096000K available (11136K kernel code, 2440K rwdata, 9068K rodata, 38976K init, 1038K bss, 214964K reserved, 16384K cma-reserved) Sep 16 04:25:28.785586 kernel: devtmpfs: initialized Sep 16 04:25:28.785595 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 16 04:25:28.785602 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Sep 16 04:25:28.785609 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Sep 16 04:25:28.785616 kernel: 0 pages in range for non-PLT usage Sep 16 04:25:28.785623 kernel: 508560 pages in range for PLT usage Sep 16 04:25:28.785630 kernel: pinctrl core: initialized pinctrl subsystem Sep 16 04:25:28.785637 kernel: SMBIOS 3.0.0 present. Sep 16 04:25:28.785644 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017 Sep 16 04:25:28.785651 kernel: DMI: Memory slots populated: 1/1 Sep 16 04:25:28.785660 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 16 04:25:28.785667 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Sep 16 04:25:28.785675 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Sep 16 04:25:28.785682 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Sep 16 04:25:28.785689 kernel: audit: initializing netlink subsys (disabled) Sep 16 04:25:28.785696 kernel: audit: type=2000 audit(0.018:1): state=initialized audit_enabled=0 res=1 Sep 16 04:25:28.785703 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 16 04:25:28.785710 kernel: cpuidle: using governor menu Sep 16 04:25:28.785717 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Sep 16 04:25:28.785725 kernel: ASID allocator initialised with 32768 entries Sep 16 04:25:28.785732 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 16 04:25:28.785739 kernel: Serial: AMBA PL011 UART driver Sep 16 04:25:28.785746 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Sep 16 04:25:28.785780 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Sep 16 04:25:28.785788 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Sep 16 04:25:28.785795 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Sep 16 04:25:28.785802 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Sep 16 04:25:28.785809 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Sep 16 04:25:28.785819 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Sep 16 04:25:28.785827 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Sep 16 04:25:28.785834 kernel: ACPI: Added _OSI(Module Device) Sep 16 04:25:28.785841 kernel: ACPI: Added _OSI(Processor Device) Sep 16 04:25:28.785848 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 16 04:25:28.785855 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Sep 16 04:25:28.785862 kernel: ACPI: Interpreter enabled Sep 16 04:25:28.785869 kernel: ACPI: Using GIC for interrupt routing Sep 16 04:25:28.785876 kernel: ACPI: MCFG table detected, 1 entries Sep 16 04:25:28.785885 kernel: ACPI: CPU0 has been hot-added Sep 16 04:25:28.785892 kernel: ACPI: CPU1 has been hot-added Sep 16 04:25:28.785899 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Sep 16 04:25:28.785907 kernel: printk: legacy console [ttyAMA0] enabled Sep 16 04:25:28.785914 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Sep 16 04:25:28.786115 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Sep 16 04:25:28.786195 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Sep 16 04:25:28.786257 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Sep 16 04:25:28.786314 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Sep 16 04:25:28.786371 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Sep 16 04:25:28.786381 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Sep 16 04:25:28.786388 kernel: PCI host bridge to bus 0000:00 Sep 16 04:25:28.786455 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Sep 16 04:25:28.786508 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Sep 16 04:25:28.786560 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Sep 16 04:25:28.786613 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Sep 16 04:25:28.786689 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint Sep 16 04:25:28.787824 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000 conventional PCI endpoint Sep 16 04:25:28.787910 kernel: pci 0000:00:01.0: BAR 1 [mem 0x11289000-0x11289fff] Sep 16 04:25:28.788021 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000600000-0x8000603fff 64bit pref] Sep 16 04:25:28.788096 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port Sep 16 04:25:28.788163 kernel: pci 0000:00:02.0: BAR 0 [mem 0x11288000-0x11288fff] Sep 16 04:25:28.788223 kernel: pci 0000:00:02.0: PCI bridge to [bus 01] Sep 16 
04:25:28.788282 kernel: pci 0000:00:02.0: bridge window [mem 0x11000000-0x111fffff] Sep 16 04:25:28.788343 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80000fffff 64bit pref] Sep 16 04:25:28.788416 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port Sep 16 04:25:28.788475 kernel: pci 0000:00:02.1: BAR 0 [mem 0x11287000-0x11287fff] Sep 16 04:25:28.788534 kernel: pci 0000:00:02.1: PCI bridge to [bus 02] Sep 16 04:25:28.788599 kernel: pci 0000:00:02.1: bridge window [mem 0x10e00000-0x10ffffff] Sep 16 04:25:28.788664 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port Sep 16 04:25:28.788724 kernel: pci 0000:00:02.2: BAR 0 [mem 0x11286000-0x11286fff] Sep 16 04:25:28.788801 kernel: pci 0000:00:02.2: PCI bridge to [bus 03] Sep 16 04:25:28.788861 kernel: pci 0000:00:02.2: bridge window [mem 0x10c00000-0x10dfffff] Sep 16 04:25:28.788918 kernel: pci 0000:00:02.2: bridge window [mem 0x8000100000-0x80001fffff 64bit pref] Sep 16 04:25:28.788999 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port Sep 16 04:25:28.789063 kernel: pci 0000:00:02.3: BAR 0 [mem 0x11285000-0x11285fff] Sep 16 04:25:28.789122 kernel: pci 0000:00:02.3: PCI bridge to [bus 04] Sep 16 04:25:28.789179 kernel: pci 0000:00:02.3: bridge window [mem 0x10a00000-0x10bfffff] Sep 16 04:25:28.789236 kernel: pci 0000:00:02.3: bridge window [mem 0x8000200000-0x80002fffff 64bit pref] Sep 16 04:25:28.789303 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port Sep 16 04:25:28.789364 kernel: pci 0000:00:02.4: BAR 0 [mem 0x11284000-0x11284fff] Sep 16 04:25:28.789424 kernel: pci 0000:00:02.4: PCI bridge to [bus 05] Sep 16 04:25:28.789487 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff] Sep 16 04:25:28.789546 kernel: pci 0000:00:02.4: bridge window [mem 0x8000300000-0x80003fffff 64bit pref] Sep 16 04:25:28.789614 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port Sep 16 04:25:28.789673 kernel: pci 0000:00:02.5: BAR 0 [mem 0x11283000-0x11283fff] Sep 16 04:25:28.789731 kernel: pci 0000:00:02.5: PCI bridge to [bus 06] Sep 16 04:25:28.791746 kernel: pci 0000:00:02.5: bridge window [mem 0x10600000-0x107fffff] Sep 16 04:25:28.791854 kernel: pci 0000:00:02.5: bridge window [mem 0x8000400000-0x80004fffff 64bit pref] Sep 16 04:25:28.791960 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port Sep 16 04:25:28.792030 kernel: pci 0000:00:02.6: BAR 0 [mem 0x11282000-0x11282fff] Sep 16 04:25:28.792107 kernel: pci 0000:00:02.6: PCI bridge to [bus 07] Sep 16 04:25:28.792170 kernel: pci 0000:00:02.6: bridge window [mem 0x10400000-0x105fffff] Sep 16 04:25:28.792229 kernel: pci 0000:00:02.6: bridge window [mem 0x8000500000-0x80005fffff 64bit pref] Sep 16 04:25:28.792299 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port Sep 16 04:25:28.792360 kernel: pci 0000:00:02.7: BAR 0 [mem 0x11281000-0x11281fff] Sep 16 04:25:28.792422 kernel: pci 0000:00:02.7: PCI bridge to [bus 08] Sep 16 04:25:28.792481 kernel: pci 0000:00:02.7: bridge window [mem 0x10200000-0x103fffff] Sep 16 04:25:28.792547 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port Sep 16 04:25:28.792606 kernel: pci 0000:00:03.0: BAR 0 [mem 0x11280000-0x11280fff] Sep 16 04:25:28.792664 kernel: pci 0000:00:03.0: PCI bridge to [bus 09] Sep 16 04:25:28.792721 kernel: pci 0000:00:03.0: bridge window [mem 0x10000000-0x101fffff] Sep 16 04:25:28.792803 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 
0x070002 conventional PCI endpoint Sep 16 04:25:28.792864 kernel: pci 0000:00:04.0: BAR 0 [io 0x0000-0x0007] Sep 16 04:25:28.792952 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000 PCIe Endpoint Sep 16 04:25:28.793018 kernel: pci 0000:01:00.0: BAR 1 [mem 0x11000000-0x11000fff] Sep 16 04:25:28.793079 kernel: pci 0000:01:00.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref] Sep 16 04:25:28.793138 kernel: pci 0000:01:00.0: ROM [mem 0xfff80000-0xffffffff pref] Sep 16 04:25:28.793206 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330 PCIe Endpoint Sep 16 04:25:28.793270 kernel: pci 0000:02:00.0: BAR 0 [mem 0x10e00000-0x10e03fff 64bit] Sep 16 04:25:28.793339 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000 PCIe Endpoint Sep 16 04:25:28.793400 kernel: pci 0000:03:00.0: BAR 1 [mem 0x10c00000-0x10c00fff] Sep 16 04:25:28.793461 kernel: pci 0000:03:00.0: BAR 4 [mem 0x8000100000-0x8000103fff 64bit pref] Sep 16 04:25:28.793531 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00 PCIe Endpoint Sep 16 04:25:28.793605 kernel: pci 0000:04:00.0: BAR 4 [mem 0x8000200000-0x8000203fff 64bit pref] Sep 16 04:25:28.793699 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00 PCIe Endpoint Sep 16 04:25:28.793849 kernel: pci 0000:05:00.0: BAR 4 [mem 0x8000300000-0x8000303fff 64bit pref] Sep 16 04:25:28.794085 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000 PCIe Endpoint Sep 16 04:25:28.794172 kernel: pci 0000:06:00.0: BAR 1 [mem 0x10600000-0x10600fff] Sep 16 04:25:28.794237 kernel: pci 0000:06:00.0: BAR 4 [mem 0x8000400000-0x8000403fff 64bit pref] Sep 16 04:25:28.794305 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000 PCIe Endpoint Sep 16 04:25:28.794367 kernel: pci 0000:07:00.0: BAR 1 [mem 0x10400000-0x10400fff] Sep 16 04:25:28.794436 kernel: pci 0000:07:00.0: BAR 4 [mem 0x8000500000-0x8000503fff 64bit pref] Sep 16 04:25:28.794495 kernel: pci 0000:07:00.0: ROM [mem 0xfff80000-0xffffffff pref] Sep 16 04:25:28.794558 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 Sep 16 04:25:28.794616 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000 Sep 16 04:25:28.794674 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000 Sep 16 04:25:28.794738 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 Sep 16 04:25:28.794839 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 Sep 16 04:25:28.794933 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000 Sep 16 04:25:28.795009 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Sep 16 04:25:28.795069 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000 Sep 16 04:25:28.795126 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000 Sep 16 04:25:28.795207 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Sep 16 04:25:28.795295 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000 Sep 16 04:25:28.795360 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 Sep 16 
04:25:28.795434 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000 Sep 16 04:25:28.795495 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000 Sep 16 04:25:28.795554 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x000fffff] to [bus 05] add_size 200000 add_align 100000 Sep 16 04:25:28.795615 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Sep 16 04:25:28.795676 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000 Sep 16 04:25:28.795734 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000 Sep 16 04:25:28.795819 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Sep 16 04:25:28.795881 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000 Sep 16 04:25:28.795977 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000 Sep 16 04:25:28.796047 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Sep 16 04:25:28.796106 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000 Sep 16 04:25:28.796164 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000 Sep 16 04:25:28.796226 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Sep 16 04:25:28.796288 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000 Sep 16 04:25:28.796345 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000 Sep 16 04:25:28.796404 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff]: assigned Sep 16 04:25:28.796462 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]: assigned Sep 16 04:25:28.796520 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff]: assigned Sep 16 04:25:28.796578 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]: assigned Sep 16 04:25:28.796653 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff]: assigned Sep 16 04:25:28.797669 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]: assigned Sep 16 04:25:28.797798 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff]: assigned Sep 16 04:25:28.798443 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]: assigned Sep 16 04:25:28.798521 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]: assigned Sep 16 04:25:28.798580 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]: assigned Sep 16 04:25:28.798642 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff]: assigned Sep 16 04:25:28.798700 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]: assigned Sep 16 04:25:28.798788 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff]: assigned Sep 16 04:25:28.798864 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]: assigned Sep 16 04:25:28.798968 kernel: pci 0000:00:02.7: bridge window [mem 
0x10e00000-0x10ffffff]: assigned Sep 16 04:25:28.799040 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]: assigned Sep 16 04:25:28.799103 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff]: assigned Sep 16 04:25:28.799162 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref]: assigned Sep 16 04:25:28.799226 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8001200000-0x8001203fff 64bit pref]: assigned Sep 16 04:25:28.799312 kernel: pci 0000:00:01.0: BAR 1 [mem 0x11200000-0x11200fff]: assigned Sep 16 04:25:28.799381 kernel: pci 0000:00:02.0: BAR 0 [mem 0x11201000-0x11201fff]: assigned Sep 16 04:25:28.799452 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]: assigned Sep 16 04:25:28.799518 kernel: pci 0000:00:02.1: BAR 0 [mem 0x11202000-0x11202fff]: assigned Sep 16 04:25:28.799581 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]: assigned Sep 16 04:25:28.799645 kernel: pci 0000:00:02.2: BAR 0 [mem 0x11203000-0x11203fff]: assigned Sep 16 04:25:28.799710 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]: assigned Sep 16 04:25:28.800320 kernel: pci 0000:00:02.3: BAR 0 [mem 0x11204000-0x11204fff]: assigned Sep 16 04:25:28.800400 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]: assigned Sep 16 04:25:28.800464 kernel: pci 0000:00:02.4: BAR 0 [mem 0x11205000-0x11205fff]: assigned Sep 16 04:25:28.800526 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]: assigned Sep 16 04:25:28.800590 kernel: pci 0000:00:02.5: BAR 0 [mem 0x11206000-0x11206fff]: assigned Sep 16 04:25:28.800656 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]: assigned Sep 16 04:25:28.800731 kernel: pci 0000:00:02.6: BAR 0 [mem 0x11207000-0x11207fff]: assigned Sep 16 04:25:28.800827 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]: assigned Sep 16 04:25:28.800894 kernel: pci 0000:00:02.7: BAR 0 [mem 0x11208000-0x11208fff]: assigned Sep 16 04:25:28.801013 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]: assigned Sep 16 04:25:28.801085 kernel: pci 0000:00:03.0: BAR 0 [mem 0x11209000-0x11209fff]: assigned Sep 16 04:25:28.801148 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff]: assigned Sep 16 04:25:28.801215 kernel: pci 0000:00:04.0: BAR 0 [io 0xa000-0xa007]: assigned Sep 16 04:25:28.801286 kernel: pci 0000:01:00.0: ROM [mem 0x10000000-0x1007ffff pref]: assigned Sep 16 04:25:28.801350 kernel: pci 0000:01:00.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned Sep 16 04:25:28.801420 kernel: pci 0000:01:00.0: BAR 1 [mem 0x10080000-0x10080fff]: assigned Sep 16 04:25:28.801487 kernel: pci 0000:00:02.0: PCI bridge to [bus 01] Sep 16 04:25:28.801549 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff] Sep 16 04:25:28.801610 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff] Sep 16 04:25:28.801701 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref] Sep 16 04:25:28.803975 kernel: pci 0000:02:00.0: BAR 0 [mem 0x10200000-0x10203fff 64bit]: assigned Sep 16 04:25:28.804071 kernel: pci 0000:00:02.1: PCI bridge to [bus 02] Sep 16 04:25:28.804152 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff] Sep 16 04:25:28.804219 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff] Sep 16 04:25:28.804283 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref] Sep 16 04:25:28.804358 kernel: pci 0000:03:00.0: BAR 4 [mem 0x8000400000-0x8000403fff 64bit pref]: assigned Sep 16 04:25:28.804426 kernel: 
pci 0000:03:00.0: BAR 1 [mem 0x10400000-0x10400fff]: assigned Sep 16 04:25:28.804488 kernel: pci 0000:00:02.2: PCI bridge to [bus 03] Sep 16 04:25:28.804546 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff] Sep 16 04:25:28.804607 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff] Sep 16 04:25:28.804664 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref] Sep 16 04:25:28.804732 kernel: pci 0000:04:00.0: BAR 4 [mem 0x8000600000-0x8000603fff 64bit pref]: assigned Sep 16 04:25:28.804822 kernel: pci 0000:00:02.3: PCI bridge to [bus 04] Sep 16 04:25:28.804882 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff] Sep 16 04:25:28.804954 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff] Sep 16 04:25:28.805015 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref] Sep 16 04:25:28.805085 kernel: pci 0000:05:00.0: BAR 4 [mem 0x8000800000-0x8000803fff 64bit pref]: assigned Sep 16 04:25:28.805146 kernel: pci 0000:00:02.4: PCI bridge to [bus 05] Sep 16 04:25:28.805204 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff] Sep 16 04:25:28.805262 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff] Sep 16 04:25:28.805320 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref] Sep 16 04:25:28.805385 kernel: pci 0000:06:00.0: BAR 4 [mem 0x8000a00000-0x8000a03fff 64bit pref]: assigned Sep 16 04:25:28.805450 kernel: pci 0000:06:00.0: BAR 1 [mem 0x10a00000-0x10a00fff]: assigned Sep 16 04:25:28.805515 kernel: pci 0000:00:02.5: PCI bridge to [bus 06] Sep 16 04:25:28.805577 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff] Sep 16 04:25:28.805646 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff] Sep 16 04:25:28.805703 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref] Sep 16 04:25:28.806971 kernel: pci 0000:07:00.0: ROM [mem 0x10c00000-0x10c7ffff pref]: assigned Sep 16 04:25:28.807069 kernel: pci 0000:07:00.0: BAR 4 [mem 0x8000c00000-0x8000c03fff 64bit pref]: assigned Sep 16 04:25:28.807132 kernel: pci 0000:07:00.0: BAR 1 [mem 0x10c80000-0x10c80fff]: assigned Sep 16 04:25:28.807205 kernel: pci 0000:00:02.6: PCI bridge to [bus 07] Sep 16 04:25:28.807270 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff] Sep 16 04:25:28.807333 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff] Sep 16 04:25:28.807392 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref] Sep 16 04:25:28.807458 kernel: pci 0000:00:02.7: PCI bridge to [bus 08] Sep 16 04:25:28.807516 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff] Sep 16 04:25:28.807574 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff] Sep 16 04:25:28.807631 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref] Sep 16 04:25:28.807694 kernel: pci 0000:00:03.0: PCI bridge to [bus 09] Sep 16 04:25:28.807767 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff] Sep 16 04:25:28.807834 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff] Sep 16 04:25:28.807897 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref] Sep 16 04:25:28.808000 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Sep 16 04:25:28.808058 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Sep 16 04:25:28.808111 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Sep 16 04:25:28.808181 kernel: pci_bus 
0000:01: resource 0 [io 0x1000-0x1fff] Sep 16 04:25:28.808238 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff] Sep 16 04:25:28.808302 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref] Sep 16 04:25:28.808370 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff] Sep 16 04:25:28.808425 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff] Sep 16 04:25:28.808479 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref] Sep 16 04:25:28.808545 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff] Sep 16 04:25:28.808600 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff] Sep 16 04:25:28.808653 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref] Sep 16 04:25:28.808719 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff] Sep 16 04:25:28.810812 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff] Sep 16 04:25:28.810886 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref] Sep 16 04:25:28.810972 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff] Sep 16 04:25:28.811031 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff] Sep 16 04:25:28.811085 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref] Sep 16 04:25:28.811150 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff] Sep 16 04:25:28.811215 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff] Sep 16 04:25:28.811270 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref] Sep 16 04:25:28.811335 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff] Sep 16 04:25:28.811392 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff] Sep 16 04:25:28.811447 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref] Sep 16 04:25:28.811509 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff] Sep 16 04:25:28.811567 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff] Sep 16 04:25:28.811622 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref] Sep 16 04:25:28.811685 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff] Sep 16 04:25:28.811741 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff] Sep 16 04:25:28.811825 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref] Sep 16 04:25:28.811836 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Sep 16 04:25:28.811844 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Sep 16 04:25:28.811851 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Sep 16 04:25:28.811862 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Sep 16 04:25:28.811869 kernel: iommu: Default domain type: Translated Sep 16 04:25:28.811876 kernel: iommu: DMA domain TLB invalidation policy: strict mode Sep 16 04:25:28.811886 kernel: efivars: Registered efivars operations Sep 16 04:25:28.811895 kernel: vgaarb: loaded Sep 16 04:25:28.811903 kernel: clocksource: Switched to clocksource arch_sys_counter Sep 16 04:25:28.811912 kernel: VFS: Disk quotas dquot_6.6.0 Sep 16 04:25:28.811921 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 16 04:25:28.811976 kernel: pnp: PnP ACPI init Sep 16 04:25:28.812068 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Sep 16 04:25:28.812080 kernel: pnp: PnP ACPI: found 1 devices Sep 16 04:25:28.812088 kernel: NET: Registered PF_INET protocol family Sep 16 04:25:28.812095 kernel: IP idents hash table 
entries: 65536 (order: 7, 524288 bytes, linear) Sep 16 04:25:28.812102 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Sep 16 04:25:28.812110 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 16 04:25:28.812118 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Sep 16 04:25:28.812126 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Sep 16 04:25:28.812135 kernel: TCP: Hash tables configured (established 32768 bind 32768) Sep 16 04:25:28.812144 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 16 04:25:28.812151 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 16 04:25:28.812159 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 16 04:25:28.812227 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002) Sep 16 04:25:28.812238 kernel: PCI: CLS 0 bytes, default 64 Sep 16 04:25:28.812245 kernel: kvm [1]: HYP mode not available Sep 16 04:25:28.812253 kernel: Initialise system trusted keyrings Sep 16 04:25:28.812260 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Sep 16 04:25:28.812269 kernel: Key type asymmetric registered Sep 16 04:25:28.812276 kernel: Asymmetric key parser 'x509' registered Sep 16 04:25:28.812284 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Sep 16 04:25:28.812291 kernel: io scheduler mq-deadline registered Sep 16 04:25:28.812299 kernel: io scheduler kyber registered Sep 16 04:25:28.812307 kernel: io scheduler bfq registered Sep 16 04:25:28.812315 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Sep 16 04:25:28.812380 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50 Sep 16 04:25:28.812440 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50 Sep 16 04:25:28.812503 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 16 04:25:28.812565 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51 Sep 16 04:25:28.812624 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51 Sep 16 04:25:28.812683 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 16 04:25:28.812745 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 52 Sep 16 04:25:28.815328 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52 Sep 16 04:25:28.815394 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 16 04:25:28.815458 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53 Sep 16 04:25:28.815524 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53 Sep 16 04:25:28.815582 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 16 04:25:28.815644 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54 Sep 16 04:25:28.815706 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54 Sep 16 04:25:28.816881 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 16 04:25:28.817016 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55 Sep 16 04:25:28.817085 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55 Sep 16 04:25:28.817154 kernel: pcieport 0000:00:02.5: pciehp: 
Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 16 04:25:28.817223 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56 Sep 16 04:25:28.817285 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56 Sep 16 04:25:28.817348 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 16 04:25:28.817435 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57 Sep 16 04:25:28.817500 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57 Sep 16 04:25:28.817559 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 16 04:25:28.817570 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38 Sep 16 04:25:28.817633 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58 Sep 16 04:25:28.817693 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58 Sep 16 04:25:28.819815 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 16 04:25:28.819856 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Sep 16 04:25:28.819867 kernel: ACPI: button: Power Button [PWRB] Sep 16 04:25:28.819876 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Sep 16 04:25:28.820040 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002) Sep 16 04:25:28.820132 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002) Sep 16 04:25:28.820154 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 16 04:25:28.820164 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Sep 16 04:25:28.820239 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001) Sep 16 04:25:28.820252 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A Sep 16 04:25:28.820261 kernel: thunder_xcv, ver 1.0 Sep 16 04:25:28.820270 kernel: thunder_bgx, ver 1.0 Sep 16 04:25:28.820278 kernel: nicpf, ver 1.0 Sep 16 04:25:28.820287 kernel: nicvf, ver 1.0 Sep 16 04:25:28.820372 kernel: rtc-efi rtc-efi.0: registered as rtc0 Sep 16 04:25:28.820442 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-16T04:25:28 UTC (1757996728) Sep 16 04:25:28.820454 kernel: hid: raw HID events driver (C) Jiri Kosina Sep 16 04:25:28.820463 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available Sep 16 04:25:28.820472 kernel: watchdog: NMI not fully supported Sep 16 04:25:28.820480 kernel: watchdog: Hard watchdog permanently disabled Sep 16 04:25:28.820489 kernel: NET: Registered PF_INET6 protocol family Sep 16 04:25:28.820498 kernel: Segment Routing with IPv6 Sep 16 04:25:28.820506 kernel: In-situ OAM (IOAM) with IPv6 Sep 16 04:25:28.820517 kernel: NET: Registered PF_PACKET protocol family Sep 16 04:25:28.820526 kernel: Key type dns_resolver registered Sep 16 04:25:28.820535 kernel: registered taskstats version 1 Sep 16 04:25:28.820544 kernel: Loading compiled-in X.509 certificates Sep 16 04:25:28.820553 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.47-flatcar: 99eb88579c3d58869b2224a85ec8efa5647af805' Sep 16 04:25:28.820562 kernel: Demotion targets for Node 0: null Sep 16 04:25:28.820570 kernel: Key type .fscrypt registered Sep 16 04:25:28.820579 kernel: Key type fscrypt-provisioning registered Sep 16 04:25:28.820587 kernel: ima: No TPM chip found, activating TPM-bypass! 
Sep 16 04:25:28.820598 kernel: ima: Allocated hash algorithm: sha1 Sep 16 04:25:28.820606 kernel: ima: No architecture policies found Sep 16 04:25:28.820615 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Sep 16 04:25:28.820624 kernel: clk: Disabling unused clocks Sep 16 04:25:28.820632 kernel: PM: genpd: Disabling unused power domains Sep 16 04:25:28.820641 kernel: Warning: unable to open an initial console. Sep 16 04:25:28.820650 kernel: Freeing unused kernel memory: 38976K Sep 16 04:25:28.820659 kernel: Run /init as init process Sep 16 04:25:28.820667 kernel: with arguments: Sep 16 04:25:28.820677 kernel: /init Sep 16 04:25:28.820686 kernel: with environment: Sep 16 04:25:28.820694 kernel: HOME=/ Sep 16 04:25:28.820703 kernel: TERM=linux Sep 16 04:25:28.820712 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 16 04:25:28.820721 systemd[1]: Successfully made /usr/ read-only. Sep 16 04:25:28.820734 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 16 04:25:28.820744 systemd[1]: Detected virtualization kvm. Sep 16 04:25:28.820788 systemd[1]: Detected architecture arm64. Sep 16 04:25:28.820800 systemd[1]: Running in initrd. Sep 16 04:25:28.820809 systemd[1]: No hostname configured, using default hostname. Sep 16 04:25:28.820818 systemd[1]: Hostname set to <localhost>. Sep 16 04:25:28.820831 systemd[1]: Initializing machine ID from VM UUID. Sep 16 04:25:28.820840 systemd[1]: Queued start job for default target initrd.target. Sep 16 04:25:28.820849 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 16 04:25:28.820859 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 16 04:25:28.820870 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 16 04:25:28.820880 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 16 04:25:28.820889 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 16 04:25:28.820899 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 16 04:25:28.820910 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 16 04:25:28.820919 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 16 04:25:28.820972 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 16 04:25:28.820986 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 16 04:25:28.820995 systemd[1]: Reached target paths.target - Path Units. Sep 16 04:25:28.821004 systemd[1]: Reached target slices.target - Slice Units. Sep 16 04:25:28.821013 systemd[1]: Reached target swap.target - Swaps. Sep 16 04:25:28.821022 systemd[1]: Reached target timers.target - Timer Units. Sep 16 04:25:28.821031 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 16 04:25:28.821041 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 16 04:25:28.821050 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 16 04:25:28.821060 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Sep 16 04:25:28.821070 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 16 04:25:28.821080 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 16 04:25:28.821089 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 16 04:25:28.821098 systemd[1]: Reached target sockets.target - Socket Units. Sep 16 04:25:28.821107 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 16 04:25:28.821116 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 16 04:25:28.821125 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 16 04:25:28.821135 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Sep 16 04:25:28.821161 systemd[1]: Starting systemd-fsck-usr.service... Sep 16 04:25:28.821171 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 16 04:25:28.821181 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 16 04:25:28.821190 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 16 04:25:28.821199 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 16 04:25:28.821211 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 16 04:25:28.821220 systemd[1]: Finished systemd-fsck-usr.service. Sep 16 04:25:28.821262 systemd-journald[245]: Collecting audit messages is disabled. Sep 16 04:25:28.821288 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 16 04:25:28.821298 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 16 04:25:28.821308 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 16 04:25:28.821317 kernel: Bridge firewalling registered Sep 16 04:25:28.821326 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 16 04:25:28.821336 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 16 04:25:28.821345 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 16 04:25:28.821355 systemd-journald[245]: Journal started Sep 16 04:25:28.821377 systemd-journald[245]: Runtime Journal (/run/log/journal/2b2be05848b645f98b73bbf7373403fb) is 8M, max 76.5M, 68.5M free. Sep 16 04:25:28.794099 systemd-modules-load[246]: Inserted module 'overlay' Sep 16 04:25:28.815688 systemd-modules-load[246]: Inserted module 'br_netfilter' Sep 16 04:25:28.826452 systemd[1]: Started systemd-journald.service - Journal Service. Sep 16 04:25:28.829873 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 16 04:25:28.833312 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 16 04:25:28.838999 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 16 04:25:28.842909 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Sep 16 04:25:28.858410 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 16 04:25:28.859326 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 16 04:25:28.860174 systemd-tmpfiles[268]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Sep 16 04:25:28.865088 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 16 04:25:28.869495 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 16 04:25:28.872187 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 16 04:25:28.905133 dracut-cmdline[284]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=eff5cc3c399cf6fc52e3071751a09276871b099078da6d1b1a498405d04a9313 Sep 16 04:25:28.920875 systemd-resolved[285]: Positive Trust Anchors: Sep 16 04:25:28.920892 systemd-resolved[285]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 16 04:25:28.920967 systemd-resolved[285]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 16 04:25:28.927162 systemd-resolved[285]: Defaulting to hostname 'linux'. Sep 16 04:25:28.929474 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 16 04:25:28.930270 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 16 04:25:29.008786 kernel: SCSI subsystem initialized Sep 16 04:25:29.013784 kernel: Loading iSCSI transport class v2.0-870. Sep 16 04:25:29.021785 kernel: iscsi: registered transport (tcp) Sep 16 04:25:29.034785 kernel: iscsi: registered transport (qla4xxx) Sep 16 04:25:29.034852 kernel: QLogic iSCSI HBA Driver Sep 16 04:25:29.054188 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 16 04:25:29.078516 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 16 04:25:29.085326 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 16 04:25:29.136611 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 16 04:25:29.139361 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... 
Sep 16 04:25:29.215812 kernel: raid6: neonx8 gen() 15590 MB/s Sep 16 04:25:29.232785 kernel: raid6: neonx4 gen() 15719 MB/s Sep 16 04:25:29.249787 kernel: raid6: neonx2 gen() 13047 MB/s Sep 16 04:25:29.266816 kernel: raid6: neonx1 gen() 10368 MB/s Sep 16 04:25:29.283813 kernel: raid6: int64x8 gen() 6864 MB/s Sep 16 04:25:29.300785 kernel: raid6: int64x4 gen() 7268 MB/s Sep 16 04:25:29.317804 kernel: raid6: int64x2 gen() 6043 MB/s Sep 16 04:25:29.334837 kernel: raid6: int64x1 gen() 5006 MB/s Sep 16 04:25:29.334937 kernel: raid6: using algorithm neonx4 gen() 15719 MB/s Sep 16 04:25:29.351823 kernel: raid6: .... xor() 12273 MB/s, rmw enabled Sep 16 04:25:29.351885 kernel: raid6: using neon recovery algorithm Sep 16 04:25:29.357063 kernel: xor: measuring software checksum speed Sep 16 04:25:29.357129 kernel: 8regs : 21596 MB/sec Sep 16 04:25:29.357153 kernel: 32regs : 21687 MB/sec Sep 16 04:25:29.357174 kernel: arm64_neon : 28070 MB/sec Sep 16 04:25:29.357799 kernel: xor: using function: arm64_neon (28070 MB/sec) Sep 16 04:25:29.412819 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 16 04:25:29.421834 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 16 04:25:29.429444 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 16 04:25:29.471366 systemd-udevd[493]: Using default interface naming scheme 'v255'. Sep 16 04:25:29.475704 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 16 04:25:29.480528 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 16 04:25:29.509252 dracut-pre-trigger[501]: rd.md=0: removing MD RAID activation Sep 16 04:25:29.538332 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 16 04:25:29.541584 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 16 04:25:29.602948 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 16 04:25:29.606258 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 16 04:25:29.694798 kernel: virtio_scsi virtio5: 2/0/0 default/read/poll queues Sep 16 04:25:29.702845 kernel: scsi host0: Virtio SCSI HBA Sep 16 04:25:29.707780 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5 Sep 16 04:25:29.707874 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Sep 16 04:25:29.731275 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 16 04:25:29.731388 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 16 04:25:29.736012 kernel: ACPI: bus type USB registered Sep 16 04:25:29.735387 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 16 04:25:29.738407 kernel: usbcore: registered new interface driver usbfs Sep 16 04:25:29.738442 kernel: usbcore: registered new interface driver hub Sep 16 04:25:29.738452 kernel: usbcore: registered new device driver usb Sep 16 04:25:29.739023 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Sep 16 04:25:29.752848 kernel: sr 0:0:0:0: Power-on or device reset occurred Sep 16 04:25:29.753957 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray Sep 16 04:25:29.754107 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Sep 16 04:25:29.757783 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0 Sep 16 04:25:29.758017 kernel: sd 0:0:0:1: Power-on or device reset occurred Sep 16 04:25:29.758122 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB) Sep 16 04:25:29.758821 kernel: sd 0:0:0:1: [sda] Write Protect is off Sep 16 04:25:29.758967 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08 Sep 16 04:25:29.759048 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Sep 16 04:25:29.769068 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 16 04:25:29.769123 kernel: GPT:17805311 != 80003071 Sep 16 04:25:29.769135 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 16 04:25:29.769146 kernel: GPT:17805311 != 80003071 Sep 16 04:25:29.769798 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 16 04:25:29.769812 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 16 04:25:29.772782 kernel: sd 0:0:0:1: [sda] Attached SCSI disk Sep 16 04:25:29.776811 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 16 04:25:29.780025 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Sep 16 04:25:29.780224 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Sep 16 04:25:29.782781 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Sep 16 04:25:29.784436 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Sep 16 04:25:29.784591 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Sep 16 04:25:29.784678 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Sep 16 04:25:29.786801 kernel: hub 1-0:1.0: USB hub found Sep 16 04:25:29.787795 kernel: hub 1-0:1.0: 4 ports detected Sep 16 04:25:29.789780 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Sep 16 04:25:29.792089 kernel: hub 2-0:1.0: USB hub found Sep 16 04:25:29.792282 kernel: hub 2-0:1.0: 4 ports detected Sep 16 04:25:29.841982 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Sep 16 04:25:29.849963 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Sep 16 04:25:29.852083 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Sep 16 04:25:29.865072 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Sep 16 04:25:29.875011 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Sep 16 04:25:29.880517 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 16 04:25:29.882249 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 16 04:25:29.884252 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 16 04:25:29.885828 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 16 04:25:29.886506 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 16 04:25:29.891029 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 16 04:25:29.896880 disk-uuid[598]: Primary Header is updated. 
Sep 16 04:25:29.896880 disk-uuid[598]: Secondary Entries is updated. Sep 16 04:25:29.896880 disk-uuid[598]: Secondary Header is updated. Sep 16 04:25:29.907772 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 16 04:25:29.918855 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 16 04:25:29.922807 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 16 04:25:30.030795 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Sep 16 04:25:30.164974 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1 Sep 16 04:25:30.165066 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Sep 16 04:25:30.165398 kernel: usbcore: registered new interface driver usbhid Sep 16 04:25:30.165426 kernel: usbhid: USB HID core driver Sep 16 04:25:30.269850 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd Sep 16 04:25:30.397822 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2 Sep 16 04:25:30.449819 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0 Sep 16 04:25:30.925853 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 16 04:25:30.925923 disk-uuid[599]: The operation has completed successfully. Sep 16 04:25:30.982866 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 16 04:25:30.984401 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 16 04:25:31.007953 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 16 04:25:31.030441 sh[623]: Success Sep 16 04:25:31.047233 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 16 04:25:31.047317 kernel: device-mapper: uevent: version 1.0.3 Sep 16 04:25:31.047345 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Sep 16 04:25:31.057799 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Sep 16 04:25:31.109561 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 16 04:25:31.113382 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 16 04:25:31.126058 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 16 04:25:31.136870 kernel: BTRFS: device fsid 782b6948-7aaa-439e-9946-c8fdb4d8f287 devid 1 transid 37 /dev/mapper/usr (254:0) scanned by mount (635) Sep 16 04:25:31.140589 kernel: BTRFS info (device dm-0): first mount of filesystem 782b6948-7aaa-439e-9946-c8fdb4d8f287 Sep 16 04:25:31.140668 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Sep 16 04:25:31.148824 kernel: BTRFS info (device dm-0): enabling ssd optimizations Sep 16 04:25:31.148912 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 16 04:25:31.148927 kernel: BTRFS info (device dm-0): enabling free space tree Sep 16 04:25:31.151321 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 16 04:25:31.152069 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Sep 16 04:25:31.153081 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. 
Sep 16 04:25:31.153902 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 16 04:25:31.157673 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 16 04:25:31.190778 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (664) Sep 16 04:25:31.193801 kernel: BTRFS info (device sda6): first mount of filesystem a546938e-7af2-44ea-b88d-218d567c463b Sep 16 04:25:31.193920 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Sep 16 04:25:31.199032 kernel: BTRFS info (device sda6): enabling ssd optimizations Sep 16 04:25:31.199094 kernel: BTRFS info (device sda6): turning on async discard Sep 16 04:25:31.199105 kernel: BTRFS info (device sda6): enabling free space tree Sep 16 04:25:31.205042 kernel: BTRFS info (device sda6): last unmount of filesystem a546938e-7af2-44ea-b88d-218d567c463b Sep 16 04:25:31.207459 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 16 04:25:31.210106 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 16 04:25:31.319688 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 16 04:25:31.330566 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 16 04:25:31.365270 ignition[719]: Ignition 2.22.0 Sep 16 04:25:31.365286 ignition[719]: Stage: fetch-offline Sep 16 04:25:31.365320 ignition[719]: no configs at "/usr/lib/ignition/base.d" Sep 16 04:25:31.365327 ignition[719]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Sep 16 04:25:31.365405 ignition[719]: parsed url from cmdline: "" Sep 16 04:25:31.365408 ignition[719]: no config URL provided Sep 16 04:25:31.365413 ignition[719]: reading system config file "/usr/lib/ignition/user.ign" Sep 16 04:25:31.369074 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 16 04:25:31.365420 ignition[719]: no config at "/usr/lib/ignition/user.ign" Sep 16 04:25:31.365424 ignition[719]: failed to fetch config: resource requires networking Sep 16 04:25:31.365671 ignition[719]: Ignition finished successfully Sep 16 04:25:31.377215 systemd-networkd[811]: lo: Link UP Sep 16 04:25:31.377226 systemd-networkd[811]: lo: Gained carrier Sep 16 04:25:31.378724 systemd-networkd[811]: Enumeration completed Sep 16 04:25:31.378912 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 16 04:25:31.379570 systemd[1]: Reached target network.target - Network. Sep 16 04:25:31.381223 systemd-networkd[811]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 16 04:25:31.381226 systemd-networkd[811]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 16 04:25:31.381608 systemd-networkd[811]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 16 04:25:31.381611 systemd-networkd[811]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 16 04:25:31.382091 systemd-networkd[811]: eth0: Link UP Sep 16 04:25:31.382173 systemd-networkd[811]: eth1: Link UP Sep 16 04:25:31.382601 systemd-networkd[811]: eth0: Gained carrier Sep 16 04:25:31.382610 systemd-networkd[811]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Sep 16 04:25:31.384228 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Sep 16 04:25:31.387337 systemd-networkd[811]: eth1: Gained carrier Sep 16 04:25:31.387357 systemd-networkd[811]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 16 04:25:31.414947 systemd-networkd[811]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1 Sep 16 04:25:31.415840 ignition[816]: Ignition 2.22.0 Sep 16 04:25:31.415847 ignition[816]: Stage: fetch Sep 16 04:25:31.416028 ignition[816]: no configs at "/usr/lib/ignition/base.d" Sep 16 04:25:31.416788 ignition[816]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Sep 16 04:25:31.416893 ignition[816]: parsed url from cmdline: "" Sep 16 04:25:31.416897 ignition[816]: no config URL provided Sep 16 04:25:31.416904 ignition[816]: reading system config file "/usr/lib/ignition/user.ign" Sep 16 04:25:31.416914 ignition[816]: no config at "/usr/lib/ignition/user.ign" Sep 16 04:25:31.416942 ignition[816]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1 Sep 16 04:25:31.417296 ignition[816]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable Sep 16 04:25:31.438898 systemd-networkd[811]: eth0: DHCPv4 address 138.199.234.3/32, gateway 172.31.1.1 acquired from 172.31.1.1 Sep 16 04:25:31.618112 ignition[816]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2 Sep 16 04:25:31.623446 ignition[816]: GET result: OK Sep 16 04:25:31.623583 ignition[816]: parsing config with SHA512: 3323a6b588ae40dfb6e4ded3e3021e0ce484201fa7df6afef026a66558538a54fb13eff1ad62d738c87330e2d15ea863925ede867b50fd2b3685252efcc733c8 Sep 16 04:25:31.628980 unknown[816]: fetched base config from "system" Sep 16 04:25:31.629317 ignition[816]: fetch: fetch complete Sep 16 04:25:31.628993 unknown[816]: fetched base config from "system" Sep 16 04:25:31.629322 ignition[816]: fetch: fetch passed Sep 16 04:25:31.628998 unknown[816]: fetched user config from "hetzner" Sep 16 04:25:31.629370 ignition[816]: Ignition finished successfully Sep 16 04:25:31.631162 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Sep 16 04:25:31.633861 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 16 04:25:31.673284 ignition[824]: Ignition 2.22.0 Sep 16 04:25:31.673302 ignition[824]: Stage: kargs Sep 16 04:25:31.673439 ignition[824]: no configs at "/usr/lib/ignition/base.d" Sep 16 04:25:31.673448 ignition[824]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Sep 16 04:25:31.678074 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 16 04:25:31.674372 ignition[824]: kargs: kargs passed Sep 16 04:25:31.674422 ignition[824]: Ignition finished successfully Sep 16 04:25:31.681737 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Sep 16 04:25:31.712720 ignition[831]: Ignition 2.22.0 Sep 16 04:25:31.712739 ignition[831]: Stage: disks Sep 16 04:25:31.712907 ignition[831]: no configs at "/usr/lib/ignition/base.d" Sep 16 04:25:31.712917 ignition[831]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Sep 16 04:25:31.713661 ignition[831]: disks: disks passed Sep 16 04:25:31.713707 ignition[831]: Ignition finished successfully Sep 16 04:25:31.717038 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 16 04:25:31.717967 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. 
Sep 16 04:25:31.718959 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 16 04:25:31.720214 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 16 04:25:31.721936 systemd[1]: Reached target sysinit.target - System Initialization. Sep 16 04:25:31.723411 systemd[1]: Reached target basic.target - Basic System. Sep 16 04:25:31.724871 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 16 04:25:31.756616 systemd-fsck[840]: ROOT: clean, 15/1628000 files, 120826/1617920 blocks Sep 16 04:25:31.761096 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 16 04:25:31.763430 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 16 04:25:31.847799 kernel: EXT4-fs (sda9): mounted filesystem a00d22d9-68b1-4a84-acfc-9fae1fca53dd r/w with ordered data mode. Quota mode: none. Sep 16 04:25:31.848277 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 16 04:25:31.849954 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 16 04:25:31.853205 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 16 04:25:31.855365 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 16 04:25:31.859949 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Sep 16 04:25:31.860582 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 16 04:25:31.860613 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 16 04:25:31.881171 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 16 04:25:31.882516 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 16 04:25:31.904131 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (848) Sep 16 04:25:31.904210 kernel: BTRFS info (device sda6): first mount of filesystem a546938e-7af2-44ea-b88d-218d567c463b Sep 16 04:25:31.905478 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Sep 16 04:25:31.917827 kernel: BTRFS info (device sda6): enabling ssd optimizations Sep 16 04:25:31.917929 kernel: BTRFS info (device sda6): turning on async discard Sep 16 04:25:31.917958 kernel: BTRFS info (device sda6): enabling free space tree Sep 16 04:25:31.923024 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 16 04:25:31.935431 initrd-setup-root[875]: cut: /sysroot/etc/passwd: No such file or directory Sep 16 04:25:31.940285 coreos-metadata[850]: Sep 16 04:25:31.940 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1 Sep 16 04:25:31.944087 coreos-metadata[850]: Sep 16 04:25:31.943 INFO Fetch successful Sep 16 04:25:31.944677 coreos-metadata[850]: Sep 16 04:25:31.944 INFO wrote hostname ci-4459-0-0-n-0223e12d7a to /sysroot/etc/hostname Sep 16 04:25:31.946698 initrd-setup-root[882]: cut: /sysroot/etc/group: No such file or directory Sep 16 04:25:31.951147 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Sep 16 04:25:31.955853 initrd-setup-root[890]: cut: /sysroot/etc/shadow: No such file or directory Sep 16 04:25:31.961891 initrd-setup-root[897]: cut: /sysroot/etc/gshadow: No such file or directory Sep 16 04:25:32.054571 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. 
Sep 16 04:25:32.056469 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 16 04:25:32.057714 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 16 04:25:32.080773 kernel: BTRFS info (device sda6): last unmount of filesystem a546938e-7af2-44ea-b88d-218d567c463b Sep 16 04:25:32.098875 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 16 04:25:32.117795 ignition[964]: INFO : Ignition 2.22.0 Sep 16 04:25:32.117795 ignition[964]: INFO : Stage: mount Sep 16 04:25:32.117795 ignition[964]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 16 04:25:32.117795 ignition[964]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Sep 16 04:25:32.121082 ignition[964]: INFO : mount: mount passed Sep 16 04:25:32.121082 ignition[964]: INFO : Ignition finished successfully Sep 16 04:25:32.123068 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 16 04:25:32.126183 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 16 04:25:32.139418 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 16 04:25:32.146998 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 16 04:25:32.169470 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (976) Sep 16 04:25:32.169548 kernel: BTRFS info (device sda6): first mount of filesystem a546938e-7af2-44ea-b88d-218d567c463b Sep 16 04:25:32.169573 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Sep 16 04:25:32.174044 kernel: BTRFS info (device sda6): enabling ssd optimizations Sep 16 04:25:32.174103 kernel: BTRFS info (device sda6): turning on async discard Sep 16 04:25:32.174128 kernel: BTRFS info (device sda6): enabling free space tree Sep 16 04:25:32.177192 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Sep 16 04:25:32.213834 ignition[993]: INFO : Ignition 2.22.0 Sep 16 04:25:32.213834 ignition[993]: INFO : Stage: files Sep 16 04:25:32.214917 ignition[993]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 16 04:25:32.214917 ignition[993]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Sep 16 04:25:32.214917 ignition[993]: DEBUG : files: compiled without relabeling support, skipping Sep 16 04:25:32.217275 ignition[993]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 16 04:25:32.217275 ignition[993]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 16 04:25:32.219524 ignition[993]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 16 04:25:32.220429 ignition[993]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 16 04:25:32.221320 unknown[993]: wrote ssh authorized keys file for user: core Sep 16 04:25:32.222383 ignition[993]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 16 04:25:32.224399 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Sep 16 04:25:32.224399 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1 Sep 16 04:25:32.320175 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 16 04:25:32.603035 systemd-networkd[811]: eth1: Gained IPv6LL Sep 16 04:25:32.604667 systemd-networkd[811]: eth0: Gained IPv6LL Sep 16 04:25:33.031293 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Sep 16 04:25:33.031293 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 16 04:25:33.031293 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Sep 16 04:25:33.238501 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Sep 16 04:25:33.312094 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 16 04:25:33.312094 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Sep 16 04:25:33.316624 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Sep 16 04:25:33.316624 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 16 04:25:33.316624 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 16 04:25:33.316624 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 16 04:25:33.316624 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 16 04:25:33.316624 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 16 04:25:33.316624 ignition[993]: INFO : files: 
createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 16 04:25:33.316624 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 16 04:25:33.325053 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 16 04:25:33.325053 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Sep 16 04:25:33.325053 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Sep 16 04:25:33.325053 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Sep 16 04:25:33.325053 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1 Sep 16 04:25:33.554643 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Sep 16 04:25:33.763894 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Sep 16 04:25:33.765255 ignition[993]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Sep 16 04:25:33.767080 ignition[993]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 16 04:25:33.769766 ignition[993]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 16 04:25:33.771991 ignition[993]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Sep 16 04:25:33.771991 ignition[993]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Sep 16 04:25:33.771991 ignition[993]: INFO : files: op(e): op(f): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Sep 16 04:25:33.771991 ignition[993]: INFO : files: op(e): op(f): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Sep 16 04:25:33.771991 ignition[993]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Sep 16 04:25:33.771991 ignition[993]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Sep 16 04:25:33.771991 ignition[993]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Sep 16 04:25:33.771991 ignition[993]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 16 04:25:33.771991 ignition[993]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 16 04:25:33.771991 ignition[993]: INFO : files: files passed Sep 16 04:25:33.771991 ignition[993]: INFO : Ignition finished successfully Sep 16 04:25:33.773018 systemd[1]: Finished ignition-files.service - Ignition (files). 
Sep 16 04:25:33.775252 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 16 04:25:33.781385 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 16 04:25:33.803704 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 16 04:25:33.803831 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 16 04:25:33.810357 initrd-setup-root-after-ignition[1023]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 16 04:25:33.810357 initrd-setup-root-after-ignition[1023]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 16 04:25:33.813749 initrd-setup-root-after-ignition[1027]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 16 04:25:33.815957 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 16 04:25:33.817982 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 16 04:25:33.820407 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 16 04:25:33.879052 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 16 04:25:33.879197 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 16 04:25:33.881026 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 16 04:25:33.882218 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 16 04:25:33.883350 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 16 04:25:33.884204 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 16 04:25:33.925495 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 16 04:25:33.928356 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 16 04:25:33.946338 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 16 04:25:33.948108 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 16 04:25:33.949621 systemd[1]: Stopped target timers.target - Timer Units. Sep 16 04:25:33.950428 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 16 04:25:33.950679 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 16 04:25:33.953112 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 16 04:25:33.953826 systemd[1]: Stopped target basic.target - Basic System. Sep 16 04:25:33.954856 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 16 04:25:33.957797 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 16 04:25:33.959428 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 16 04:25:33.960601 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Sep 16 04:25:33.961892 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 16 04:25:33.963106 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 16 04:25:33.964357 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 16 04:25:33.965470 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 16 04:25:33.966381 systemd[1]: Stopped target swap.target - Swaps. 
Sep 16 04:25:33.967197 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 16 04:25:33.967378 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 16 04:25:33.968580 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 16 04:25:33.969671 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 16 04:25:33.970719 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 16 04:25:33.970887 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 16 04:25:33.972052 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 16 04:25:33.972222 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 16 04:25:33.973663 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 16 04:25:33.973850 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 16 04:25:33.974918 systemd[1]: ignition-files.service: Deactivated successfully. Sep 16 04:25:33.975064 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 16 04:25:33.975902 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Sep 16 04:25:33.976043 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Sep 16 04:25:33.979147 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 16 04:25:33.979810 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 16 04:25:33.981938 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 16 04:25:33.985375 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 16 04:25:33.985882 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 16 04:25:33.986051 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 16 04:25:33.986903 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 16 04:25:33.987371 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 16 04:25:33.993511 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 16 04:25:33.996123 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 16 04:25:34.009709 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 16 04:25:34.013289 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 16 04:25:34.014033 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 16 04:25:34.018456 ignition[1047]: INFO : Ignition 2.22.0 Sep 16 04:25:34.018456 ignition[1047]: INFO : Stage: umount Sep 16 04:25:34.018456 ignition[1047]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 16 04:25:34.018456 ignition[1047]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Sep 16 04:25:34.018456 ignition[1047]: INFO : umount: umount passed Sep 16 04:25:34.018456 ignition[1047]: INFO : Ignition finished successfully Sep 16 04:25:34.020294 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 16 04:25:34.022030 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 16 04:25:34.023816 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 16 04:25:34.023949 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 16 04:25:34.024705 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 16 04:25:34.024799 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). 
Sep 16 04:25:34.025791 systemd[1]: ignition-fetch.service: Deactivated successfully. Sep 16 04:25:34.025853 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Sep 16 04:25:34.026886 systemd[1]: Stopped target network.target - Network. Sep 16 04:25:34.027707 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 16 04:25:34.027795 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 16 04:25:34.028852 systemd[1]: Stopped target paths.target - Path Units. Sep 16 04:25:34.030375 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 16 04:25:34.033861 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 16 04:25:34.034668 systemd[1]: Stopped target slices.target - Slice Units. Sep 16 04:25:34.036046 systemd[1]: Stopped target sockets.target - Socket Units. Sep 16 04:25:34.037198 systemd[1]: iscsid.socket: Deactivated successfully. Sep 16 04:25:34.037262 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 16 04:25:34.038235 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 16 04:25:34.038282 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 16 04:25:34.039427 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 16 04:25:34.039503 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 16 04:25:34.040672 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 16 04:25:34.040729 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 16 04:25:34.041779 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 16 04:25:34.041877 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 16 04:25:34.043022 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 16 04:25:34.043957 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 16 04:25:34.049719 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 16 04:25:34.050568 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 16 04:25:34.053889 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Sep 16 04:25:34.055212 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 16 04:25:34.055914 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 16 04:25:34.058592 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Sep 16 04:25:34.062054 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 16 04:25:34.063264 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 16 04:25:34.066522 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Sep 16 04:25:34.067563 systemd[1]: Stopped target network-pre.target - Preparation for Network. Sep 16 04:25:34.068424 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 16 04:25:34.068488 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 16 04:25:34.071410 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 16 04:25:34.073894 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 16 04:25:34.073982 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 16 04:25:34.075536 systemd[1]: systemd-sysctl.service: Deactivated successfully. 
Sep 16 04:25:34.075579 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 16 04:25:34.081542 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 16 04:25:34.081599 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 16 04:25:34.082799 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 16 04:25:34.086528 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 16 04:25:34.097930 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 16 04:25:34.100259 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 16 04:25:34.103283 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 16 04:25:34.103332 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 16 04:25:34.104516 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 16 04:25:34.104557 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 16 04:25:34.106469 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 16 04:25:34.106536 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 16 04:25:34.109146 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 16 04:25:34.109208 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 16 04:25:34.110855 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 16 04:25:34.110910 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 16 04:25:34.113554 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 16 04:25:34.115732 systemd[1]: systemd-network-generator.service: Deactivated successfully. Sep 16 04:25:34.115812 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Sep 16 04:25:34.120517 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 16 04:25:34.120578 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 16 04:25:34.124000 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 16 04:25:34.124054 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 16 04:25:34.128500 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 16 04:25:34.128626 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 16 04:25:34.134949 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 16 04:25:34.135064 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 16 04:25:34.136601 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 16 04:25:34.140570 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 16 04:25:34.173035 systemd[1]: Switching root. Sep 16 04:25:34.218855 systemd-journald[245]: Received SIGTERM from PID 1 (systemd). 
Sep 16 04:25:34.218962 systemd-journald[245]: Journal stopped Sep 16 04:25:35.181216 kernel: SELinux: policy capability network_peer_controls=1 Sep 16 04:25:35.181428 kernel: SELinux: policy capability open_perms=1 Sep 16 04:25:35.181449 kernel: SELinux: policy capability extended_socket_class=1 Sep 16 04:25:35.181461 kernel: SELinux: policy capability always_check_network=0 Sep 16 04:25:35.181470 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 16 04:25:35.181482 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 16 04:25:35.181491 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 16 04:25:35.181503 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 16 04:25:35.181515 kernel: SELinux: policy capability userspace_initial_context=0 Sep 16 04:25:35.182925 kernel: audit: type=1403 audit(1757996734.390:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 16 04:25:35.182951 systemd[1]: Successfully loaded SELinux policy in 54.168ms. Sep 16 04:25:35.182970 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 6.855ms. Sep 16 04:25:35.182981 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 16 04:25:35.182992 systemd[1]: Detected virtualization kvm. Sep 16 04:25:35.183005 systemd[1]: Detected architecture arm64. Sep 16 04:25:35.183015 systemd[1]: Detected first boot. Sep 16 04:25:35.183025 systemd[1]: Hostname set to <ci-4459-0-0-n-0223e12d7a>. Sep 16 04:25:35.183034 systemd[1]: Initializing machine ID from VM UUID. Sep 16 04:25:35.183045 zram_generator::config[1090]: No configuration found. Sep 16 04:25:35.183055 kernel: NET: Registered PF_VSOCK protocol family Sep 16 04:25:35.183067 systemd[1]: Populated /etc with preset unit settings. Sep 16 04:25:35.183080 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Sep 16 04:25:35.183090 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 16 04:25:35.183100 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 16 04:25:35.183110 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 16 04:25:35.183120 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 16 04:25:35.183131 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 16 04:25:35.183143 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 16 04:25:35.183152 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 16 04:25:35.183162 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 16 04:25:35.183172 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 16 04:25:35.183181 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 16 04:25:35.183191 systemd[1]: Created slice user.slice - User and Session Slice. Sep 16 04:25:35.183201 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 16 04:25:35.183212 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Sep 16 04:25:35.183222 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 16 04:25:35.183233 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 16 04:25:35.183243 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 16 04:25:35.183253 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 16 04:25:35.183263 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Sep 16 04:25:35.183274 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 16 04:25:35.183284 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 16 04:25:35.183295 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 16 04:25:35.183305 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 16 04:25:35.183315 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 16 04:25:35.183326 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 16 04:25:35.183335 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 16 04:25:35.183349 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 16 04:25:35.183359 systemd[1]: Reached target slices.target - Slice Units. Sep 16 04:25:35.183369 systemd[1]: Reached target swap.target - Swaps. Sep 16 04:25:35.183378 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 16 04:25:35.183390 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 16 04:25:35.183400 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Sep 16 04:25:35.183410 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 16 04:25:35.183419 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 16 04:25:35.183429 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 16 04:25:35.183439 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 16 04:25:35.183449 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 16 04:25:35.183461 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 16 04:25:35.183471 systemd[1]: Mounting media.mount - External Media Directory... Sep 16 04:25:35.183482 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 16 04:25:35.183491 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 16 04:25:35.183501 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 16 04:25:35.183512 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 16 04:25:35.183522 systemd[1]: Reached target machines.target - Containers. Sep 16 04:25:35.183531 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 16 04:25:35.183541 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 16 04:25:35.183551 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 16 04:25:35.183561 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... 
Sep 16 04:25:35.183573 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 16 04:25:35.183583 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 16 04:25:35.183592 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 16 04:25:35.183602 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 16 04:25:35.183612 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 16 04:25:35.183622 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 16 04:25:35.183634 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 16 04:25:35.183646 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 16 04:25:35.183655 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 16 04:25:35.183665 systemd[1]: Stopped systemd-fsck-usr.service. Sep 16 04:25:35.183676 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 16 04:25:35.183686 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 16 04:25:35.183695 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 16 04:25:35.183707 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 16 04:25:35.183717 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 16 04:25:35.183727 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Sep 16 04:25:35.183738 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 16 04:25:35.183747 systemd[1]: verity-setup.service: Deactivated successfully. Sep 16 04:25:35.183803 systemd[1]: Stopped verity-setup.service. Sep 16 04:25:35.183854 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 16 04:25:35.183867 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 16 04:25:35.183885 systemd[1]: Mounted media.mount - External Media Directory. Sep 16 04:25:35.183900 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 16 04:25:35.183912 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 16 04:25:35.183925 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 16 04:25:35.183937 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 16 04:25:35.183949 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 16 04:25:35.183963 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 16 04:25:35.183973 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 16 04:25:35.183983 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 16 04:25:35.183993 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 16 04:25:35.184003 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 16 04:25:35.184013 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 16 04:25:35.184024 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. 
Sep 16 04:25:35.184034 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 16 04:25:35.184800 kernel: fuse: init (API version 7.41) Sep 16 04:25:35.184837 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 16 04:25:35.184891 systemd-journald[1161]: Collecting audit messages is disabled. Sep 16 04:25:35.184919 systemd-journald[1161]: Journal started Sep 16 04:25:35.184940 systemd-journald[1161]: Runtime Journal (/run/log/journal/2b2be05848b645f98b73bbf7373403fb) is 8M, max 76.5M, 68.5M free. Sep 16 04:25:34.902861 systemd[1]: Queued start job for default target multi-user.target. Sep 16 04:25:34.928431 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Sep 16 04:25:34.929109 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 16 04:25:35.190412 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 16 04:25:35.190466 systemd[1]: Started systemd-journald.service - Journal Service. Sep 16 04:25:35.192630 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 16 04:25:35.193316 kernel: loop: module loaded Sep 16 04:25:35.194105 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 16 04:25:35.194973 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 16 04:25:35.195728 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 16 04:25:35.208020 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 16 04:25:35.208220 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 16 04:25:35.210835 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 16 04:25:35.221006 kernel: ACPI: bus type drm_connector registered Sep 16 04:25:35.224594 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 16 04:25:35.225156 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 16 04:25:35.227277 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 16 04:25:35.227316 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 16 04:25:35.229973 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Sep 16 04:25:35.234953 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 16 04:25:35.235662 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 16 04:25:35.237951 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 16 04:25:35.248117 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 16 04:25:35.249117 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 16 04:25:35.251977 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 16 04:25:35.252679 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 16 04:25:35.257077 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 16 04:25:35.267410 systemd[1]: Starting systemd-sysusers.service - Create System Users... 
Sep 16 04:25:35.271233 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Sep 16 04:25:35.272546 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 16 04:25:35.292471 systemd-journald[1161]: Time spent on flushing to /var/log/journal/2b2be05848b645f98b73bbf7373403fb is 41.712ms for 1173 entries. Sep 16 04:25:35.292471 systemd-journald[1161]: System Journal (/var/log/journal/2b2be05848b645f98b73bbf7373403fb) is 8M, max 584.8M, 576.8M free. Sep 16 04:25:35.351708 systemd-journald[1161]: Received client request to flush runtime journal. Sep 16 04:25:35.351792 kernel: loop0: detected capacity change from 0 to 119368 Sep 16 04:25:35.351898 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 16 04:25:35.301863 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 16 04:25:35.304039 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 16 04:25:35.308205 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Sep 16 04:25:35.356081 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 16 04:25:35.361881 kernel: loop1: detected capacity change from 0 to 207008 Sep 16 04:25:35.375999 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Sep 16 04:25:35.377906 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 16 04:25:35.392713 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 16 04:25:35.396398 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 16 04:25:35.408009 kernel: loop2: detected capacity change from 0 to 8 Sep 16 04:25:35.425804 kernel: loop3: detected capacity change from 0 to 100632 Sep 16 04:25:35.435372 systemd-tmpfiles[1226]: ACLs are not supported, ignoring. Sep 16 04:25:35.435617 systemd-tmpfiles[1226]: ACLs are not supported, ignoring. Sep 16 04:25:35.444303 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 16 04:25:35.468905 kernel: loop4: detected capacity change from 0 to 119368 Sep 16 04:25:35.489831 kernel: loop5: detected capacity change from 0 to 207008 Sep 16 04:25:35.512612 kernel: loop6: detected capacity change from 0 to 8 Sep 16 04:25:35.515819 kernel: loop7: detected capacity change from 0 to 100632 Sep 16 04:25:35.534511 (sd-merge)[1231]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'. Sep 16 04:25:35.535971 (sd-merge)[1231]: Merged extensions into '/usr'. Sep 16 04:25:35.543653 systemd[1]: Reload requested from client PID 1211 ('systemd-sysext') (unit systemd-sysext.service)... Sep 16 04:25:35.543675 systemd[1]: Reloading... Sep 16 04:25:35.689796 zram_generator::config[1260]: No configuration found. Sep 16 04:25:35.792784 ldconfig[1207]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 16 04:25:35.896189 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 16 04:25:35.896294 systemd[1]: Reloading finished in 352 ms. Sep 16 04:25:35.917212 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 16 04:25:35.919821 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 16 04:25:35.930992 systemd[1]: Starting ensure-sysext.service... 
Sep 16 04:25:35.937318 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 16 04:25:35.947811 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 16 04:25:35.952618 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 16 04:25:35.958224 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 16 04:25:35.964329 systemd[1]: Reload requested from client PID 1294 ('systemctl') (unit ensure-sysext.service)... Sep 16 04:25:35.964342 systemd[1]: Reloading... Sep 16 04:25:35.964629 systemd-tmpfiles[1295]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Sep 16 04:25:35.964653 systemd-tmpfiles[1295]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Sep 16 04:25:35.965011 systemd-tmpfiles[1295]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 16 04:25:35.965203 systemd-tmpfiles[1295]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 16 04:25:35.966479 systemd-tmpfiles[1295]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 16 04:25:35.966828 systemd-tmpfiles[1295]: ACLs are not supported, ignoring. Sep 16 04:25:35.966982 systemd-tmpfiles[1295]: ACLs are not supported, ignoring. Sep 16 04:25:35.971351 systemd-tmpfiles[1295]: Detected autofs mount point /boot during canonicalization of boot. Sep 16 04:25:35.971593 systemd-tmpfiles[1295]: Skipping /boot Sep 16 04:25:35.982501 systemd-tmpfiles[1295]: Detected autofs mount point /boot during canonicalization of boot. Sep 16 04:25:35.982642 systemd-tmpfiles[1295]: Skipping /boot Sep 16 04:25:36.030914 systemd-udevd[1299]: Using default interface naming scheme 'v255'. Sep 16 04:25:36.064899 zram_generator::config[1324]: No configuration found. Sep 16 04:25:36.293983 systemd[1]: Reloading finished in 329 ms. Sep 16 04:25:36.314352 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 16 04:25:36.323860 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 16 04:25:36.324942 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 16 04:25:36.334499 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Sep 16 04:25:36.336785 kernel: mousedev: PS/2 mouse device common for all mice Sep 16 04:25:36.353924 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 16 04:25:36.356767 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 16 04:25:36.359791 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 16 04:25:36.363913 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 16 04:25:36.369010 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 16 04:25:36.370918 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 16 04:25:36.379373 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 16 04:25:36.381009 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 16 04:25:36.393142 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
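The "Duplicate line for path ..., ignoring" messages mean the same path is declared in more than one tmpfiles.d fragment; systemd-tmpfiles keeps the first declaration it reads and ignores the repeats. A small sketch that reproduces the check by scanning the tmpfiles.d directories for paths declared more than once (the path is the second whitespace-separated field of each non-comment line):

```python
# Sketch: find tmpfiles.d paths declared more than once, mirroring the
# "Duplicate line for path ..., ignoring" warnings above.
from collections import defaultdict
from pathlib import Path

TMPFILES_DIRS = ("/etc/tmpfiles.d", "/run/tmpfiles.d", "/usr/lib/tmpfiles.d")

def duplicate_paths() -> dict:
    seen = defaultdict(list)
    for d in map(Path, TMPFILES_DIRS):
        if not d.is_dir():
            continue
        for conf in sorted(d.glob("*.conf")):
            for line in conf.read_text().splitlines():
                line = line.strip()
                if not line or line.startswith("#"):
                    continue
                fields = line.split()
                if len(fields) >= 2:
                    seen[fields[1]].append(conf.name)  # field 2 is the path
    return {path: srcs for path, srcs in seen.items() if len(srcs) > 1}

if __name__ == "__main__":
    for path, sources in duplicate_paths().items():
        print(f"{path}: declared in {', '.join(sources)}")
```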
Sep 16 04:25:36.397129 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 16 04:25:36.398889 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 16 04:25:36.399012 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 16 04:25:36.406927 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 16 04:25:36.408841 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 16 04:25:36.409000 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 16 04:25:36.409084 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 16 04:25:36.412483 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 16 04:25:36.419112 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 16 04:25:36.420954 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 16 04:25:36.421083 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 16 04:25:36.425833 systemd[1]: Finished ensure-sysext.service. Sep 16 04:25:36.426839 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 16 04:25:36.439017 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 16 04:25:36.444402 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 16 04:25:36.479039 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 16 04:25:36.481435 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 16 04:25:36.486381 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 16 04:25:36.525107 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 16 04:25:36.528351 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 16 04:25:36.528866 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 16 04:25:36.536549 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 16 04:25:36.537878 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 16 04:25:36.539132 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 16 04:25:36.541341 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 16 04:25:36.542112 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Sep 16 04:25:36.543039 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 16 04:25:36.544053 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 16 04:25:36.551335 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 16 04:25:36.563281 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Sep 16 04:25:36.567963 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 16 04:25:36.569033 augenrules[1452]: No rules Sep 16 04:25:36.570165 systemd[1]: audit-rules.service: Deactivated successfully. Sep 16 04:25:36.570367 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 16 04:25:36.599470 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 16 04:25:36.602981 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 16 04:25:36.647933 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0 Sep 16 04:25:36.648021 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Sep 16 04:25:36.648039 kernel: [drm] features: -context_init Sep 16 04:25:36.651084 kernel: [drm] number of scanouts: 1 Sep 16 04:25:36.651196 kernel: [drm] number of cap sets: 0 Sep 16 04:25:36.659842 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. Sep 16 04:25:36.659968 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 16 04:25:36.662009 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 16 04:25:36.665782 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:01.0 on minor 0 Sep 16 04:25:36.668064 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 16 04:25:36.670277 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 16 04:25:36.672975 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 16 04:25:36.673026 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 16 04:25:36.673053 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 16 04:25:36.679022 kernel: Console: switching to colour frame buffer device 160x50 Sep 16 04:25:36.688021 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Sep 16 04:25:36.700395 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 16 04:25:36.709328 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 16 04:25:36.717664 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 16 04:25:36.718045 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 16 04:25:36.719118 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 16 04:25:36.719848 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Sep 16 04:25:36.720969 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 16 04:25:36.721016 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 16 04:25:36.729296 systemd-networkd[1409]: lo: Link UP Sep 16 04:25:36.729311 systemd-networkd[1409]: lo: Gained carrier Sep 16 04:25:36.733139 systemd-networkd[1409]: Enumeration completed Sep 16 04:25:36.733270 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 16 04:25:36.735269 systemd-networkd[1409]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 16 04:25:36.735294 systemd-networkd[1409]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 16 04:25:36.736213 systemd-networkd[1409]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 16 04:25:36.736224 systemd-networkd[1409]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 16 04:25:36.736582 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Sep 16 04:25:36.737451 systemd-networkd[1409]: eth0: Link UP Sep 16 04:25:36.737548 systemd-networkd[1409]: eth0: Gained carrier Sep 16 04:25:36.737563 systemd-networkd[1409]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 16 04:25:36.739960 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 16 04:25:36.744148 systemd-networkd[1409]: eth1: Link UP Sep 16 04:25:36.744844 systemd-networkd[1409]: eth1: Gained carrier Sep 16 04:25:36.744885 systemd-networkd[1409]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 16 04:25:36.794896 systemd-networkd[1409]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1 Sep 16 04:25:36.796251 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 16 04:25:36.797001 systemd[1]: Reached target time-set.target - System Time Set. Sep 16 04:25:36.806684 systemd-networkd[1409]: eth0: DHCPv4 address 138.199.234.3/32, gateway 172.31.1.1 acquired from 172.31.1.1 Sep 16 04:25:36.807426 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 16 04:25:36.811379 systemd-timesyncd[1426]: Network configuration changed, trying to establish connection. Sep 16 04:25:36.814582 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Sep 16 04:25:36.852420 systemd-resolved[1410]: Positive Trust Anchors: Sep 16 04:25:36.853543 systemd-resolved[1410]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 16 04:25:36.853585 systemd-resolved[1410]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 16 04:25:36.860040 systemd-resolved[1410]: Using system hostname 'ci-4459-0-0-n-0223e12d7a'. Sep 16 04:25:36.863769 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 16 04:25:36.864659 systemd[1]: Reached target network.target - Network. Sep 16 04:25:36.865921 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 16 04:25:36.872530 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 16 04:25:36.873892 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 16 04:25:36.878118 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 16 04:25:36.934642 systemd-timesyncd[1426]: Contacted time server 141.144.241.16:123 (0.flatcar.pool.ntp.org). Sep 16 04:25:36.934721 systemd-timesyncd[1426]: Initial clock synchronization to Tue 2025-09-16 04:25:37.035084 UTC. Sep 16 04:25:36.950878 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 16 04:25:36.953060 systemd[1]: Reached target sysinit.target - System Initialization. Sep 16 04:25:36.953852 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 16 04:25:36.954609 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 16 04:25:36.955665 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 16 04:25:36.956464 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 16 04:25:36.957282 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 16 04:25:36.958078 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 16 04:25:36.958193 systemd[1]: Reached target paths.target - Path Units. Sep 16 04:25:36.958728 systemd[1]: Reached target timers.target - Timer Units. Sep 16 04:25:36.960638 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 16 04:25:36.963025 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 16 04:25:36.965612 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Sep 16 04:25:36.966557 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Sep 16 04:25:36.967359 systemd[1]: Reached target ssh-access.target - SSH Access Available. Sep 16 04:25:36.971900 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 16 04:25:36.973135 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Sep 16 04:25:36.974855 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 16 04:25:36.975721 systemd[1]: Reached target sockets.target - Socket Units. 
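By this point networkd has brought up both NICs with Hetzner-style /32 leases (eth0 138.199.234.3 with gateway 172.31.1.1, eth1 10.0.0.3), resolved has loaded the root DNSSEC trust anchor and picked the hostname, and timesyncd has synchronized against 0.flatcar.pool.ntp.org. A quick sketch, using the standard systemd CLIs, for confirming that state after login:

```python
# Sketch: snapshot the network, DNS and time-sync state reported in the log above.
# Assumes networkctl, resolvectl and timedatectl, all of which ship with systemd.
import subprocess

def run(*cmd: str) -> str:
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

if __name__ == "__main__":
    print(run("networkctl", "list"))              # eth0/eth1 should be routable/configured
    print(run("resolvectl", "status"))            # per-link DNS servers and DNSSEC setting
    print(run("timedatectl", "timesync-status"))  # NTP server and poll interval used by timesyncd
```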
Sep 16 04:25:36.976596 systemd[1]: Reached target basic.target - Basic System. Sep 16 04:25:36.977347 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 16 04:25:36.977384 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 16 04:25:36.978585 systemd[1]: Starting containerd.service - containerd container runtime... Sep 16 04:25:36.982103 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Sep 16 04:25:36.987994 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 16 04:25:36.993563 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 16 04:25:36.997455 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 16 04:25:37.000023 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 16 04:25:37.000613 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 16 04:25:37.005642 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 16 04:25:37.012617 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 16 04:25:37.018408 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. Sep 16 04:25:37.023737 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 16 04:25:37.025407 jq[1509]: false Sep 16 04:25:37.030010 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 16 04:25:37.037073 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 16 04:25:37.041820 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 16 04:25:37.049525 extend-filesystems[1512]: Found /dev/sda6 Sep 16 04:25:37.051604 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 16 04:25:37.057053 systemd[1]: Starting update-engine.service - Update Engine... Sep 16 04:25:37.061247 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 16 04:25:37.062375 extend-filesystems[1512]: Found /dev/sda9 Sep 16 04:25:37.064389 coreos-metadata[1506]: Sep 16 04:25:37.064 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Sep 16 04:25:37.067814 coreos-metadata[1506]: Sep 16 04:25:37.067 INFO Fetch successful Sep 16 04:25:37.067814 coreos-metadata[1506]: Sep 16 04:25:37.067 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Sep 16 04:25:37.067850 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 16 04:25:37.072323 coreos-metadata[1506]: Sep 16 04:25:37.069 INFO Fetch successful Sep 16 04:25:37.070215 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 16 04:25:37.070411 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 16 04:25:37.080995 extend-filesystems[1512]: Checking size of /dev/sda9 Sep 16 04:25:37.092627 systemd[1]: motdgen.service: Deactivated successfully. Sep 16 04:25:37.092873 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
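coreos-metadata fetches the instance description from the link-local Hetzner endpoint shown in the log (two documents, both successful). The same two requests with only the standard library; the address is only reachable from inside the instance:

```python
# Sketch: fetch the Hetzner metadata documents that coreos-metadata retrieves above.
# 169.254.169.254 is link-local, so this only works on the instance itself.
import urllib.request

BASE = "http://169.254.169.254/hetzner/v1/metadata"

def fetch(path: str = "") -> str:
    with urllib.request.urlopen(BASE + path, timeout=5) as resp:
        return resp.read().decode()

if __name__ == "__main__":
    print(fetch())                     # instance metadata
    print(fetch("/private-networks"))  # the second document fetched in the log
```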
Sep 16 04:25:37.095281 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 16 04:25:37.095473 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 16 04:25:37.112273 jq[1530]: true Sep 16 04:25:37.133059 extend-filesystems[1512]: Resized partition /dev/sda9 Sep 16 04:25:37.145832 tar[1534]: linux-arm64/LICENSE Sep 16 04:25:37.145832 tar[1534]: linux-arm64/helm Sep 16 04:25:37.144446 (ntainerd)[1543]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 16 04:25:37.148381 extend-filesystems[1557]: resize2fs 1.47.3 (8-Jul-2025) Sep 16 04:25:37.155919 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Sep 16 04:25:37.168120 jq[1554]: true Sep 16 04:25:37.206345 dbus-daemon[1507]: [system] SELinux support is enabled Sep 16 04:25:37.206738 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 16 04:25:37.211620 update_engine[1529]: I20250916 04:25:37.211297 1529 main.cc:92] Flatcar Update Engine starting Sep 16 04:25:37.212060 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 16 04:25:37.212095 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 16 04:25:37.212855 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 16 04:25:37.213651 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 16 04:25:37.224508 systemd-logind[1522]: New seat seat0. Sep 16 04:25:37.226194 systemd-logind[1522]: Watching system buttons on /dev/input/event0 (Power Button) Sep 16 04:25:37.226508 systemd-logind[1522]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard) Sep 16 04:25:37.227283 systemd[1]: Started systemd-logind.service - User Login Management. Sep 16 04:25:37.244862 dbus-daemon[1507]: [system] Successfully activated service 'org.freedesktop.systemd1' Sep 16 04:25:37.254329 systemd[1]: Started update-engine.service - Update Engine. Sep 16 04:25:37.256350 update_engine[1529]: I20250916 04:25:37.254644 1529 update_check_scheduler.cc:74] Next update check in 11m0s Sep 16 04:25:37.285249 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 16 04:25:37.345150 bash[1578]: Updated "/home/core/.ssh/authorized_keys" Sep 16 04:25:37.347558 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 16 04:25:37.360185 systemd[1]: Starting sshkeys.service... Sep 16 04:25:37.382113 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Sep 16 04:25:37.385880 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 16 04:25:37.404822 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Sep 16 04:25:37.415392 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Sep 16 04:25:37.420988 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
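The online resize that starts here grows the root filesystem from 1617920 to 9393147 blocks of 4 KiB, i.e. from roughly 6.2 GiB to roughly 35.8 GiB. The arithmetic behind the numbers reported by resize2fs:

```python
# Sketch: convert the block counts logged by EXT4/resize2fs into sizes.
BLOCK = 4096  # the log reports 4k blocks

for label, blocks in (("before", 1_617_920), ("after", 9_393_147)):
    print(f"{label}: {blocks * BLOCK / 2**30:.1f} GiB")
# before: 6.2 GiB
# after: 35.8 GiB
```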
Sep 16 04:25:37.426787 extend-filesystems[1557]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Sep 16 04:25:37.426787 extend-filesystems[1557]: old_desc_blocks = 1, new_desc_blocks = 5 Sep 16 04:25:37.426787 extend-filesystems[1557]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. Sep 16 04:25:37.429673 extend-filesystems[1512]: Resized filesystem in /dev/sda9 Sep 16 04:25:37.430096 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 16 04:25:37.432622 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 16 04:25:37.501943 containerd[1543]: time="2025-09-16T04:25:37Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Sep 16 04:25:37.503083 containerd[1543]: time="2025-09-16T04:25:37.503001779Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Sep 16 04:25:37.520669 containerd[1543]: time="2025-09-16T04:25:37.520617505Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.153µs" Sep 16 04:25:37.520818 containerd[1543]: time="2025-09-16T04:25:37.520795272Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Sep 16 04:25:37.520941 containerd[1543]: time="2025-09-16T04:25:37.520918765Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Sep 16 04:25:37.521405 containerd[1543]: time="2025-09-16T04:25:37.521340399Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Sep 16 04:25:37.521544 containerd[1543]: time="2025-09-16T04:25:37.521522257Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Sep 16 04:25:37.521642 containerd[1543]: time="2025-09-16T04:25:37.521627199Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 16 04:25:37.521829 containerd[1543]: time="2025-09-16T04:25:37.521808490Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 16 04:25:37.521954 containerd[1543]: time="2025-09-16T04:25:37.521937370Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 16 04:25:37.522540 containerd[1543]: time="2025-09-16T04:25:37.522515588Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 16 04:25:37.522664 containerd[1543]: time="2025-09-16T04:25:37.522598254Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 16 04:25:37.522737 containerd[1543]: time="2025-09-16T04:25:37.522717859Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 16 04:25:37.523460 containerd[1543]: time="2025-09-16T04:25:37.522932564Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Sep 16 04:25:37.523460 containerd[1543]: 
time="2025-09-16T04:25:37.523053506Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Sep 16 04:25:37.523460 containerd[1543]: time="2025-09-16T04:25:37.523252132Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 16 04:25:37.523460 containerd[1543]: time="2025-09-16T04:25:37.523301181Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 16 04:25:37.523460 containerd[1543]: time="2025-09-16T04:25:37.523317341Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Sep 16 04:25:37.523460 containerd[1543]: time="2025-09-16T04:25:37.523348731Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Sep 16 04:25:37.527555 containerd[1543]: time="2025-09-16T04:25:37.527142184Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Sep 16 04:25:37.527555 containerd[1543]: time="2025-09-16T04:25:37.527268431Z" level=info msg="metadata content store policy set" policy=shared Sep 16 04:25:37.531885 coreos-metadata[1594]: Sep 16 04:25:37.531 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Sep 16 04:25:37.533270 containerd[1543]: time="2025-09-16T04:25:37.533179330Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Sep 16 04:25:37.533867 containerd[1543]: time="2025-09-16T04:25:37.533411330Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Sep 16 04:25:37.533867 containerd[1543]: time="2025-09-16T04:25:37.533437819Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Sep 16 04:25:37.533867 containerd[1543]: time="2025-09-16T04:25:37.533451914Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Sep 16 04:25:37.533867 containerd[1543]: time="2025-09-16T04:25:37.533478930Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Sep 16 04:25:37.533867 containerd[1543]: time="2025-09-16T04:25:37.533493632Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Sep 16 04:25:37.533867 containerd[1543]: time="2025-09-16T04:25:37.533507606Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Sep 16 04:25:37.533867 containerd[1543]: time="2025-09-16T04:25:37.533520242Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Sep 16 04:25:37.533867 containerd[1543]: time="2025-09-16T04:25:37.533539603Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Sep 16 04:25:37.533867 containerd[1543]: time="2025-09-16T04:25:37.533554832Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Sep 16 04:25:37.533867 containerd[1543]: time="2025-09-16T04:25:37.533564796Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Sep 16 04:25:37.533867 containerd[1543]: time="2025-09-16T04:25:37.533578040Z" level=info msg="loading plugin" 
id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Sep 16 04:25:37.535778 coreos-metadata[1594]: Sep 16 04:25:37.534 INFO Fetch successful Sep 16 04:25:37.536188 containerd[1543]: time="2025-09-16T04:25:37.536158555Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Sep 16 04:25:37.536276 containerd[1543]: time="2025-09-16T04:25:37.536262971Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Sep 16 04:25:37.536344 containerd[1543]: time="2025-09-16T04:25:37.536330084Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Sep 16 04:25:37.536526 containerd[1543]: time="2025-09-16T04:25:37.536506515Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Sep 16 04:25:37.536611 containerd[1543]: time="2025-09-16T04:25:37.536596755Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Sep 16 04:25:37.536692 containerd[1543]: time="2025-09-16T04:25:37.536678368Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Sep 16 04:25:37.536895 containerd[1543]: time="2025-09-16T04:25:37.536837828Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Sep 16 04:25:37.537032 containerd[1543]: time="2025-09-16T04:25:37.536971933Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Sep 16 04:25:37.537109 containerd[1543]: time="2025-09-16T04:25:37.537094130Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Sep 16 04:25:37.537353 containerd[1543]: time="2025-09-16T04:25:37.537173353Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Sep 16 04:25:37.538179 containerd[1543]: time="2025-09-16T04:25:37.538123995Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Sep 16 04:25:37.538367 locksmithd[1579]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 16 04:25:37.539777 containerd[1543]: time="2025-09-16T04:25:37.539082534Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Sep 16 04:25:37.539777 containerd[1543]: time="2025-09-16T04:25:37.539118946Z" level=info msg="Start snapshots syncer" Sep 16 04:25:37.539777 containerd[1543]: time="2025-09-16T04:25:37.539498255Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Sep 16 04:25:37.540017 unknown[1594]: wrote ssh authorized keys file for user: core Sep 16 04:25:37.540252 containerd[1543]: time="2025-09-16T04:25:37.540173072Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Sep 16 04:25:37.540613 containerd[1543]: time="2025-09-16T04:25:37.540235487Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Sep 16 04:25:37.542771 containerd[1543]: time="2025-09-16T04:25:37.540590535Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Sep 16 04:25:37.542771 containerd[1543]: time="2025-09-16T04:25:37.541057978Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Sep 16 04:25:37.542771 containerd[1543]: time="2025-09-16T04:25:37.541089084Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Sep 16 04:25:37.542771 containerd[1543]: time="2025-09-16T04:25:37.541101113Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Sep 16 04:25:37.542771 containerd[1543]: time="2025-09-16T04:25:37.541115694Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Sep 16 04:25:37.542771 containerd[1543]: time="2025-09-16T04:25:37.541137080Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Sep 16 04:25:37.542771 containerd[1543]: time="2025-09-16T04:25:37.541149798Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Sep 16 04:25:37.542771 containerd[1543]: time="2025-09-16T04:25:37.541162799Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Sep 16 04:25:37.542771 containerd[1543]: time="2025-09-16T04:25:37.541191353Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Sep 16 04:25:37.542771 containerd[1543]: 
time="2025-09-16T04:25:37.541203585Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Sep 16 04:25:37.542771 containerd[1543]: time="2025-09-16T04:25:37.541214319Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Sep 16 04:25:37.542771 containerd[1543]: time="2025-09-16T04:25:37.541248543Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 16 04:25:37.542771 containerd[1543]: time="2025-09-16T04:25:37.541263651Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 16 04:25:37.542771 containerd[1543]: time="2025-09-16T04:25:37.541273372Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 16 04:25:37.543028 containerd[1543]: time="2025-09-16T04:25:37.541283052Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 16 04:25:37.543028 containerd[1543]: time="2025-09-16T04:25:37.541291355Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Sep 16 04:25:37.543028 containerd[1543]: time="2025-09-16T04:25:37.541301440Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Sep 16 04:25:37.543028 containerd[1543]: time="2025-09-16T04:25:37.541319140Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Sep 16 04:25:37.543028 containerd[1543]: time="2025-09-16T04:25:37.541396905Z" level=info msg="runtime interface created" Sep 16 04:25:37.543028 containerd[1543]: time="2025-09-16T04:25:37.541401887Z" level=info msg="created NRI interface" Sep 16 04:25:37.543028 containerd[1543]: time="2025-09-16T04:25:37.541410514Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Sep 16 04:25:37.543028 containerd[1543]: time="2025-09-16T04:25:37.541423556Z" level=info msg="Connect containerd service" Sep 16 04:25:37.543028 containerd[1543]: time="2025-09-16T04:25:37.541451746Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 16 04:25:37.543028 containerd[1543]: time="2025-09-16T04:25:37.542906445Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 16 04:25:37.585840 update-ssh-keys[1603]: Updated "/home/core/.ssh/authorized_keys" Sep 16 04:25:37.587224 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Sep 16 04:25:37.591713 systemd[1]: Finished sshkeys.service. 
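The only error in the containerd start-up is the CRI plugin failing to find a CNI network config. With the paths from the dumped config (confDir /etc/cni/net.d, binDir /opt/cni/bin) that is expected on a node whose pod network add-on is not installed yet; it clears once something drops a conflist into that directory. A small sketch for checking that state:

```python
# Sketch: check whether the CNI paths named in the containerd CRI config are populated.
from pathlib import Path

CNI_CONF_DIR = Path("/etc/cni/net.d")  # confDir from the config dump above
CNI_BIN_DIR = Path("/opt/cni/bin")     # binDir from the config dump above

def cni_state() -> str:
    confs = sorted(p.name for p in CNI_CONF_DIR.glob("*.conf*")) if CNI_CONF_DIR.is_dir() else []
    plugins = sorted(p.name for p in CNI_BIN_DIR.iterdir()) if CNI_BIN_DIR.is_dir() else []
    if not confs:
        return "no CNI network config yet (matches the containerd warning)"
    return f"configs: {confs}; plugins: {plugins}"

if __name__ == "__main__":
    print(cni_state())
```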
Sep 16 04:25:37.715479 containerd[1543]: time="2025-09-16T04:25:37.715160326Z" level=info msg="Start subscribing containerd event" Sep 16 04:25:37.715479 containerd[1543]: time="2025-09-16T04:25:37.715419342Z" level=info msg="Start recovering state" Sep 16 04:25:37.716540 containerd[1543]: time="2025-09-16T04:25:37.716503318Z" level=info msg="Start event monitor" Sep 16 04:25:37.716582 containerd[1543]: time="2025-09-16T04:25:37.716549249Z" level=info msg="Start cni network conf syncer for default" Sep 16 04:25:37.716847 containerd[1543]: time="2025-09-16T04:25:37.716821022Z" level=info msg="Start streaming server" Sep 16 04:25:37.716879 containerd[1543]: time="2025-09-16T04:25:37.716851400Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Sep 16 04:25:37.716879 containerd[1543]: time="2025-09-16T04:25:37.716860675Z" level=info msg="runtime interface starting up..." Sep 16 04:25:37.716879 containerd[1543]: time="2025-09-16T04:25:37.716866791Z" level=info msg="starting plugins..." Sep 16 04:25:37.718784 containerd[1543]: time="2025-09-16T04:25:37.717236257Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Sep 16 04:25:37.718784 containerd[1543]: time="2025-09-16T04:25:37.717481704Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 16 04:25:37.718784 containerd[1543]: time="2025-09-16T04:25:37.717620062Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 16 04:25:37.718117 systemd[1]: Started containerd.service - containerd container runtime. Sep 16 04:25:37.721426 containerd[1543]: time="2025-09-16T04:25:37.721095770Z" level=info msg="containerd successfully booted in 0.219653s" Sep 16 04:25:37.787933 systemd-networkd[1409]: eth1: Gained IPv6LL Sep 16 04:25:37.795096 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 16 04:25:37.797604 systemd[1]: Reached target network-online.target - Network is Online. Sep 16 04:25:37.804279 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 16 04:25:37.809022 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 16 04:25:37.872414 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 16 04:25:37.894975 tar[1534]: linux-arm64/README.md Sep 16 04:25:37.914623 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 16 04:25:38.083940 sshd_keygen[1542]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 16 04:25:38.112284 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 16 04:25:38.116253 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 16 04:25:38.137461 systemd[1]: issuegen.service: Deactivated successfully. Sep 16 04:25:38.137725 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 16 04:25:38.140750 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 16 04:25:38.165613 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 16 04:25:38.170391 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 16 04:25:38.177267 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Sep 16 04:25:38.178101 systemd[1]: Reached target getty.target - Login Prompts. Sep 16 04:25:38.299073 systemd-networkd[1409]: eth0: Gained IPv6LL Sep 16 04:25:38.644921 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 16 04:25:38.647951 systemd[1]: Reached target multi-user.target - Multi-User System. 
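containerd reports itself booted and serving on /run/containerd/containerd.sock (plus the ttrpc socket) just before the kubelet is started. A liveness sketch that only checks the socket is accepting connections; real queries go over gRPC (for example via `ctr version`), and reaching the socket normally requires root:

```python
# Sketch: verify containerd is accepting connections on the socket path it logged.
import socket

SOCK = "/run/containerd/containerd.sock"

def containerd_reachable(path: str = SOCK) -> bool:
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    s.settimeout(2)
    try:
        s.connect(path)  # connecting is enough for a basic liveness check
        return True
    except OSError:
        return False
    finally:
        s.close()

if __name__ == "__main__":
    print("containerd socket reachable:", containerd_reachable())
```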
Sep 16 04:25:38.652202 systemd[1]: Startup finished in 2.321s (kernel) + 5.781s (initrd) + 4.314s (userspace) = 12.417s. Sep 16 04:25:38.656370 (kubelet)[1654]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 16 04:25:39.175926 kubelet[1654]: E0916 04:25:39.175850 1654 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 16 04:25:39.179363 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 16 04:25:39.179528 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 16 04:25:39.180082 systemd[1]: kubelet.service: Consumed 856ms CPU time, 255.5M memory peak. Sep 16 04:25:49.242863 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 16 04:25:49.245400 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 16 04:25:49.399732 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 16 04:25:49.420357 (kubelet)[1671]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 16 04:25:49.472404 kubelet[1671]: E0916 04:25:49.472319 1671 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 16 04:25:49.477030 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 16 04:25:49.477334 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 16 04:25:49.478186 systemd[1]: kubelet.service: Consumed 182ms CPU time, 108.6M memory peak. Sep 16 04:25:59.493394 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 16 04:25:59.497792 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 16 04:25:59.678108 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 16 04:25:59.691313 (kubelet)[1687]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 16 04:25:59.748156 kubelet[1687]: E0916 04:25:59.748024 1687 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 16 04:25:59.751558 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 16 04:25:59.751736 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 16 04:25:59.753864 systemd[1]: kubelet.service: Consumed 181ms CPU time, 106.4M memory peak. Sep 16 04:26:09.993196 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Sep 16 04:26:09.998629 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 16 04:26:10.178668 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
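The kubelet exits immediately because /var/lib/kubelet/config.yaml does not exist yet, and systemd keeps rescheduling it (the restart counter climbs to 5 over the following minutes); on a kubeadm-style bootstrap that file only appears once the node is initialized or joined. Purely to illustrate the file being looked for, a sketch with a hypothetical minimal KubeletConfiguration, not the configuration this node will eventually receive:

```python
# Sketch: the kind of file missing at /var/lib/kubelet/config.yaml.
# Illustrative only: on this node the real config is expected to arrive
# during cluster bootstrap (e.g. written by kubeadm), not by hand.
from pathlib import Path

MINIMAL_KUBELET_CONFIG = """\
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd  # matches SystemdCgroup=true in the containerd CRI config above
"""

def write_placeholder(path: str = "/var/lib/kubelet/config.yaml") -> None:
    p = Path(path)
    p.parent.mkdir(parents=True, exist_ok=True)
    p.write_text(MINIMAL_KUBELET_CONFIG)

if __name__ == "__main__":
    print(MINIMAL_KUBELET_CONFIG)
```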
Sep 16 04:26:10.199510 (kubelet)[1702]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 16 04:26:10.243043 kubelet[1702]: E0916 04:26:10.242960 1702 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 16 04:26:10.245804 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 16 04:26:10.245935 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 16 04:26:10.246558 systemd[1]: kubelet.service: Consumed 180ms CPU time, 106.9M memory peak. Sep 16 04:26:17.376994 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 16 04:26:17.379138 systemd[1]: Started sshd@0-138.199.234.3:22-139.178.89.65:43306.service - OpenSSH per-connection server daemon (139.178.89.65:43306). Sep 16 04:26:18.391216 sshd[1710]: Accepted publickey for core from 139.178.89.65 port 43306 ssh2: RSA SHA256:hnZQROmedaG+reQAaWvmG41QCRiTlF3QrQA4Qzar5jk Sep 16 04:26:18.394724 sshd-session[1710]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:26:18.404382 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 16 04:26:18.405309 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 16 04:26:18.413781 systemd-logind[1522]: New session 1 of user core. Sep 16 04:26:18.427738 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 16 04:26:18.431588 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 16 04:26:18.449930 (systemd)[1715]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 16 04:26:18.453239 systemd-logind[1522]: New session c1 of user core. Sep 16 04:26:18.595997 systemd[1715]: Queued start job for default target default.target. Sep 16 04:26:18.607131 systemd[1715]: Created slice app.slice - User Application Slice. Sep 16 04:26:18.607189 systemd[1715]: Reached target paths.target - Paths. Sep 16 04:26:18.607246 systemd[1715]: Reached target timers.target - Timers. Sep 16 04:26:18.609476 systemd[1715]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 16 04:26:18.646468 systemd[1715]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 16 04:26:18.647036 systemd[1715]: Reached target sockets.target - Sockets. Sep 16 04:26:18.647349 systemd[1715]: Reached target basic.target - Basic System. Sep 16 04:26:18.647630 systemd[1715]: Reached target default.target - Main User Target. Sep 16 04:26:18.647647 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 16 04:26:18.648275 systemd[1715]: Startup finished in 187ms. Sep 16 04:26:18.658125 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 16 04:26:19.354247 systemd[1]: Started sshd@1-138.199.234.3:22-139.178.89.65:43314.service - OpenSSH per-connection server daemon (139.178.89.65:43314). 
Sep 16 04:26:20.351488 sshd[1726]: Accepted publickey for core from 139.178.89.65 port 43314 ssh2: RSA SHA256:hnZQROmedaG+reQAaWvmG41QCRiTlF3QrQA4Qzar5jk Sep 16 04:26:20.353453 sshd-session[1726]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:26:20.354748 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Sep 16 04:26:20.359293 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 16 04:26:20.363183 systemd-logind[1522]: New session 2 of user core. Sep 16 04:26:20.366994 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 16 04:26:20.528564 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 16 04:26:20.544711 (kubelet)[1738]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 16 04:26:20.587467 kubelet[1738]: E0916 04:26:20.587411 1738 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 16 04:26:20.590275 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 16 04:26:20.590413 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 16 04:26:20.590953 systemd[1]: kubelet.service: Consumed 165ms CPU time, 105.5M memory peak. Sep 16 04:26:21.034564 sshd[1732]: Connection closed by 139.178.89.65 port 43314 Sep 16 04:26:21.035528 sshd-session[1726]: pam_unix(sshd:session): session closed for user core Sep 16 04:26:21.040678 systemd-logind[1522]: Session 2 logged out. Waiting for processes to exit. Sep 16 04:26:21.041176 systemd[1]: sshd@1-138.199.234.3:22-139.178.89.65:43314.service: Deactivated successfully. Sep 16 04:26:21.044000 systemd[1]: session-2.scope: Deactivated successfully. Sep 16 04:26:21.047136 systemd-logind[1522]: Removed session 2. Sep 16 04:26:21.207264 systemd[1]: Started sshd@2-138.199.234.3:22-139.178.89.65:38684.service - OpenSSH per-connection server daemon (139.178.89.65:38684). Sep 16 04:26:22.194196 sshd[1750]: Accepted publickey for core from 139.178.89.65 port 38684 ssh2: RSA SHA256:hnZQROmedaG+reQAaWvmG41QCRiTlF3QrQA4Qzar5jk Sep 16 04:26:22.196180 sshd-session[1750]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:26:22.201952 systemd-logind[1522]: New session 3 of user core. Sep 16 04:26:22.214435 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 16 04:26:22.235739 update_engine[1529]: I20250916 04:26:22.234885 1529 update_attempter.cc:509] Updating boot flags... Sep 16 04:26:22.867843 sshd[1753]: Connection closed by 139.178.89.65 port 38684 Sep 16 04:26:22.866453 sshd-session[1750]: pam_unix(sshd:session): session closed for user core Sep 16 04:26:22.872254 systemd[1]: sshd@2-138.199.234.3:22-139.178.89.65:38684.service: Deactivated successfully. Sep 16 04:26:22.874274 systemd[1]: session-3.scope: Deactivated successfully. Sep 16 04:26:22.878415 systemd-logind[1522]: Session 3 logged out. Waiting for processes to exit. Sep 16 04:26:22.879713 systemd-logind[1522]: Removed session 3. Sep 16 04:26:23.047127 systemd[1]: Started sshd@3-138.199.234.3:22-139.178.89.65:38698.service - OpenSSH per-connection server daemon (139.178.89.65:38698). 
Sep 16 04:26:24.070725 sshd[1779]: Accepted publickey for core from 139.178.89.65 port 38698 ssh2: RSA SHA256:hnZQROmedaG+reQAaWvmG41QCRiTlF3QrQA4Qzar5jk Sep 16 04:26:24.072847 sshd-session[1779]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:26:24.078380 systemd-logind[1522]: New session 4 of user core. Sep 16 04:26:24.087075 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 16 04:26:24.762375 sshd[1782]: Connection closed by 139.178.89.65 port 38698 Sep 16 04:26:24.761168 sshd-session[1779]: pam_unix(sshd:session): session closed for user core Sep 16 04:26:24.767147 systemd-logind[1522]: Session 4 logged out. Waiting for processes to exit. Sep 16 04:26:24.767829 systemd[1]: sshd@3-138.199.234.3:22-139.178.89.65:38698.service: Deactivated successfully. Sep 16 04:26:24.772089 systemd[1]: session-4.scope: Deactivated successfully. Sep 16 04:26:24.776126 systemd-logind[1522]: Removed session 4. Sep 16 04:26:24.938988 systemd[1]: Started sshd@4-138.199.234.3:22-139.178.89.65:38714.service - OpenSSH per-connection server daemon (139.178.89.65:38714). Sep 16 04:26:25.955187 sshd[1788]: Accepted publickey for core from 139.178.89.65 port 38714 ssh2: RSA SHA256:hnZQROmedaG+reQAaWvmG41QCRiTlF3QrQA4Qzar5jk Sep 16 04:26:25.957269 sshd-session[1788]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:26:25.963016 systemd-logind[1522]: New session 5 of user core. Sep 16 04:26:25.973242 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 16 04:26:26.494211 sudo[1792]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 16 04:26:26.494481 sudo[1792]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 16 04:26:26.509731 sudo[1792]: pam_unix(sudo:session): session closed for user root Sep 16 04:26:26.672172 sshd[1791]: Connection closed by 139.178.89.65 port 38714 Sep 16 04:26:26.673464 sshd-session[1788]: pam_unix(sshd:session): session closed for user core Sep 16 04:26:26.679934 systemd-logind[1522]: Session 5 logged out. Waiting for processes to exit. Sep 16 04:26:26.680727 systemd[1]: sshd@4-138.199.234.3:22-139.178.89.65:38714.service: Deactivated successfully. Sep 16 04:26:26.684322 systemd[1]: session-5.scope: Deactivated successfully. Sep 16 04:26:26.686541 systemd-logind[1522]: Removed session 5. Sep 16 04:26:26.840079 systemd[1]: Started sshd@5-138.199.234.3:22-139.178.89.65:38726.service - OpenSSH per-connection server daemon (139.178.89.65:38726). Sep 16 04:26:27.833190 sshd[1798]: Accepted publickey for core from 139.178.89.65 port 38726 ssh2: RSA SHA256:hnZQROmedaG+reQAaWvmG41QCRiTlF3QrQA4Qzar5jk Sep 16 04:26:27.835591 sshd-session[1798]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:26:27.843037 systemd-logind[1522]: New session 6 of user core. Sep 16 04:26:27.852134 systemd[1]: Started session-6.scope - Session 6 of User core. 
Sep 16 04:26:28.352596 sudo[1803]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 16 04:26:28.352973 sudo[1803]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 16 04:26:28.359322 sudo[1803]: pam_unix(sudo:session): session closed for user root Sep 16 04:26:28.368868 sudo[1802]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 16 04:26:28.369171 sudo[1802]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 16 04:26:28.382289 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 16 04:26:28.436834 augenrules[1825]: No rules Sep 16 04:26:28.439132 systemd[1]: audit-rules.service: Deactivated successfully. Sep 16 04:26:28.440954 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 16 04:26:28.444328 sudo[1802]: pam_unix(sudo:session): session closed for user root Sep 16 04:26:28.603435 sshd[1801]: Connection closed by 139.178.89.65 port 38726 Sep 16 04:26:28.604151 sshd-session[1798]: pam_unix(sshd:session): session closed for user core Sep 16 04:26:28.609010 systemd-logind[1522]: Session 6 logged out. Waiting for processes to exit. Sep 16 04:26:28.609080 systemd[1]: sshd@5-138.199.234.3:22-139.178.89.65:38726.service: Deactivated successfully. Sep 16 04:26:28.610948 systemd[1]: session-6.scope: Deactivated successfully. Sep 16 04:26:28.615078 systemd-logind[1522]: Removed session 6. Sep 16 04:26:28.777486 systemd[1]: Started sshd@6-138.199.234.3:22-139.178.89.65:38734.service - OpenSSH per-connection server daemon (139.178.89.65:38734). Sep 16 04:26:29.792511 sshd[1834]: Accepted publickey for core from 139.178.89.65 port 38734 ssh2: RSA SHA256:hnZQROmedaG+reQAaWvmG41QCRiTlF3QrQA4Qzar5jk Sep 16 04:26:29.795043 sshd-session[1834]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:26:29.802927 systemd-logind[1522]: New session 7 of user core. Sep 16 04:26:29.823498 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 16 04:26:30.316056 sudo[1838]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 16 04:26:30.316331 sudo[1838]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 16 04:26:30.637184 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Sep 16 04:26:30.639432 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 16 04:26:30.640956 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 16 04:26:30.654861 (dockerd)[1857]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 16 04:26:30.819864 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 16 04:26:30.830378 (kubelet)[1871]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 16 04:26:30.873742 kubelet[1871]: E0916 04:26:30.873665 1871 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 16 04:26:30.876900 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 16 04:26:30.877030 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 16 04:26:30.877299 systemd[1]: kubelet.service: Consumed 163ms CPU time, 105.2M memory peak. Sep 16 04:26:30.897638 dockerd[1857]: time="2025-09-16T04:26:30.897162306Z" level=info msg="Starting up" Sep 16 04:26:30.898430 dockerd[1857]: time="2025-09-16T04:26:30.898377604Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Sep 16 04:26:30.911151 dockerd[1857]: time="2025-09-16T04:26:30.911094445Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Sep 16 04:26:30.949074 dockerd[1857]: time="2025-09-16T04:26:30.949009959Z" level=info msg="Loading containers: start." Sep 16 04:26:30.958825 kernel: Initializing XFRM netlink socket Sep 16 04:26:31.211078 systemd-networkd[1409]: docker0: Link UP Sep 16 04:26:31.217221 dockerd[1857]: time="2025-09-16T04:26:31.217056772Z" level=info msg="Loading containers: done." Sep 16 04:26:31.232439 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3771141341-merged.mount: Deactivated successfully. Sep 16 04:26:31.236571 dockerd[1857]: time="2025-09-16T04:26:31.236492893Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 16 04:26:31.236767 dockerd[1857]: time="2025-09-16T04:26:31.236641299Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Sep 16 04:26:31.236961 dockerd[1857]: time="2025-09-16T04:26:31.236913672Z" level=info msg="Initializing buildkit" Sep 16 04:26:31.270199 dockerd[1857]: time="2025-09-16T04:26:31.270071294Z" level=info msg="Completed buildkit initialization" Sep 16 04:26:31.279296 dockerd[1857]: time="2025-09-16T04:26:31.279223789Z" level=info msg="Daemon has completed initialization" Sep 16 04:26:31.279862 dockerd[1857]: time="2025-09-16T04:26:31.279554684Z" level=info msg="API listen on /run/docker.sock" Sep 16 04:26:31.280179 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 16 04:26:32.325563 containerd[1543]: time="2025-09-16T04:26:32.325329002Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\"" Sep 16 04:26:32.886356 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount417548181.mount: Deactivated successfully. 
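While the kubelet is still crash-looping, dockerd itself comes up cleanly: containerd client created, buildkit initialized, API listening on /run/docker.sock. A small sketch of verifying that from the outside with the Docker Go SDK (github.com/docker/docker/client is an assumption about available tooling, not something shown in the log):

```go
// Sketch: ping the Docker daemon whose startup is logged above.
// Assumes the Docker Go SDK; by default it talks to unix:///var/run/docker.sock.
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/docker/docker/client"
)

func main() {
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	ping, err := cli.Ping(context.Background())
	if err != nil {
		log.Fatal(err) // daemon not reachable
	}
	fmt.Println("docker daemon reachable, API version:", ping.APIVersion)
}
```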
Sep 16 04:26:34.122382 containerd[1543]: time="2025-09-16T04:26:34.122301251Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:26:34.123766 containerd[1543]: time="2025-09-16T04:26:34.123479458Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.9: active requests=0, bytes read=26363783" Sep 16 04:26:34.124681 containerd[1543]: time="2025-09-16T04:26:34.124649225Z" level=info msg="ImageCreate event name:\"sha256:02ea53851f07db91ed471dab1ab11541f5c294802371cd8f0cfd423cd5c71002\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:26:34.128527 containerd[1543]: time="2025-09-16T04:26:34.128479098Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:26:34.129876 containerd[1543]: time="2025-09-16T04:26:34.129656225Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.9\" with image id \"sha256:02ea53851f07db91ed471dab1ab11541f5c294802371cd8f0cfd423cd5c71002\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\", size \"26360284\" in 1.804253381s" Sep 16 04:26:34.129876 containerd[1543]: time="2025-09-16T04:26:34.129701107Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:02ea53851f07db91ed471dab1ab11541f5c294802371cd8f0cfd423cd5c71002\"" Sep 16 04:26:34.130425 containerd[1543]: time="2025-09-16T04:26:34.130390375Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\"" Sep 16 04:26:35.460794 containerd[1543]: time="2025-09-16T04:26:35.459842094Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:26:35.462169 containerd[1543]: time="2025-09-16T04:26:35.462132062Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.9: active requests=0, bytes read=22531220" Sep 16 04:26:35.463059 containerd[1543]: time="2025-09-16T04:26:35.463030297Z" level=info msg="ImageCreate event name:\"sha256:f0bcbad5082c944520b370596a2384affda710b9d7daf84e8a48352699af8e4b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:26:35.467053 containerd[1543]: time="2025-09-16T04:26:35.467008010Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:26:35.473731 containerd[1543]: time="2025-09-16T04:26:35.473684388Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.9\" with image id \"sha256:f0bcbad5082c944520b370596a2384affda710b9d7daf84e8a48352699af8e4b\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\", size \"24099975\" in 1.343028202s" Sep 16 04:26:35.473905 containerd[1543]: time="2025-09-16T04:26:35.473888635Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:f0bcbad5082c944520b370596a2384affda710b9d7daf84e8a48352699af8e4b\"" Sep 16 04:26:35.475946 
containerd[1543]: time="2025-09-16T04:26:35.475493977Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\"" Sep 16 04:26:36.668424 containerd[1543]: time="2025-09-16T04:26:36.668324895Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:26:36.670790 containerd[1543]: time="2025-09-16T04:26:36.670651582Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.9: active requests=0, bytes read=17484344" Sep 16 04:26:36.672233 containerd[1543]: time="2025-09-16T04:26:36.672172918Z" level=info msg="ImageCreate event name:\"sha256:1d625baf81b59592006d97a6741bc947698ed222b612ac10efa57b7aa96d2a27\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:26:36.676044 containerd[1543]: time="2025-09-16T04:26:36.675336515Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:26:36.676940 containerd[1543]: time="2025-09-16T04:26:36.676895613Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.9\" with image id \"sha256:1d625baf81b59592006d97a6741bc947698ed222b612ac10efa57b7aa96d2a27\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\", size \"19053117\" in 1.200814974s" Sep 16 04:26:36.676940 containerd[1543]: time="2025-09-16T04:26:36.676939055Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:1d625baf81b59592006d97a6741bc947698ed222b612ac10efa57b7aa96d2a27\"" Sep 16 04:26:36.677482 containerd[1543]: time="2025-09-16T04:26:36.677432433Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\"" Sep 16 04:26:37.688352 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1653706906.mount: Deactivated successfully. 
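The PullImage/ImageCreate pairs above are containerd resolving, downloading and unpacking the control-plane images into the CRI-managed "k8s.io" namespace. A rough equivalent using the containerd Go client; the import path and socket are common defaults and are assumptions, not values taken from this log:

```go
// Sketch (not from the log): pull one of the images listed above with the
// containerd Go client, in the same "k8s.io" namespace the CRI uses.
package main

import (
	"context"
	"fmt"
	"log"

	containerd "github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Default containerd socket; the CRI endpoint itself is not printed in this log.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	img, err := client.Pull(ctx, "registry.k8s.io/kube-apiserver:v1.32.9", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("pulled", img.Name())
}
```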
Sep 16 04:26:38.023998 containerd[1543]: time="2025-09-16T04:26:38.023935046Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:26:38.024956 containerd[1543]: time="2025-09-16T04:26:38.024915639Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.9: active requests=0, bytes read=27417843" Sep 16 04:26:38.026319 containerd[1543]: time="2025-09-16T04:26:38.026258086Z" level=info msg="ImageCreate event name:\"sha256:72b57ec14d31e8422925ef4c3eff44822cdc04a11fd30d13824f1897d83a16d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:26:38.028596 containerd[1543]: time="2025-09-16T04:26:38.028541564Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:26:38.029433 containerd[1543]: time="2025-09-16T04:26:38.029396434Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.9\" with image id \"sha256:72b57ec14d31e8422925ef4c3eff44822cdc04a11fd30d13824f1897d83a16d4\", repo tag \"registry.k8s.io/kube-proxy:v1.32.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\", size \"27416836\" in 1.350816599s" Sep 16 04:26:38.029594 containerd[1543]: time="2025-09-16T04:26:38.029572240Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:72b57ec14d31e8422925ef4c3eff44822cdc04a11fd30d13824f1897d83a16d4\"" Sep 16 04:26:38.030332 containerd[1543]: time="2025-09-16T04:26:38.030289065Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 16 04:26:38.581943 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1554502850.mount: Deactivated successfully. 
Sep 16 04:26:39.235022 containerd[1543]: time="2025-09-16T04:26:39.234942826Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:26:39.237339 containerd[1543]: time="2025-09-16T04:26:39.236715085Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951714" Sep 16 04:26:39.238336 containerd[1543]: time="2025-09-16T04:26:39.238288937Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:26:39.241980 containerd[1543]: time="2025-09-16T04:26:39.241932858Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:26:39.243112 containerd[1543]: time="2025-09-16T04:26:39.243074616Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.21273335s" Sep 16 04:26:39.243112 containerd[1543]: time="2025-09-16T04:26:39.243110058Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Sep 16 04:26:39.244407 containerd[1543]: time="2025-09-16T04:26:39.244374700Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 16 04:26:39.690614 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3388584787.mount: Deactivated successfully. 
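Each completed pull pairs a "bytes read" figure with a wall-clock duration, so effective throughput falls out directly; with the coredns numbers just above (16951714 bytes in about 1.21 s) that works out to roughly 13 MiB/s:

```go
// Back-of-the-envelope throughput from the coredns pull logged above.
package main

import "fmt"

func main() {
	bytesRead := 16951714.0 // "bytes read" for registry.k8s.io/coredns/coredns:v1.11.3
	seconds := 1.21273335   // pull duration reported by containerd
	fmt.Printf("%.1f MiB/s\n", bytesRead/seconds/(1<<20)) // prints 13.3 MiB/s
}
```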
Sep 16 04:26:39.697742 containerd[1543]: time="2025-09-16T04:26:39.697648343Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 16 04:26:39.700815 containerd[1543]: time="2025-09-16T04:26:39.700745326Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268723" Sep 16 04:26:39.701915 containerd[1543]: time="2025-09-16T04:26:39.701851923Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 16 04:26:39.705346 containerd[1543]: time="2025-09-16T04:26:39.705280517Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 16 04:26:39.707259 containerd[1543]: time="2025-09-16T04:26:39.707200301Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 462.782479ms" Sep 16 04:26:39.707405 containerd[1543]: time="2025-09-16T04:26:39.707263823Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Sep 16 04:26:39.707908 containerd[1543]: time="2025-09-16T04:26:39.707875883Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Sep 16 04:26:40.292345 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1899916707.mount: Deactivated successfully. Sep 16 04:26:40.993180 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Sep 16 04:26:40.996519 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 16 04:26:41.154233 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 16 04:26:41.166715 (kubelet)[2270]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 16 04:26:41.217096 kubelet[2270]: E0916 04:26:41.217053 2270 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 16 04:26:41.219748 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 16 04:26:41.219902 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 16 04:26:41.220435 systemd[1]: kubelet.service: Consumed 175ms CPU time, 106.7M memory peak. 
Sep 16 04:26:42.350783 containerd[1543]: time="2025-09-16T04:26:42.350630899Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:26:42.353159 containerd[1543]: time="2025-09-16T04:26:42.353081573Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67943239" Sep 16 04:26:42.353910 containerd[1543]: time="2025-09-16T04:26:42.353870557Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:26:42.361131 containerd[1543]: time="2025-09-16T04:26:42.361051773Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:26:42.362368 containerd[1543]: time="2025-09-16T04:26:42.362169047Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 2.654249362s" Sep 16 04:26:42.362368 containerd[1543]: time="2025-09-16T04:26:42.362217768Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Sep 16 04:26:47.558781 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 16 04:26:47.559014 systemd[1]: kubelet.service: Consumed 175ms CPU time, 106.7M memory peak. Sep 16 04:26:47.563351 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 16 04:26:47.602229 systemd[1]: Reload requested from client PID 2309 ('systemctl') (unit session-7.scope)... Sep 16 04:26:47.602391 systemd[1]: Reloading... Sep 16 04:26:47.723802 zram_generator::config[2353]: No configuration found. Sep 16 04:26:47.925879 systemd[1]: Reloading finished in 322 ms. Sep 16 04:26:47.997865 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 16 04:26:47.998247 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 16 04:26:47.998938 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 16 04:26:47.999216 systemd[1]: kubelet.service: Consumed 109ms CPU time, 95M memory peak. Sep 16 04:26:48.002261 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 16 04:26:48.162914 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 16 04:26:48.173311 (kubelet)[2401]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 16 04:26:48.218070 kubelet[2401]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 16 04:26:48.218070 kubelet[2401]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 16 04:26:48.218070 kubelet[2401]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 16 04:26:48.218070 kubelet[2401]: I0916 04:26:48.217822 2401 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 16 04:26:49.083684 kubelet[2401]: I0916 04:26:49.083633 2401 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 16 04:26:49.083894 kubelet[2401]: I0916 04:26:49.083882 2401 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 16 04:26:49.084308 kubelet[2401]: I0916 04:26:49.084288 2401 server.go:954] "Client rotation is on, will bootstrap in background" Sep 16 04:26:49.122732 kubelet[2401]: E0916 04:26:49.122318 2401 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://138.199.234.3:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 138.199.234.3:6443: connect: connection refused" logger="UnhandledError" Sep 16 04:26:49.127363 kubelet[2401]: I0916 04:26:49.127317 2401 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 16 04:26:49.136042 kubelet[2401]: I0916 04:26:49.135925 2401 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 16 04:26:49.140192 kubelet[2401]: I0916 04:26:49.140151 2401 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 16 04:26:49.141143 kubelet[2401]: I0916 04:26:49.141061 2401 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 16 04:26:49.141440 kubelet[2401]: I0916 04:26:49.141119 2401 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459-0-0-n-0223e12d7a","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 16 04:26:49.141440 kubelet[2401]: 
I0916 04:26:49.141398 2401 topology_manager.go:138] "Creating topology manager with none policy" Sep 16 04:26:49.141440 kubelet[2401]: I0916 04:26:49.141408 2401 container_manager_linux.go:304] "Creating device plugin manager" Sep 16 04:26:49.141728 kubelet[2401]: I0916 04:26:49.141605 2401 state_mem.go:36] "Initialized new in-memory state store" Sep 16 04:26:49.145307 kubelet[2401]: I0916 04:26:49.145259 2401 kubelet.go:446] "Attempting to sync node with API server" Sep 16 04:26:49.145413 kubelet[2401]: I0916 04:26:49.145386 2401 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 16 04:26:49.145448 kubelet[2401]: I0916 04:26:49.145418 2401 kubelet.go:352] "Adding apiserver pod source" Sep 16 04:26:49.145448 kubelet[2401]: I0916 04:26:49.145431 2401 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 16 04:26:49.151808 kubelet[2401]: W0916 04:26:49.150923 2401 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://138.199.234.3:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459-0-0-n-0223e12d7a&limit=500&resourceVersion=0": dial tcp 138.199.234.3:6443: connect: connection refused Sep 16 04:26:49.151808 kubelet[2401]: E0916 04:26:49.151026 2401 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://138.199.234.3:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459-0-0-n-0223e12d7a&limit=500&resourceVersion=0\": dial tcp 138.199.234.3:6443: connect: connection refused" logger="UnhandledError" Sep 16 04:26:49.151808 kubelet[2401]: I0916 04:26:49.151176 2401 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 16 04:26:49.152404 kubelet[2401]: I0916 04:26:49.152376 2401 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 16 04:26:49.152612 kubelet[2401]: W0916 04:26:49.152595 2401 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Sep 16 04:26:49.155870 kubelet[2401]: I0916 04:26:49.155836 2401 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 16 04:26:49.156038 kubelet[2401]: I0916 04:26:49.156028 2401 server.go:1287] "Started kubelet" Sep 16 04:26:49.159912 kubelet[2401]: E0916 04:26:49.159575 2401 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://138.199.234.3:6443/api/v1/namespaces/default/events\": dial tcp 138.199.234.3:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4459-0-0-n-0223e12d7a.1865a8c89e038368 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459-0-0-n-0223e12d7a,UID:ci-4459-0-0-n-0223e12d7a,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459-0-0-n-0223e12d7a,},FirstTimestamp:2025-09-16 04:26:49.156002664 +0000 UTC m=+0.976919699,LastTimestamp:2025-09-16 04:26:49.156002664 +0000 UTC m=+0.976919699,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459-0-0-n-0223e12d7a,}" Sep 16 04:26:49.160093 kubelet[2401]: W0916 04:26:49.159991 2401 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://138.199.234.3:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 138.199.234.3:6443: connect: connection refused Sep 16 04:26:49.160093 kubelet[2401]: E0916 04:26:49.160039 2401 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://138.199.234.3:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 138.199.234.3:6443: connect: connection refused" logger="UnhandledError" Sep 16 04:26:49.162484 kubelet[2401]: I0916 04:26:49.162451 2401 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 16 04:26:49.163915 kubelet[2401]: I0916 04:26:49.163831 2401 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 16 04:26:49.164199 kubelet[2401]: I0916 04:26:49.164170 2401 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 16 04:26:49.164862 kubelet[2401]: I0916 04:26:49.164767 2401 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 16 04:26:49.166608 kubelet[2401]: I0916 04:26:49.162901 2401 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 16 04:26:49.167836 kubelet[2401]: I0916 04:26:49.167684 2401 server.go:479] "Adding debug handlers to kubelet server" Sep 16 04:26:49.168150 kubelet[2401]: I0916 04:26:49.168133 2401 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 16 04:26:49.170141 kubelet[2401]: E0916 04:26:49.169094 2401 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459-0-0-n-0223e12d7a\" not found" Sep 16 04:26:49.170141 kubelet[2401]: I0916 04:26:49.169966 2401 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 16 04:26:49.170141 kubelet[2401]: I0916 04:26:49.170045 2401 reconciler.go:26] "Reconciler: start to sync state" Sep 16 04:26:49.172390 kubelet[2401]: W0916 04:26:49.172343 2401 reflector.go:569] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://138.199.234.3:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 138.199.234.3:6443: connect: connection refused Sep 16 04:26:49.172573 kubelet[2401]: E0916 04:26:49.172553 2401 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://138.199.234.3:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 138.199.234.3:6443: connect: connection refused" logger="UnhandledError" Sep 16 04:26:49.173099 kubelet[2401]: I0916 04:26:49.173071 2401 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 16 04:26:49.175345 kubelet[2401]: I0916 04:26:49.175313 2401 factory.go:221] Registration of the containerd container factory successfully Sep 16 04:26:49.175465 kubelet[2401]: I0916 04:26:49.175456 2401 factory.go:221] Registration of the systemd container factory successfully Sep 16 04:26:49.175805 kubelet[2401]: E0916 04:26:49.175782 2401 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 16 04:26:49.175967 kubelet[2401]: E0916 04:26:49.175931 2401 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://138.199.234.3:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-0-0-n-0223e12d7a?timeout=10s\": dial tcp 138.199.234.3:6443: connect: connection refused" interval="200ms" Sep 16 04:26:49.195955 kubelet[2401]: I0916 04:26:49.195909 2401 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 16 04:26:49.197531 kubelet[2401]: I0916 04:26:49.197493 2401 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 16 04:26:49.197714 kubelet[2401]: I0916 04:26:49.197702 2401 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 16 04:26:49.197829 kubelet[2401]: I0916 04:26:49.197811 2401 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
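All of the reflector, event and lease errors in this stretch reduce to a single condition: nothing is listening on 138.199.234.3:6443 yet, because the kube-apiserver static pod has not been started. A trivial probe against that address (taken from the log) reports the same "connection refused":

```go
// Probe the advertised apiserver endpoint from the log; before the static pod
// starts this fails with the same "connect: connection refused".
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "138.199.234.3:6443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver not reachable yet:", err)
		return
	}
	defer conn.Close()
	fmt.Println("apiserver port is open")
}
```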
Sep 16 04:26:49.197892 kubelet[2401]: I0916 04:26:49.197882 2401 kubelet.go:2382] "Starting kubelet main sync loop" Sep 16 04:26:49.197995 kubelet[2401]: E0916 04:26:49.197974 2401 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 16 04:26:49.206461 kubelet[2401]: W0916 04:26:49.206413 2401 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://138.199.234.3:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 138.199.234.3:6443: connect: connection refused Sep 16 04:26:49.206461 kubelet[2401]: E0916 04:26:49.206461 2401 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://138.199.234.3:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 138.199.234.3:6443: connect: connection refused" logger="UnhandledError" Sep 16 04:26:49.208724 kubelet[2401]: I0916 04:26:49.208677 2401 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 16 04:26:49.208724 kubelet[2401]: I0916 04:26:49.208723 2401 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 16 04:26:49.208954 kubelet[2401]: I0916 04:26:49.208742 2401 state_mem.go:36] "Initialized new in-memory state store" Sep 16 04:26:49.210437 kubelet[2401]: I0916 04:26:49.210416 2401 policy_none.go:49] "None policy: Start" Sep 16 04:26:49.210437 kubelet[2401]: I0916 04:26:49.210438 2401 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 16 04:26:49.210525 kubelet[2401]: I0916 04:26:49.210453 2401 state_mem.go:35] "Initializing new in-memory state store" Sep 16 04:26:49.216254 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 16 04:26:49.228736 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 16 04:26:49.232569 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 16 04:26:49.245920 kubelet[2401]: I0916 04:26:49.245877 2401 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 16 04:26:49.246336 kubelet[2401]: I0916 04:26:49.246250 2401 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 16 04:26:49.246336 kubelet[2401]: I0916 04:26:49.246272 2401 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 16 04:26:49.248904 kubelet[2401]: I0916 04:26:49.248566 2401 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 16 04:26:49.251007 kubelet[2401]: E0916 04:26:49.250904 2401 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 16 04:26:49.251117 kubelet[2401]: E0916 04:26:49.251054 2401 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4459-0-0-n-0223e12d7a\" not found" Sep 16 04:26:49.313032 systemd[1]: Created slice kubepods-burstable-podf5ed49a9e0a52c7da4f704e90d0d6872.slice - libcontainer container kubepods-burstable-podf5ed49a9e0a52c7da4f704e90d0d6872.slice. 
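With cgroupDriver=systemd, each pod lands in a slice such as kubepods-burstable-podf5ed49a9e0a52c7da4f704e90d0d6872.slice, as created above for the first static pod. A hypothetical helper that mirrors that naming; the QoS prefix layout and the mapping of dashes in a UID to underscores are assumptions about the convention, not kubelet code:

```go
// Hypothetical illustration of the systemd slice names visible in the log.
package main

import (
	"fmt"
	"strings"
)

// podSlice builds a slice name from a QoS class and pod UID, assuming the
// systemd cgroup driver's dash-to-underscore escaping for dashed UIDs.
func podSlice(qosClass, podUID string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, strings.ReplaceAll(podUID, "-", "_"))
}

func main() {
	// Static-pod UID taken from the log above (no dashes, so it passes through unchanged).
	fmt.Println(podSlice("burstable", "f5ed49a9e0a52c7da4f704e90d0d6872"))
	// Output: kubepods-burstable-podf5ed49a9e0a52c7da4f704e90d0d6872.slice
}
```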
Sep 16 04:26:49.334198 kubelet[2401]: E0916 04:26:49.333944 2401 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-0-0-n-0223e12d7a\" not found" node="ci-4459-0-0-n-0223e12d7a" Sep 16 04:26:49.340168 systemd[1]: Created slice kubepods-burstable-pod0696445330936c2820867f3af7940c43.slice - libcontainer container kubepods-burstable-pod0696445330936c2820867f3af7940c43.slice. Sep 16 04:26:49.350934 kubelet[2401]: E0916 04:26:49.350802 2401 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-0-0-n-0223e12d7a\" not found" node="ci-4459-0-0-n-0223e12d7a" Sep 16 04:26:49.352176 kubelet[2401]: I0916 04:26:49.351489 2401 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-0-0-n-0223e12d7a" Sep 16 04:26:49.352176 kubelet[2401]: E0916 04:26:49.351983 2401 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://138.199.234.3:6443/api/v1/nodes\": dial tcp 138.199.234.3:6443: connect: connection refused" node="ci-4459-0-0-n-0223e12d7a" Sep 16 04:26:49.354572 systemd[1]: Created slice kubepods-burstable-pod02f3c062cb93cda0b4e90a194d6912ad.slice - libcontainer container kubepods-burstable-pod02f3c062cb93cda0b4e90a194d6912ad.slice. Sep 16 04:26:49.357376 kubelet[2401]: E0916 04:26:49.357314 2401 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-0-0-n-0223e12d7a\" not found" node="ci-4459-0-0-n-0223e12d7a" Sep 16 04:26:49.371097 kubelet[2401]: I0916 04:26:49.370966 2401 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/0696445330936c2820867f3af7940c43-flexvolume-dir\") pod \"kube-controller-manager-ci-4459-0-0-n-0223e12d7a\" (UID: \"0696445330936c2820867f3af7940c43\") " pod="kube-system/kube-controller-manager-ci-4459-0-0-n-0223e12d7a" Sep 16 04:26:49.371383 kubelet[2401]: I0916 04:26:49.371069 2401 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0696445330936c2820867f3af7940c43-k8s-certs\") pod \"kube-controller-manager-ci-4459-0-0-n-0223e12d7a\" (UID: \"0696445330936c2820867f3af7940c43\") " pod="kube-system/kube-controller-manager-ci-4459-0-0-n-0223e12d7a" Sep 16 04:26:49.371558 kubelet[2401]: I0916 04:26:49.371489 2401 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0696445330936c2820867f3af7940c43-kubeconfig\") pod \"kube-controller-manager-ci-4459-0-0-n-0223e12d7a\" (UID: \"0696445330936c2820867f3af7940c43\") " pod="kube-system/kube-controller-manager-ci-4459-0-0-n-0223e12d7a" Sep 16 04:26:49.371782 kubelet[2401]: I0916 04:26:49.371708 2401 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0696445330936c2820867f3af7940c43-ca-certs\") pod \"kube-controller-manager-ci-4459-0-0-n-0223e12d7a\" (UID: \"0696445330936c2820867f3af7940c43\") " pod="kube-system/kube-controller-manager-ci-4459-0-0-n-0223e12d7a" Sep 16 04:26:49.371949 kubelet[2401]: I0916 04:26:49.371922 2401 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f5ed49a9e0a52c7da4f704e90d0d6872-k8s-certs\") pod 
\"kube-apiserver-ci-4459-0-0-n-0223e12d7a\" (UID: \"f5ed49a9e0a52c7da4f704e90d0d6872\") " pod="kube-system/kube-apiserver-ci-4459-0-0-n-0223e12d7a" Sep 16 04:26:49.372176 kubelet[2401]: I0916 04:26:49.372147 2401 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f5ed49a9e0a52c7da4f704e90d0d6872-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459-0-0-n-0223e12d7a\" (UID: \"f5ed49a9e0a52c7da4f704e90d0d6872\") " pod="kube-system/kube-apiserver-ci-4459-0-0-n-0223e12d7a" Sep 16 04:26:49.372351 kubelet[2401]: I0916 04:26:49.372311 2401 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0696445330936c2820867f3af7940c43-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459-0-0-n-0223e12d7a\" (UID: \"0696445330936c2820867f3af7940c43\") " pod="kube-system/kube-controller-manager-ci-4459-0-0-n-0223e12d7a" Sep 16 04:26:49.372552 kubelet[2401]: I0916 04:26:49.372494 2401 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/02f3c062cb93cda0b4e90a194d6912ad-kubeconfig\") pod \"kube-scheduler-ci-4459-0-0-n-0223e12d7a\" (UID: \"02f3c062cb93cda0b4e90a194d6912ad\") " pod="kube-system/kube-scheduler-ci-4459-0-0-n-0223e12d7a" Sep 16 04:26:49.372859 kubelet[2401]: I0916 04:26:49.372706 2401 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f5ed49a9e0a52c7da4f704e90d0d6872-ca-certs\") pod \"kube-apiserver-ci-4459-0-0-n-0223e12d7a\" (UID: \"f5ed49a9e0a52c7da4f704e90d0d6872\") " pod="kube-system/kube-apiserver-ci-4459-0-0-n-0223e12d7a" Sep 16 04:26:49.376654 kubelet[2401]: E0916 04:26:49.376598 2401 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://138.199.234.3:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-0-0-n-0223e12d7a?timeout=10s\": dial tcp 138.199.234.3:6443: connect: connection refused" interval="400ms" Sep 16 04:26:49.554823 kubelet[2401]: I0916 04:26:49.554722 2401 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-0-0-n-0223e12d7a" Sep 16 04:26:49.555730 kubelet[2401]: E0916 04:26:49.555655 2401 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://138.199.234.3:6443/api/v1/nodes\": dial tcp 138.199.234.3:6443: connect: connection refused" node="ci-4459-0-0-n-0223e12d7a" Sep 16 04:26:49.636619 containerd[1543]: time="2025-09-16T04:26:49.635867884Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459-0-0-n-0223e12d7a,Uid:f5ed49a9e0a52c7da4f704e90d0d6872,Namespace:kube-system,Attempt:0,}" Sep 16 04:26:49.652774 containerd[1543]: time="2025-09-16T04:26:49.652408855Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459-0-0-n-0223e12d7a,Uid:0696445330936c2820867f3af7940c43,Namespace:kube-system,Attempt:0,}" Sep 16 04:26:49.665150 containerd[1543]: time="2025-09-16T04:26:49.665112490Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459-0-0-n-0223e12d7a,Uid:02f3c062cb93cda0b4e90a194d6912ad,Namespace:kube-system,Attempt:0,}" Sep 16 04:26:49.666183 containerd[1543]: time="2025-09-16T04:26:49.666069313Z" level=info msg="connecting to shim 
4c30d8a6e1fbf9d49109d8c5ce2bfaf5b96b42278851be860ed7b5d3037b7996" address="unix:///run/containerd/s/ae7bbf45039f15964b2f478f7a66b11eb4ed87656389ad423205716ca9ed4eab" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:26:49.690199 systemd[1]: Started cri-containerd-4c30d8a6e1fbf9d49109d8c5ce2bfaf5b96b42278851be860ed7b5d3037b7996.scope - libcontainer container 4c30d8a6e1fbf9d49109d8c5ce2bfaf5b96b42278851be860ed7b5d3037b7996. Sep 16 04:26:49.691009 containerd[1543]: time="2025-09-16T04:26:49.689604777Z" level=info msg="connecting to shim 02051610f4ac87a2e2efbd4b2636d0cbb0b8e16079321eb3ce0d0fff8eb3dade" address="unix:///run/containerd/s/147718f68acdbea20fdc0cbb02a59993fc0afee35572d88f02dd2d8c4f17fe65" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:26:49.719166 containerd[1543]: time="2025-09-16T04:26:49.719108709Z" level=info msg="connecting to shim b51acdd0dff02191d732dd9b6f680847f9ffa75898555e8085688b10b86676eb" address="unix:///run/containerd/s/83b12eda4eeb5c0dfc6a980fb8982090edb9b3d0e2c37b79f3eeebfce4ff008a" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:26:49.731997 systemd[1]: Started cri-containerd-02051610f4ac87a2e2efbd4b2636d0cbb0b8e16079321eb3ce0d0fff8eb3dade.scope - libcontainer container 02051610f4ac87a2e2efbd4b2636d0cbb0b8e16079321eb3ce0d0fff8eb3dade. Sep 16 04:26:49.758987 systemd[1]: Started cri-containerd-b51acdd0dff02191d732dd9b6f680847f9ffa75898555e8085688b10b86676eb.scope - libcontainer container b51acdd0dff02191d732dd9b6f680847f9ffa75898555e8085688b10b86676eb. Sep 16 04:26:49.766570 containerd[1543]: time="2025-09-16T04:26:49.766475843Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459-0-0-n-0223e12d7a,Uid:f5ed49a9e0a52c7da4f704e90d0d6872,Namespace:kube-system,Attempt:0,} returns sandbox id \"4c30d8a6e1fbf9d49109d8c5ce2bfaf5b96b42278851be860ed7b5d3037b7996\"" Sep 16 04:26:49.773302 containerd[1543]: time="2025-09-16T04:26:49.773255411Z" level=info msg="CreateContainer within sandbox \"4c30d8a6e1fbf9d49109d8c5ce2bfaf5b96b42278851be860ed7b5d3037b7996\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 16 04:26:49.778782 kubelet[2401]: E0916 04:26:49.778676 2401 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://138.199.234.3:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-0-0-n-0223e12d7a?timeout=10s\": dial tcp 138.199.234.3:6443: connect: connection refused" interval="800ms" Sep 16 04:26:49.790165 containerd[1543]: time="2025-09-16T04:26:49.790127510Z" level=info msg="Container fba93847edc8491e29f0677b0929edc49e5f71f24a489cd72e60a69daa01edab: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:26:49.792309 containerd[1543]: time="2025-09-16T04:26:49.792194201Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459-0-0-n-0223e12d7a,Uid:0696445330936c2820867f3af7940c43,Namespace:kube-system,Attempt:0,} returns sandbox id \"02051610f4ac87a2e2efbd4b2636d0cbb0b8e16079321eb3ce0d0fff8eb3dade\"" Sep 16 04:26:49.797325 containerd[1543]: time="2025-09-16T04:26:49.797289647Z" level=info msg="CreateContainer within sandbox \"02051610f4ac87a2e2efbd4b2636d0cbb0b8e16079321eb3ce0d0fff8eb3dade\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 16 04:26:49.800310 containerd[1543]: time="2025-09-16T04:26:49.800099797Z" level=info msg="CreateContainer within sandbox \"4c30d8a6e1fbf9d49109d8c5ce2bfaf5b96b42278851be860ed7b5d3037b7996\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container 
id \"fba93847edc8491e29f0677b0929edc49e5f71f24a489cd72e60a69daa01edab\"" Sep 16 04:26:49.801020 containerd[1543]: time="2025-09-16T04:26:49.800992859Z" level=info msg="StartContainer for \"fba93847edc8491e29f0677b0929edc49e5f71f24a489cd72e60a69daa01edab\"" Sep 16 04:26:49.802790 containerd[1543]: time="2025-09-16T04:26:49.802735782Z" level=info msg="connecting to shim fba93847edc8491e29f0677b0929edc49e5f71f24a489cd72e60a69daa01edab" address="unix:///run/containerd/s/ae7bbf45039f15964b2f478f7a66b11eb4ed87656389ad423205716ca9ed4eab" protocol=ttrpc version=3 Sep 16 04:26:49.812831 containerd[1543]: time="2025-09-16T04:26:49.812781232Z" level=info msg="Container ddd38f9c371834bb52febe0b3f6968785b5bf98d043071fe31d913f337aab433: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:26:49.832463 containerd[1543]: time="2025-09-16T04:26:49.832398758Z" level=info msg="CreateContainer within sandbox \"02051610f4ac87a2e2efbd4b2636d0cbb0b8e16079321eb3ce0d0fff8eb3dade\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"ddd38f9c371834bb52febe0b3f6968785b5bf98d043071fe31d913f337aab433\"" Sep 16 04:26:49.833696 containerd[1543]: time="2025-09-16T04:26:49.833377862Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459-0-0-n-0223e12d7a,Uid:02f3c062cb93cda0b4e90a194d6912ad,Namespace:kube-system,Attempt:0,} returns sandbox id \"b51acdd0dff02191d732dd9b6f680847f9ffa75898555e8085688b10b86676eb\"" Sep 16 04:26:49.834925 containerd[1543]: time="2025-09-16T04:26:49.834801298Z" level=info msg="StartContainer for \"ddd38f9c371834bb52febe0b3f6968785b5bf98d043071fe31d913f337aab433\"" Sep 16 04:26:49.836009 systemd[1]: Started cri-containerd-fba93847edc8491e29f0677b0929edc49e5f71f24a489cd72e60a69daa01edab.scope - libcontainer container fba93847edc8491e29f0677b0929edc49e5f71f24a489cd72e60a69daa01edab. Sep 16 04:26:49.838684 containerd[1543]: time="2025-09-16T04:26:49.838604112Z" level=info msg="connecting to shim ddd38f9c371834bb52febe0b3f6968785b5bf98d043071fe31d913f337aab433" address="unix:///run/containerd/s/147718f68acdbea20fdc0cbb02a59993fc0afee35572d88f02dd2d8c4f17fe65" protocol=ttrpc version=3 Sep 16 04:26:49.839265 containerd[1543]: time="2025-09-16T04:26:49.839229767Z" level=info msg="CreateContainer within sandbox \"b51acdd0dff02191d732dd9b6f680847f9ffa75898555e8085688b10b86676eb\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 16 04:26:49.853319 containerd[1543]: time="2025-09-16T04:26:49.853278476Z" level=info msg="Container f74fa7b180df69b6b60f88fb2f39ff028f534ca5febeb2d7cf3301d330702b55: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:26:49.870961 containerd[1543]: time="2025-09-16T04:26:49.870793430Z" level=info msg="CreateContainer within sandbox \"b51acdd0dff02191d732dd9b6f680847f9ffa75898555e8085688b10b86676eb\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"f74fa7b180df69b6b60f88fb2f39ff028f534ca5febeb2d7cf3301d330702b55\"" Sep 16 04:26:49.871774 containerd[1543]: time="2025-09-16T04:26:49.871490447Z" level=info msg="StartContainer for \"f74fa7b180df69b6b60f88fb2f39ff028f534ca5febeb2d7cf3301d330702b55\"" Sep 16 04:26:49.872027 systemd[1]: Started cri-containerd-ddd38f9c371834bb52febe0b3f6968785b5bf98d043071fe31d913f337aab433.scope - libcontainer container ddd38f9c371834bb52febe0b3f6968785b5bf98d043071fe31d913f337aab433. 
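The RunPodSandbox, CreateContainer and StartContainer calls above are the kubelet driving containerd over the CRI RuntimeService API. A sketch that talks to the same kind of endpoint and lists the resulting sandboxes; the socket path and the k8s.io/cri-api client are assumptions, since the log never prints the --container-runtime-endpoint value:

```go
// Sketch: list pod sandboxes over CRI, the same gRPC surface used by the
// RunPodSandbox/CreateContainer/StartContainer calls logged above.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Default containerd CRI socket; assumed, not shown in the log.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	resp, err := client.ListPodSandbox(ctx, &runtimeapi.ListPodSandboxRequest{})
	if err != nil {
		log.Fatal(err)
	}
	for _, sb := range resp.Items {
		fmt.Println(sb.Metadata.Namespace+"/"+sb.Metadata.Name, sb.Id)
	}
}
```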
Sep 16 04:26:49.875686 containerd[1543]: time="2025-09-16T04:26:49.875622790Z" level=info msg="connecting to shim f74fa7b180df69b6b60f88fb2f39ff028f534ca5febeb2d7cf3301d330702b55" address="unix:///run/containerd/s/83b12eda4eeb5c0dfc6a980fb8982090edb9b3d0e2c37b79f3eeebfce4ff008a" protocol=ttrpc version=3 Sep 16 04:26:49.908341 containerd[1543]: time="2025-09-16T04:26:49.908223198Z" level=info msg="StartContainer for \"fba93847edc8491e29f0677b0929edc49e5f71f24a489cd72e60a69daa01edab\" returns successfully" Sep 16 04:26:49.932027 systemd[1]: Started cri-containerd-f74fa7b180df69b6b60f88fb2f39ff028f534ca5febeb2d7cf3301d330702b55.scope - libcontainer container f74fa7b180df69b6b60f88fb2f39ff028f534ca5febeb2d7cf3301d330702b55. Sep 16 04:26:49.948786 containerd[1543]: time="2025-09-16T04:26:49.948503397Z" level=info msg="StartContainer for \"ddd38f9c371834bb52febe0b3f6968785b5bf98d043071fe31d913f337aab433\" returns successfully" Sep 16 04:26:49.958416 kubelet[2401]: I0916 04:26:49.958387 2401 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-0-0-n-0223e12d7a" Sep 16 04:26:49.959020 kubelet[2401]: E0916 04:26:49.958981 2401 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://138.199.234.3:6443/api/v1/nodes\": dial tcp 138.199.234.3:6443: connect: connection refused" node="ci-4459-0-0-n-0223e12d7a" Sep 16 04:26:49.995865 containerd[1543]: time="2025-09-16T04:26:49.995808250Z" level=info msg="StartContainer for \"f74fa7b180df69b6b60f88fb2f39ff028f534ca5febeb2d7cf3301d330702b55\" returns successfully" Sep 16 04:26:50.218717 kubelet[2401]: E0916 04:26:50.218402 2401 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-0-0-n-0223e12d7a\" not found" node="ci-4459-0-0-n-0223e12d7a" Sep 16 04:26:50.224934 kubelet[2401]: E0916 04:26:50.224897 2401 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-0-0-n-0223e12d7a\" not found" node="ci-4459-0-0-n-0223e12d7a" Sep 16 04:26:50.227360 kubelet[2401]: E0916 04:26:50.227327 2401 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-0-0-n-0223e12d7a\" not found" node="ci-4459-0-0-n-0223e12d7a" Sep 16 04:26:50.762496 kubelet[2401]: I0916 04:26:50.762454 2401 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-0-0-n-0223e12d7a" Sep 16 04:26:51.228363 kubelet[2401]: E0916 04:26:51.228261 2401 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-0-0-n-0223e12d7a\" not found" node="ci-4459-0-0-n-0223e12d7a" Sep 16 04:26:51.230052 kubelet[2401]: E0916 04:26:51.230023 2401 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-0-0-n-0223e12d7a\" not found" node="ci-4459-0-0-n-0223e12d7a" Sep 16 04:26:52.453636 kubelet[2401]: E0916 04:26:52.453587 2401 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4459-0-0-n-0223e12d7a\" not found" node="ci-4459-0-0-n-0223e12d7a" Sep 16 04:26:52.518536 kubelet[2401]: I0916 04:26:52.518226 2401 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459-0-0-n-0223e12d7a" Sep 16 04:26:52.518536 kubelet[2401]: E0916 04:26:52.518266 2401 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4459-0-0-n-0223e12d7a\": node 
\"ci-4459-0-0-n-0223e12d7a\" not found" Sep 16 04:26:52.570732 kubelet[2401]: I0916 04:26:52.570202 2401 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459-0-0-n-0223e12d7a" Sep 16 04:26:52.580151 kubelet[2401]: E0916 04:26:52.580073 2401 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459-0-0-n-0223e12d7a\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4459-0-0-n-0223e12d7a" Sep 16 04:26:52.580151 kubelet[2401]: I0916 04:26:52.580101 2401 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459-0-0-n-0223e12d7a" Sep 16 04:26:52.584114 kubelet[2401]: E0916 04:26:52.584043 2401 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459-0-0-n-0223e12d7a\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4459-0-0-n-0223e12d7a" Sep 16 04:26:52.584114 kubelet[2401]: I0916 04:26:52.584075 2401 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459-0-0-n-0223e12d7a" Sep 16 04:26:52.587092 kubelet[2401]: E0916 04:26:52.587061 2401 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459-0-0-n-0223e12d7a\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4459-0-0-n-0223e12d7a" Sep 16 04:26:52.807779 kubelet[2401]: I0916 04:26:52.805631 2401 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459-0-0-n-0223e12d7a" Sep 16 04:26:52.810451 kubelet[2401]: E0916 04:26:52.810251 2401 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459-0-0-n-0223e12d7a\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4459-0-0-n-0223e12d7a" Sep 16 04:26:52.963779 kubelet[2401]: I0916 04:26:52.963365 2401 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459-0-0-n-0223e12d7a" Sep 16 04:26:52.971728 kubelet[2401]: E0916 04:26:52.971453 2401 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459-0-0-n-0223e12d7a\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4459-0-0-n-0223e12d7a" Sep 16 04:26:53.160704 kubelet[2401]: I0916 04:26:53.160554 2401 apiserver.go:52] "Watching apiserver" Sep 16 04:26:53.170819 kubelet[2401]: I0916 04:26:53.170775 2401 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 16 04:26:54.627577 systemd[1]: Reload requested from client PID 2670 ('systemctl') (unit session-7.scope)... Sep 16 04:26:54.627598 systemd[1]: Reloading... Sep 16 04:26:54.719806 zram_generator::config[2713]: No configuration found. Sep 16 04:26:54.934677 systemd[1]: Reloading finished in 306 ms. Sep 16 04:26:54.971252 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 16 04:26:54.984520 systemd[1]: kubelet.service: Deactivated successfully. Sep 16 04:26:54.985860 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 16 04:26:54.985963 systemd[1]: kubelet.service: Consumed 1.430s CPU time, 127.6M memory peak. Sep 16 04:26:54.988699 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
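The repeated "no PriorityClass with name system-node-critical was found" failures when mirroring the static pods are transient: the built-in priority classes are normally created by the apiserver itself once it is serving, after which mirror-pod creation succeeds. A client-go sketch for checking that from the node; the kubeconfig path is an assumption (a typical kubelet credentials location), not something printed in this log:

```go
// Sketch: check for the system-node-critical PriorityClass whose absence causes
// the mirror-pod errors above. Kubeconfig path is assumed.
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	pc, err := cs.SchedulingV1().PriorityClasses().Get(context.Background(), "system-node-critical", metav1.GetOptions{})
	if err != nil {
		fmt.Println("not available yet:", err)
		return
	}
	fmt.Println("system-node-critical value:", pc.Value)
}
```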
Sep 16 04:26:55.156743 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 16 04:26:55.171421 (kubelet)[2759]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 16 04:26:55.236308 kubelet[2759]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 16 04:26:55.236308 kubelet[2759]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 16 04:26:55.236308 kubelet[2759]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 16 04:26:55.237247 kubelet[2759]: I0916 04:26:55.237180 2759 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 16 04:26:55.246351 kubelet[2759]: I0916 04:26:55.246312 2759 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 16 04:26:55.246551 kubelet[2759]: I0916 04:26:55.246536 2759 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 16 04:26:55.247084 kubelet[2759]: I0916 04:26:55.247061 2759 server.go:954] "Client rotation is on, will bootstrap in background" Sep 16 04:26:55.249075 kubelet[2759]: I0916 04:26:55.249052 2759 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 16 04:26:55.253313 kubelet[2759]: I0916 04:26:55.253268 2759 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 16 04:26:55.258879 kubelet[2759]: I0916 04:26:55.258854 2759 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 16 04:26:55.264109 kubelet[2759]: I0916 04:26:55.264030 2759 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 16 04:26:55.264909 kubelet[2759]: I0916 04:26:55.264861 2759 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 16 04:26:55.266770 kubelet[2759]: I0916 04:26:55.264898 2759 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459-0-0-n-0223e12d7a","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 16 04:26:55.266770 kubelet[2759]: I0916 04:26:55.265102 2759 topology_manager.go:138] "Creating topology manager with none policy" Sep 16 04:26:55.266770 kubelet[2759]: I0916 04:26:55.265111 2759 container_manager_linux.go:304] "Creating device plugin manager" Sep 16 04:26:55.266770 kubelet[2759]: I0916 04:26:55.265166 2759 state_mem.go:36] "Initialized new in-memory state store" Sep 16 04:26:55.266770 kubelet[2759]: I0916 04:26:55.265322 2759 kubelet.go:446] "Attempting to sync node with API server" Sep 16 04:26:55.266960 kubelet[2759]: I0916 04:26:55.265334 2759 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 16 04:26:55.266960 kubelet[2759]: I0916 04:26:55.265362 2759 kubelet.go:352] "Adding apiserver pod source" Sep 16 04:26:55.266960 kubelet[2759]: I0916 04:26:55.265375 2759 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 16 04:26:55.269674 kubelet[2759]: I0916 04:26:55.269636 2759 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 16 04:26:55.270554 kubelet[2759]: I0916 04:26:55.270519 2759 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 16 04:26:55.272400 kubelet[2759]: I0916 04:26:55.272355 2759 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 16 04:26:55.272400 kubelet[2759]: I0916 04:26:55.272400 2759 server.go:1287] "Started kubelet" Sep 16 04:26:55.273865 kubelet[2759]: I0916 04:26:55.272740 2759 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 16 04:26:55.275732 kubelet[2759]: I0916 
04:26:55.275702 2759 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 16 04:26:55.275953 kubelet[2759]: I0916 04:26:55.272892 2759 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 16 04:26:55.277513 kubelet[2759]: I0916 04:26:55.277487 2759 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 16 04:26:55.287508 kubelet[2759]: I0916 04:26:55.287456 2759 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 16 04:26:55.288541 kubelet[2759]: I0916 04:26:55.277986 2759 server.go:479] "Adding debug handlers to kubelet server" Sep 16 04:26:55.293580 kubelet[2759]: I0916 04:26:55.289912 2759 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 16 04:26:55.294611 kubelet[2759]: I0916 04:26:55.289923 2759 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 16 04:26:55.294810 kubelet[2759]: E0916 04:26:55.290079 2759 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459-0-0-n-0223e12d7a\" not found" Sep 16 04:26:55.296999 kubelet[2759]: I0916 04:26:55.296938 2759 reconciler.go:26] "Reconciler: start to sync state" Sep 16 04:26:55.310050 kubelet[2759]: I0916 04:26:55.309726 2759 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 16 04:26:55.311777 kubelet[2759]: I0916 04:26:55.311730 2759 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 16 04:26:55.312279 kubelet[2759]: I0916 04:26:55.311921 2759 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 16 04:26:55.312279 kubelet[2759]: I0916 04:26:55.311947 2759 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 16 04:26:55.312279 kubelet[2759]: I0916 04:26:55.311955 2759 kubelet.go:2382] "Starting kubelet main sync loop" Sep 16 04:26:55.312279 kubelet[2759]: E0916 04:26:55.311996 2759 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 16 04:26:55.315648 kubelet[2759]: E0916 04:26:55.315610 2759 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 16 04:26:55.318327 kubelet[2759]: I0916 04:26:55.317835 2759 factory.go:221] Registration of the containerd container factory successfully Sep 16 04:26:55.318327 kubelet[2759]: I0916 04:26:55.317863 2759 factory.go:221] Registration of the systemd container factory successfully Sep 16 04:26:55.318327 kubelet[2759]: I0916 04:26:55.317976 2759 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 16 04:26:55.376103 kubelet[2759]: I0916 04:26:55.376073 2759 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 16 04:26:55.376349 kubelet[2759]: I0916 04:26:55.376312 2759 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 16 04:26:55.376417 kubelet[2759]: I0916 04:26:55.376408 2759 state_mem.go:36] "Initialized new in-memory state store" Sep 16 04:26:55.376646 kubelet[2759]: I0916 04:26:55.376627 2759 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 16 04:26:55.376807 kubelet[2759]: I0916 04:26:55.376774 2759 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 16 04:26:55.376905 kubelet[2759]: I0916 04:26:55.376877 2759 policy_none.go:49] "None policy: Start" Sep 16 04:26:55.376960 kubelet[2759]: I0916 04:26:55.376951 2759 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 16 04:26:55.377008 kubelet[2759]: I0916 04:26:55.377000 2759 state_mem.go:35] "Initializing new in-memory state store" Sep 16 04:26:55.377175 kubelet[2759]: I0916 04:26:55.377162 2759 state_mem.go:75] "Updated machine memory state" Sep 16 04:26:55.382223 kubelet[2759]: I0916 04:26:55.382198 2759 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 16 04:26:55.382565 kubelet[2759]: I0916 04:26:55.382548 2759 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 16 04:26:55.382731 kubelet[2759]: I0916 04:26:55.382691 2759 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 16 04:26:55.383223 kubelet[2759]: I0916 04:26:55.383185 2759 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 16 04:26:55.386352 kubelet[2759]: E0916 04:26:55.386291 2759 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 16 04:26:55.413788 kubelet[2759]: I0916 04:26:55.412745 2759 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459-0-0-n-0223e12d7a" Sep 16 04:26:55.413788 kubelet[2759]: I0916 04:26:55.413345 2759 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459-0-0-n-0223e12d7a" Sep 16 04:26:55.414076 kubelet[2759]: I0916 04:26:55.414053 2759 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459-0-0-n-0223e12d7a" Sep 16 04:26:55.488257 kubelet[2759]: I0916 04:26:55.488167 2759 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-0-0-n-0223e12d7a" Sep 16 04:26:55.499343 kubelet[2759]: I0916 04:26:55.498767 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f5ed49a9e0a52c7da4f704e90d0d6872-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459-0-0-n-0223e12d7a\" (UID: \"f5ed49a9e0a52c7da4f704e90d0d6872\") " pod="kube-system/kube-apiserver-ci-4459-0-0-n-0223e12d7a" Sep 16 04:26:55.499812 kubelet[2759]: I0916 04:26:55.499743 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0696445330936c2820867f3af7940c43-ca-certs\") pod \"kube-controller-manager-ci-4459-0-0-n-0223e12d7a\" (UID: \"0696445330936c2820867f3af7940c43\") " pod="kube-system/kube-controller-manager-ci-4459-0-0-n-0223e12d7a" Sep 16 04:26:55.499992 kubelet[2759]: I0916 04:26:55.499899 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0696445330936c2820867f3af7940c43-kubeconfig\") pod \"kube-controller-manager-ci-4459-0-0-n-0223e12d7a\" (UID: \"0696445330936c2820867f3af7940c43\") " pod="kube-system/kube-controller-manager-ci-4459-0-0-n-0223e12d7a" Sep 16 04:26:55.500579 kubelet[2759]: I0916 04:26:55.500228 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0696445330936c2820867f3af7940c43-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459-0-0-n-0223e12d7a\" (UID: \"0696445330936c2820867f3af7940c43\") " pod="kube-system/kube-controller-manager-ci-4459-0-0-n-0223e12d7a" Sep 16 04:26:55.500579 kubelet[2759]: I0916 04:26:55.500415 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/02f3c062cb93cda0b4e90a194d6912ad-kubeconfig\") pod \"kube-scheduler-ci-4459-0-0-n-0223e12d7a\" (UID: \"02f3c062cb93cda0b4e90a194d6912ad\") " pod="kube-system/kube-scheduler-ci-4459-0-0-n-0223e12d7a" Sep 16 04:26:55.500852 kubelet[2759]: I0916 04:26:55.500602 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f5ed49a9e0a52c7da4f704e90d0d6872-k8s-certs\") pod \"kube-apiserver-ci-4459-0-0-n-0223e12d7a\" (UID: \"f5ed49a9e0a52c7da4f704e90d0d6872\") " pod="kube-system/kube-apiserver-ci-4459-0-0-n-0223e12d7a" Sep 16 04:26:55.500852 kubelet[2759]: I0916 04:26:55.500798 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/0696445330936c2820867f3af7940c43-flexvolume-dir\") pod \"kube-controller-manager-ci-4459-0-0-n-0223e12d7a\" (UID: \"0696445330936c2820867f3af7940c43\") " pod="kube-system/kube-controller-manager-ci-4459-0-0-n-0223e12d7a" Sep 16 04:26:55.501182 kubelet[2759]: I0916 04:26:55.501000 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0696445330936c2820867f3af7940c43-k8s-certs\") pod \"kube-controller-manager-ci-4459-0-0-n-0223e12d7a\" (UID: \"0696445330936c2820867f3af7940c43\") " pod="kube-system/kube-controller-manager-ci-4459-0-0-n-0223e12d7a" Sep 16 04:26:55.501291 kubelet[2759]: I0916 04:26:55.501208 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f5ed49a9e0a52c7da4f704e90d0d6872-ca-certs\") pod \"kube-apiserver-ci-4459-0-0-n-0223e12d7a\" (UID: \"f5ed49a9e0a52c7da4f704e90d0d6872\") " pod="kube-system/kube-apiserver-ci-4459-0-0-n-0223e12d7a" Sep 16 04:26:55.502426 kubelet[2759]: I0916 04:26:55.502273 2759 kubelet_node_status.go:124] "Node was previously registered" node="ci-4459-0-0-n-0223e12d7a" Sep 16 04:26:55.502546 kubelet[2759]: I0916 04:26:55.502512 2759 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459-0-0-n-0223e12d7a" Sep 16 04:26:55.627573 sudo[2791]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 16 04:26:55.628304 sudo[2791]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Sep 16 04:26:55.954129 sudo[2791]: pam_unix(sudo:session): session closed for user root Sep 16 04:26:56.266597 kubelet[2759]: I0916 04:26:56.266540 2759 apiserver.go:52] "Watching apiserver" Sep 16 04:26:56.294970 kubelet[2759]: I0916 04:26:56.294902 2759 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 16 04:26:56.400162 kubelet[2759]: I0916 04:26:56.399180 2759 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4459-0-0-n-0223e12d7a" podStartSLOduration=1.39916046 podStartE2EDuration="1.39916046s" podCreationTimestamp="2025-09-16 04:26:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 04:26:56.398609689 +0000 UTC m=+1.217727746" watchObservedRunningTime="2025-09-16 04:26:56.39916046 +0000 UTC m=+1.218278517" Sep 16 04:26:56.400564 kubelet[2759]: I0916 04:26:56.400458 2759 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4459-0-0-n-0223e12d7a" podStartSLOduration=1.400445968 podStartE2EDuration="1.400445968s" podCreationTimestamp="2025-09-16 04:26:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 04:26:56.382569665 +0000 UTC m=+1.201687722" watchObservedRunningTime="2025-09-16 04:26:56.400445968 +0000 UTC m=+1.219564065" Sep 16 04:26:56.427489 kubelet[2759]: I0916 04:26:56.426768 2759 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4459-0-0-n-0223e12d7a" podStartSLOduration=1.42673045 podStartE2EDuration="1.42673045s" podCreationTimestamp="2025-09-16 04:26:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 
UTC" observedRunningTime="2025-09-16 04:26:56.412776872 +0000 UTC m=+1.231894929" watchObservedRunningTime="2025-09-16 04:26:56.42673045 +0000 UTC m=+1.245848587" Sep 16 04:26:57.890316 sudo[1838]: pam_unix(sudo:session): session closed for user root Sep 16 04:26:58.050246 sshd[1837]: Connection closed by 139.178.89.65 port 38734 Sep 16 04:26:58.052024 sshd-session[1834]: pam_unix(sshd:session): session closed for user core Sep 16 04:26:58.056982 systemd[1]: sshd@6-138.199.234.3:22-139.178.89.65:38734.service: Deactivated successfully. Sep 16 04:26:58.060704 systemd[1]: session-7.scope: Deactivated successfully. Sep 16 04:26:58.061247 systemd[1]: session-7.scope: Consumed 7.180s CPU time, 261.4M memory peak. Sep 16 04:26:58.063007 systemd-logind[1522]: Session 7 logged out. Waiting for processes to exit. Sep 16 04:26:58.067079 systemd-logind[1522]: Removed session 7. Sep 16 04:27:01.572261 kubelet[2759]: I0916 04:27:01.572209 2759 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 16 04:27:01.573227 containerd[1543]: time="2025-09-16T04:27:01.573066188Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 16 04:27:01.573678 kubelet[2759]: I0916 04:27:01.573458 2759 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 16 04:27:02.439443 kubelet[2759]: I0916 04:27:02.438945 2759 status_manager.go:890] "Failed to get status for pod" podUID="7cf423e6-0cad-4034-adae-a457d1c5fdcf" pod="kube-system/kube-proxy-ptb9p" err="pods \"kube-proxy-ptb9p\" is forbidden: User \"system:node:ci-4459-0-0-n-0223e12d7a\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4459-0-0-n-0223e12d7a' and this object" Sep 16 04:27:02.441390 systemd[1]: Created slice kubepods-besteffort-pod7cf423e6_0cad_4034_adae_a457d1c5fdcf.slice - libcontainer container kubepods-besteffort-pod7cf423e6_0cad_4034_adae_a457d1c5fdcf.slice. 
Sep 16 04:27:02.452459 kubelet[2759]: W0916 04:27:02.451871 2759 reflector.go:569] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-4459-0-0-n-0223e12d7a" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4459-0-0-n-0223e12d7a' and this object Sep 16 04:27:02.452459 kubelet[2759]: E0916 04:27:02.451918 2759 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cilium-config\" is forbidden: User \"system:node:ci-4459-0-0-n-0223e12d7a\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4459-0-0-n-0223e12d7a' and this object" logger="UnhandledError" Sep 16 04:27:02.452459 kubelet[2759]: W0916 04:27:02.452366 2759 reflector.go:569] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-4459-0-0-n-0223e12d7a" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4459-0-0-n-0223e12d7a' and this object Sep 16 04:27:02.452459 kubelet[2759]: E0916 04:27:02.452400 2759 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-clustermesh\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:ci-4459-0-0-n-0223e12d7a\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4459-0-0-n-0223e12d7a' and this object" logger="UnhandledError" Sep 16 04:27:02.452638 kubelet[2759]: W0916 04:27:02.452478 2759 reflector.go:569] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-4459-0-0-n-0223e12d7a" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4459-0-0-n-0223e12d7a' and this object Sep 16 04:27:02.452638 kubelet[2759]: E0916 04:27:02.452492 2759 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"hubble-server-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"hubble-server-certs\" is forbidden: User \"system:node:ci-4459-0-0-n-0223e12d7a\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4459-0-0-n-0223e12d7a' and this object" logger="UnhandledError" Sep 16 04:27:02.452638 kubelet[2759]: I0916 04:27:02.452535 2759 status_manager.go:890] "Failed to get status for pod" podUID="22b20782-77b0-43d9-a5ec-de471c3bdf2a" pod="kube-system/cilium-ck9hw" err="pods \"cilium-ck9hw\" is forbidden: User \"system:node:ci-4459-0-0-n-0223e12d7a\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4459-0-0-n-0223e12d7a' and this object" Sep 16 04:27:02.459458 systemd[1]: Created slice kubepods-burstable-pod22b20782_77b0_43d9_a5ec_de471c3bdf2a.slice - libcontainer container kubepods-burstable-pod22b20782_77b0_43d9_a5ec_de471c3bdf2a.slice. 
Sep 16 04:27:02.551307 kubelet[2759]: I0916 04:27:02.551238 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7cf423e6-0cad-4034-adae-a457d1c5fdcf-kube-proxy\") pod \"kube-proxy-ptb9p\" (UID: \"7cf423e6-0cad-4034-adae-a457d1c5fdcf\") " pod="kube-system/kube-proxy-ptb9p" Sep 16 04:27:02.551519 kubelet[2759]: I0916 04:27:02.551348 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/22b20782-77b0-43d9-a5ec-de471c3bdf2a-cni-path\") pod \"cilium-ck9hw\" (UID: \"22b20782-77b0-43d9-a5ec-de471c3bdf2a\") " pod="kube-system/cilium-ck9hw" Sep 16 04:27:02.551519 kubelet[2759]: I0916 04:27:02.551370 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhxjr\" (UniqueName: \"kubernetes.io/projected/22b20782-77b0-43d9-a5ec-de471c3bdf2a-kube-api-access-hhxjr\") pod \"cilium-ck9hw\" (UID: \"22b20782-77b0-43d9-a5ec-de471c3bdf2a\") " pod="kube-system/cilium-ck9hw" Sep 16 04:27:02.551769 kubelet[2759]: I0916 04:27:02.551389 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7cf423e6-0cad-4034-adae-a457d1c5fdcf-xtables-lock\") pod \"kube-proxy-ptb9p\" (UID: \"7cf423e6-0cad-4034-adae-a457d1c5fdcf\") " pod="kube-system/kube-proxy-ptb9p" Sep 16 04:27:02.551769 kubelet[2759]: I0916 04:27:02.551646 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/22b20782-77b0-43d9-a5ec-de471c3bdf2a-lib-modules\") pod \"cilium-ck9hw\" (UID: \"22b20782-77b0-43d9-a5ec-de471c3bdf2a\") " pod="kube-system/cilium-ck9hw" Sep 16 04:27:02.551769 kubelet[2759]: I0916 04:27:02.551672 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/22b20782-77b0-43d9-a5ec-de471c3bdf2a-cilium-cgroup\") pod \"cilium-ck9hw\" (UID: \"22b20782-77b0-43d9-a5ec-de471c3bdf2a\") " pod="kube-system/cilium-ck9hw" Sep 16 04:27:02.551769 kubelet[2759]: I0916 04:27:02.551708 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/22b20782-77b0-43d9-a5ec-de471c3bdf2a-etc-cni-netd\") pod \"cilium-ck9hw\" (UID: \"22b20782-77b0-43d9-a5ec-de471c3bdf2a\") " pod="kube-system/cilium-ck9hw" Sep 16 04:27:02.551769 kubelet[2759]: I0916 04:27:02.551734 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/22b20782-77b0-43d9-a5ec-de471c3bdf2a-cilium-config-path\") pod \"cilium-ck9hw\" (UID: \"22b20782-77b0-43d9-a5ec-de471c3bdf2a\") " pod="kube-system/cilium-ck9hw" Sep 16 04:27:02.551984 kubelet[2759]: I0916 04:27:02.551939 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zg7h\" (UniqueName: \"kubernetes.io/projected/7cf423e6-0cad-4034-adae-a457d1c5fdcf-kube-api-access-6zg7h\") pod \"kube-proxy-ptb9p\" (UID: \"7cf423e6-0cad-4034-adae-a457d1c5fdcf\") " pod="kube-system/kube-proxy-ptb9p" Sep 16 04:27:02.552062 kubelet[2759]: I0916 04:27:02.552050 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/22b20782-77b0-43d9-a5ec-de471c3bdf2a-cilium-run\") pod \"cilium-ck9hw\" (UID: \"22b20782-77b0-43d9-a5ec-de471c3bdf2a\") " pod="kube-system/cilium-ck9hw" Sep 16 04:27:02.552182 kubelet[2759]: I0916 04:27:02.552170 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/22b20782-77b0-43d9-a5ec-de471c3bdf2a-bpf-maps\") pod \"cilium-ck9hw\" (UID: \"22b20782-77b0-43d9-a5ec-de471c3bdf2a\") " pod="kube-system/cilium-ck9hw" Sep 16 04:27:02.552319 kubelet[2759]: I0916 04:27:02.552296 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/22b20782-77b0-43d9-a5ec-de471c3bdf2a-xtables-lock\") pod \"cilium-ck9hw\" (UID: \"22b20782-77b0-43d9-a5ec-de471c3bdf2a\") " pod="kube-system/cilium-ck9hw" Sep 16 04:27:02.552438 kubelet[2759]: I0916 04:27:02.552384 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/22b20782-77b0-43d9-a5ec-de471c3bdf2a-host-proc-sys-net\") pod \"cilium-ck9hw\" (UID: \"22b20782-77b0-43d9-a5ec-de471c3bdf2a\") " pod="kube-system/cilium-ck9hw" Sep 16 04:27:02.552438 kubelet[2759]: I0916 04:27:02.552407 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/22b20782-77b0-43d9-a5ec-de471c3bdf2a-hubble-tls\") pod \"cilium-ck9hw\" (UID: \"22b20782-77b0-43d9-a5ec-de471c3bdf2a\") " pod="kube-system/cilium-ck9hw" Sep 16 04:27:02.552438 kubelet[2759]: I0916 04:27:02.552424 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7cf423e6-0cad-4034-adae-a457d1c5fdcf-lib-modules\") pod \"kube-proxy-ptb9p\" (UID: \"7cf423e6-0cad-4034-adae-a457d1c5fdcf\") " pod="kube-system/kube-proxy-ptb9p" Sep 16 04:27:02.552575 kubelet[2759]: I0916 04:27:02.552555 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/22b20782-77b0-43d9-a5ec-de471c3bdf2a-hostproc\") pod \"cilium-ck9hw\" (UID: \"22b20782-77b0-43d9-a5ec-de471c3bdf2a\") " pod="kube-system/cilium-ck9hw" Sep 16 04:27:02.552761 kubelet[2759]: I0916 04:27:02.552677 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/22b20782-77b0-43d9-a5ec-de471c3bdf2a-clustermesh-secrets\") pod \"cilium-ck9hw\" (UID: \"22b20782-77b0-43d9-a5ec-de471c3bdf2a\") " pod="kube-system/cilium-ck9hw" Sep 16 04:27:02.552916 kubelet[2759]: I0916 04:27:02.552856 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/22b20782-77b0-43d9-a5ec-de471c3bdf2a-host-proc-sys-kernel\") pod \"cilium-ck9hw\" (UID: \"22b20782-77b0-43d9-a5ec-de471c3bdf2a\") " pod="kube-system/cilium-ck9hw" Sep 16 04:27:02.669771 systemd[1]: Created slice kubepods-besteffort-pode641ff89_8106_4546_9ca0_351528c24d00.slice - libcontainer container kubepods-besteffort-pode641ff89_8106_4546_9ca0_351528c24d00.slice. 
Sep 16 04:27:02.752110 containerd[1543]: time="2025-09-16T04:27:02.752022720Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ptb9p,Uid:7cf423e6-0cad-4034-adae-a457d1c5fdcf,Namespace:kube-system,Attempt:0,}" Sep 16 04:27:02.755410 kubelet[2759]: I0916 04:27:02.755329 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e641ff89-8106-4546-9ca0-351528c24d00-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-xrvb2\" (UID: \"e641ff89-8106-4546-9ca0-351528c24d00\") " pod="kube-system/cilium-operator-6c4d7847fc-xrvb2" Sep 16 04:27:02.755981 kubelet[2759]: I0916 04:27:02.755901 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4n2t\" (UniqueName: \"kubernetes.io/projected/e641ff89-8106-4546-9ca0-351528c24d00-kube-api-access-t4n2t\") pod \"cilium-operator-6c4d7847fc-xrvb2\" (UID: \"e641ff89-8106-4546-9ca0-351528c24d00\") " pod="kube-system/cilium-operator-6c4d7847fc-xrvb2" Sep 16 04:27:02.777021 containerd[1543]: time="2025-09-16T04:27:02.776948006Z" level=info msg="connecting to shim 3e1e3cf3d4d0cd14eabb2afecf58884d71ce18ce865d66b4ff04ea31e1e382ee" address="unix:///run/containerd/s/5478291a76db79fe5ec53b983092ecc0d0fa3898431c6d5c2384bbcf96bf2492" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:27:02.807110 systemd[1]: Started cri-containerd-3e1e3cf3d4d0cd14eabb2afecf58884d71ce18ce865d66b4ff04ea31e1e382ee.scope - libcontainer container 3e1e3cf3d4d0cd14eabb2afecf58884d71ce18ce865d66b4ff04ea31e1e382ee. Sep 16 04:27:02.844155 containerd[1543]: time="2025-09-16T04:27:02.844087114Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ptb9p,Uid:7cf423e6-0cad-4034-adae-a457d1c5fdcf,Namespace:kube-system,Attempt:0,} returns sandbox id \"3e1e3cf3d4d0cd14eabb2afecf58884d71ce18ce865d66b4ff04ea31e1e382ee\"" Sep 16 04:27:02.849088 containerd[1543]: time="2025-09-16T04:27:02.849053291Z" level=info msg="CreateContainer within sandbox \"3e1e3cf3d4d0cd14eabb2afecf58884d71ce18ce865d66b4ff04ea31e1e382ee\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 16 04:27:02.864675 containerd[1543]: time="2025-09-16T04:27:02.864546313Z" level=info msg="Container 1c43f741bfe8803ff2d749603e40d9db37e702fbea9669147463367646ee08fd: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:27:02.875574 containerd[1543]: time="2025-09-16T04:27:02.875470366Z" level=info msg="CreateContainer within sandbox \"3e1e3cf3d4d0cd14eabb2afecf58884d71ce18ce865d66b4ff04ea31e1e382ee\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"1c43f741bfe8803ff2d749603e40d9db37e702fbea9669147463367646ee08fd\"" Sep 16 04:27:02.876779 containerd[1543]: time="2025-09-16T04:27:02.876647989Z" level=info msg="StartContainer for \"1c43f741bfe8803ff2d749603e40d9db37e702fbea9669147463367646ee08fd\"" Sep 16 04:27:02.878542 containerd[1543]: time="2025-09-16T04:27:02.878490545Z" level=info msg="connecting to shim 1c43f741bfe8803ff2d749603e40d9db37e702fbea9669147463367646ee08fd" address="unix:///run/containerd/s/5478291a76db79fe5ec53b983092ecc0d0fa3898431c6d5c2384bbcf96bf2492" protocol=ttrpc version=3 Sep 16 04:27:02.904201 systemd[1]: Started cri-containerd-1c43f741bfe8803ff2d749603e40d9db37e702fbea9669147463367646ee08fd.scope - libcontainer container 1c43f741bfe8803ff2d749603e40d9db37e702fbea9669147463367646ee08fd. 
Sep 16 04:27:02.960999 containerd[1543]: time="2025-09-16T04:27:02.960948392Z" level=info msg="StartContainer for \"1c43f741bfe8803ff2d749603e40d9db37e702fbea9669147463367646ee08fd\" returns successfully" Sep 16 04:27:03.399783 kubelet[2759]: I0916 04:27:03.398624 2759 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-ptb9p" podStartSLOduration=1.3986037009999999 podStartE2EDuration="1.398603701s" podCreationTimestamp="2025-09-16 04:27:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 04:27:03.398327695 +0000 UTC m=+8.217445752" watchObservedRunningTime="2025-09-16 04:27:03.398603701 +0000 UTC m=+8.217721758" Sep 16 04:27:03.655893 kubelet[2759]: E0916 04:27:03.655338 2759 secret.go:189] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition Sep 16 04:27:03.655893 kubelet[2759]: E0916 04:27:03.655467 2759 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/22b20782-77b0-43d9-a5ec-de471c3bdf2a-clustermesh-secrets podName:22b20782-77b0-43d9-a5ec-de471c3bdf2a nodeName:}" failed. No retries permitted until 2025-09-16 04:27:04.155434041 +0000 UTC m=+8.974552138 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/22b20782-77b0-43d9-a5ec-de471c3bdf2a-clustermesh-secrets") pod "cilium-ck9hw" (UID: "22b20782-77b0-43d9-a5ec-de471c3bdf2a") : failed to sync secret cache: timed out waiting for the condition Sep 16 04:27:03.655893 kubelet[2759]: E0916 04:27:03.655505 2759 configmap.go:193] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition Sep 16 04:27:03.655893 kubelet[2759]: E0916 04:27:03.655550 2759 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/22b20782-77b0-43d9-a5ec-de471c3bdf2a-cilium-config-path podName:22b20782-77b0-43d9-a5ec-de471c3bdf2a nodeName:}" failed. No retries permitted until 2025-09-16 04:27:04.155536083 +0000 UTC m=+8.974654180 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/22b20782-77b0-43d9-a5ec-de471c3bdf2a-cilium-config-path") pod "cilium-ck9hw" (UID: "22b20782-77b0-43d9-a5ec-de471c3bdf2a") : failed to sync configmap cache: timed out waiting for the condition Sep 16 04:27:03.687497 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2017191281.mount: Deactivated successfully. Sep 16 04:27:03.874430 containerd[1543]: time="2025-09-16T04:27:03.874377853Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-xrvb2,Uid:e641ff89-8106-4546-9ca0-351528c24d00,Namespace:kube-system,Attempt:0,}" Sep 16 04:27:03.897951 containerd[1543]: time="2025-09-16T04:27:03.897908266Z" level=info msg="connecting to shim 4791e40dca24d0b745a47ee23bf4e9c91c20d768ee592161dcc135ab82e3d4c8" address="unix:///run/containerd/s/3964493ba6622abfe41d932c8f43813d961255881b52b451e4999e807ac3a4b7" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:27:03.920984 systemd[1]: Started cri-containerd-4791e40dca24d0b745a47ee23bf4e9c91c20d768ee592161dcc135ab82e3d4c8.scope - libcontainer container 4791e40dca24d0b745a47ee23bf4e9c91c20d768ee592161dcc135ab82e3d4c8. 
Sep 16 04:27:03.962135 containerd[1543]: time="2025-09-16T04:27:03.962053340Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-xrvb2,Uid:e641ff89-8106-4546-9ca0-351528c24d00,Namespace:kube-system,Attempt:0,} returns sandbox id \"4791e40dca24d0b745a47ee23bf4e9c91c20d768ee592161dcc135ab82e3d4c8\"" Sep 16 04:27:03.965851 containerd[1543]: time="2025-09-16T04:27:03.965803412Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 16 04:27:04.264049 containerd[1543]: time="2025-09-16T04:27:04.263990286Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ck9hw,Uid:22b20782-77b0-43d9-a5ec-de471c3bdf2a,Namespace:kube-system,Attempt:0,}" Sep 16 04:27:04.295090 containerd[1543]: time="2025-09-16T04:27:04.294974875Z" level=info msg="connecting to shim 6542c9d2c0678b2c4a55eb0926bec8cede83e45cd96ef0d523b2921c2c3d9062" address="unix:///run/containerd/s/8310a8377c04cc1314e0d0eb17d9a20ed9097371e9d9335f2ed6c8d19243f737" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:27:04.319082 systemd[1]: Started cri-containerd-6542c9d2c0678b2c4a55eb0926bec8cede83e45cd96ef0d523b2921c2c3d9062.scope - libcontainer container 6542c9d2c0678b2c4a55eb0926bec8cede83e45cd96ef0d523b2921c2c3d9062. Sep 16 04:27:04.349447 containerd[1543]: time="2025-09-16T04:27:04.349397069Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ck9hw,Uid:22b20782-77b0-43d9-a5ec-de471c3bdf2a,Namespace:kube-system,Attempt:0,} returns sandbox id \"6542c9d2c0678b2c4a55eb0926bec8cede83e45cd96ef0d523b2921c2c3d9062\"" Sep 16 04:27:06.727185 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3635324832.mount: Deactivated successfully. Sep 16 04:27:11.843862 containerd[1543]: time="2025-09-16T04:27:11.843662969Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:27:11.845831 containerd[1543]: time="2025-09-16T04:27:11.845749286Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Sep 16 04:27:11.846464 containerd[1543]: time="2025-09-16T04:27:11.846385257Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:27:11.848203 containerd[1543]: time="2025-09-16T04:27:11.848156689Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 7.882136712s" Sep 16 04:27:11.848203 containerd[1543]: time="2025-09-16T04:27:11.848192449Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Sep 16 04:27:11.850633 containerd[1543]: time="2025-09-16T04:27:11.849490352Z" level=info msg="PullImage 
\"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 16 04:27:11.853553 containerd[1543]: time="2025-09-16T04:27:11.853325580Z" level=info msg="CreateContainer within sandbox \"4791e40dca24d0b745a47ee23bf4e9c91c20d768ee592161dcc135ab82e3d4c8\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 16 04:27:11.865790 containerd[1543]: time="2025-09-16T04:27:11.864556499Z" level=info msg="Container 8918cd11c025fa3b7e0483898c1d829046c5938140e48874405f2b8f3784f0f8: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:27:11.875902 containerd[1543]: time="2025-09-16T04:27:11.875829059Z" level=info msg="CreateContainer within sandbox \"4791e40dca24d0b745a47ee23bf4e9c91c20d768ee592161dcc135ab82e3d4c8\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"8918cd11c025fa3b7e0483898c1d829046c5938140e48874405f2b8f3784f0f8\"" Sep 16 04:27:11.876716 containerd[1543]: time="2025-09-16T04:27:11.876644313Z" level=info msg="StartContainer for \"8918cd11c025fa3b7e0483898c1d829046c5938140e48874405f2b8f3784f0f8\"" Sep 16 04:27:11.879029 containerd[1543]: time="2025-09-16T04:27:11.878984235Z" level=info msg="connecting to shim 8918cd11c025fa3b7e0483898c1d829046c5938140e48874405f2b8f3784f0f8" address="unix:///run/containerd/s/3964493ba6622abfe41d932c8f43813d961255881b52b451e4999e807ac3a4b7" protocol=ttrpc version=3 Sep 16 04:27:11.907081 systemd[1]: Started cri-containerd-8918cd11c025fa3b7e0483898c1d829046c5938140e48874405f2b8f3784f0f8.scope - libcontainer container 8918cd11c025fa3b7e0483898c1d829046c5938140e48874405f2b8f3784f0f8. Sep 16 04:27:11.941836 containerd[1543]: time="2025-09-16T04:27:11.941781987Z" level=info msg="StartContainer for \"8918cd11c025fa3b7e0483898c1d829046c5938140e48874405f2b8f3784f0f8\" returns successfully" Sep 16 04:27:12.438741 kubelet[2759]: I0916 04:27:12.438505 2759 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-xrvb2" podStartSLOduration=2.553101308 podStartE2EDuration="10.438490881s" podCreationTimestamp="2025-09-16 04:27:02 +0000 UTC" firstStartedPulling="2025-09-16 04:27:03.963825774 +0000 UTC m=+8.782943831" lastFinishedPulling="2025-09-16 04:27:11.849215227 +0000 UTC m=+16.668333404" observedRunningTime="2025-09-16 04:27:12.438141915 +0000 UTC m=+17.257259972" watchObservedRunningTime="2025-09-16 04:27:12.438490881 +0000 UTC m=+17.257608938" Sep 16 04:27:25.358831 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount103984114.mount: Deactivated successfully. 
Sep 16 04:27:29.407497 containerd[1543]: time="2025-09-16T04:27:29.407419143Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:27:29.408807 containerd[1543]: time="2025-09-16T04:27:29.408728954Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Sep 16 04:27:29.409678 containerd[1543]: time="2025-09-16T04:27:29.409633081Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:27:29.412276 containerd[1543]: time="2025-09-16T04:27:29.412229661Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 17.562612667s" Sep 16 04:27:29.412440 containerd[1543]: time="2025-09-16T04:27:29.412363302Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Sep 16 04:27:29.414982 containerd[1543]: time="2025-09-16T04:27:29.414936522Z" level=info msg="CreateContainer within sandbox \"6542c9d2c0678b2c4a55eb0926bec8cede83e45cd96ef0d523b2921c2c3d9062\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 16 04:27:29.425309 containerd[1543]: time="2025-09-16T04:27:29.422701143Z" level=info msg="Container 26c3ad314465a7e82e58bd9e4be70bda75dc8a216ce45af86f307ad8518dc083: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:27:29.431182 containerd[1543]: time="2025-09-16T04:27:29.431137809Z" level=info msg="CreateContainer within sandbox \"6542c9d2c0678b2c4a55eb0926bec8cede83e45cd96ef0d523b2921c2c3d9062\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"26c3ad314465a7e82e58bd9e4be70bda75dc8a216ce45af86f307ad8518dc083\"" Sep 16 04:27:29.432109 containerd[1543]: time="2025-09-16T04:27:29.432074056Z" level=info msg="StartContainer for \"26c3ad314465a7e82e58bd9e4be70bda75dc8a216ce45af86f307ad8518dc083\"" Sep 16 04:27:29.434734 containerd[1543]: time="2025-09-16T04:27:29.434705157Z" level=info msg="connecting to shim 26c3ad314465a7e82e58bd9e4be70bda75dc8a216ce45af86f307ad8518dc083" address="unix:///run/containerd/s/8310a8377c04cc1314e0d0eb17d9a20ed9097371e9d9335f2ed6c8d19243f737" protocol=ttrpc version=3 Sep 16 04:27:29.458097 systemd[1]: Started cri-containerd-26c3ad314465a7e82e58bd9e4be70bda75dc8a216ce45af86f307ad8518dc083.scope - libcontainer container 26c3ad314465a7e82e58bd9e4be70bda75dc8a216ce45af86f307ad8518dc083. Sep 16 04:27:29.498270 containerd[1543]: time="2025-09-16T04:27:29.498220215Z" level=info msg="StartContainer for \"26c3ad314465a7e82e58bd9e4be70bda75dc8a216ce45af86f307ad8518dc083\" returns successfully" Sep 16 04:27:29.515667 systemd[1]: cri-containerd-26c3ad314465a7e82e58bd9e4be70bda75dc8a216ce45af86f307ad8518dc083.scope: Deactivated successfully. 
Sep 16 04:27:29.519637 containerd[1543]: time="2025-09-16T04:27:29.517655087Z" level=info msg="received exit event container_id:\"26c3ad314465a7e82e58bd9e4be70bda75dc8a216ce45af86f307ad8518dc083\" id:\"26c3ad314465a7e82e58bd9e4be70bda75dc8a216ce45af86f307ad8518dc083\" pid:3220 exited_at:{seconds:1757996849 nanos:515878633}" Sep 16 04:27:29.519637 containerd[1543]: time="2025-09-16T04:27:29.518147531Z" level=info msg="TaskExit event in podsandbox handler container_id:\"26c3ad314465a7e82e58bd9e4be70bda75dc8a216ce45af86f307ad8518dc083\" id:\"26c3ad314465a7e82e58bd9e4be70bda75dc8a216ce45af86f307ad8518dc083\" pid:3220 exited_at:{seconds:1757996849 nanos:515878633}" Sep 16 04:27:29.548303 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-26c3ad314465a7e82e58bd9e4be70bda75dc8a216ce45af86f307ad8518dc083-rootfs.mount: Deactivated successfully. Sep 16 04:27:30.467875 containerd[1543]: time="2025-09-16T04:27:30.467793982Z" level=info msg="CreateContainer within sandbox \"6542c9d2c0678b2c4a55eb0926bec8cede83e45cd96ef0d523b2921c2c3d9062\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 16 04:27:30.488953 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2369218369.mount: Deactivated successfully. Sep 16 04:27:30.492573 containerd[1543]: time="2025-09-16T04:27:30.492338299Z" level=info msg="Container 8eb60dd0af894d371d1d55cec279aaab375ada63989f51c128e6a50646d14442: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:27:30.506392 containerd[1543]: time="2025-09-16T04:27:30.506350891Z" level=info msg="CreateContainer within sandbox \"6542c9d2c0678b2c4a55eb0926bec8cede83e45cd96ef0d523b2921c2c3d9062\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"8eb60dd0af894d371d1d55cec279aaab375ada63989f51c128e6a50646d14442\"" Sep 16 04:27:30.508344 containerd[1543]: time="2025-09-16T04:27:30.507497980Z" level=info msg="StartContainer for \"8eb60dd0af894d371d1d55cec279aaab375ada63989f51c128e6a50646d14442\"" Sep 16 04:27:30.508798 containerd[1543]: time="2025-09-16T04:27:30.508761791Z" level=info msg="connecting to shim 8eb60dd0af894d371d1d55cec279aaab375ada63989f51c128e6a50646d14442" address="unix:///run/containerd/s/8310a8377c04cc1314e0d0eb17d9a20ed9097371e9d9335f2ed6c8d19243f737" protocol=ttrpc version=3 Sep 16 04:27:30.540985 systemd[1]: Started cri-containerd-8eb60dd0af894d371d1d55cec279aaab375ada63989f51c128e6a50646d14442.scope - libcontainer container 8eb60dd0af894d371d1d55cec279aaab375ada63989f51c128e6a50646d14442. Sep 16 04:27:30.576486 containerd[1543]: time="2025-09-16T04:27:30.576437134Z" level=info msg="StartContainer for \"8eb60dd0af894d371d1d55cec279aaab375ada63989f51c128e6a50646d14442\" returns successfully" Sep 16 04:27:30.596261 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 16 04:27:30.596512 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 16 04:27:30.598835 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Sep 16 04:27:30.602405 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 16 04:27:30.603224 containerd[1543]: time="2025-09-16T04:27:30.603129828Z" level=info msg="received exit event container_id:\"8eb60dd0af894d371d1d55cec279aaab375ada63989f51c128e6a50646d14442\" id:\"8eb60dd0af894d371d1d55cec279aaab375ada63989f51c128e6a50646d14442\" pid:3267 exited_at:{seconds:1757996850 nanos:601450495}" Sep 16 04:27:30.606827 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. 
Sep 16 04:27:30.607584 systemd[1]: cri-containerd-8eb60dd0af894d371d1d55cec279aaab375ada63989f51c128e6a50646d14442.scope: Deactivated successfully. Sep 16 04:27:30.608023 containerd[1543]: time="2025-09-16T04:27:30.607154101Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8eb60dd0af894d371d1d55cec279aaab375ada63989f51c128e6a50646d14442\" id:\"8eb60dd0af894d371d1d55cec279aaab375ada63989f51c128e6a50646d14442\" pid:3267 exited_at:{seconds:1757996850 nanos:601450495}" Sep 16 04:27:30.645719 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 16 04:27:31.479993 containerd[1543]: time="2025-09-16T04:27:31.479914361Z" level=info msg="CreateContainer within sandbox \"6542c9d2c0678b2c4a55eb0926bec8cede83e45cd96ef0d523b2921c2c3d9062\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 16 04:27:31.485094 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8eb60dd0af894d371d1d55cec279aaab375ada63989f51c128e6a50646d14442-rootfs.mount: Deactivated successfully. Sep 16 04:27:31.497810 containerd[1543]: time="2025-09-16T04:27:31.495743972Z" level=info msg="Container c7a2c37e12dedb3285087599d3770c2d0b64bbc0944a934c1e3dade51f085ea8: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:27:31.511126 containerd[1543]: time="2025-09-16T04:27:31.511080138Z" level=info msg="CreateContainer within sandbox \"6542c9d2c0678b2c4a55eb0926bec8cede83e45cd96ef0d523b2921c2c3d9062\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"c7a2c37e12dedb3285087599d3770c2d0b64bbc0944a934c1e3dade51f085ea8\"" Sep 16 04:27:31.512056 containerd[1543]: time="2025-09-16T04:27:31.512014465Z" level=info msg="StartContainer for \"c7a2c37e12dedb3285087599d3770c2d0b64bbc0944a934c1e3dade51f085ea8\"" Sep 16 04:27:31.514888 containerd[1543]: time="2025-09-16T04:27:31.514749568Z" level=info msg="connecting to shim c7a2c37e12dedb3285087599d3770c2d0b64bbc0944a934c1e3dade51f085ea8" address="unix:///run/containerd/s/8310a8377c04cc1314e0d0eb17d9a20ed9097371e9d9335f2ed6c8d19243f737" protocol=ttrpc version=3 Sep 16 04:27:31.549190 systemd[1]: Started cri-containerd-c7a2c37e12dedb3285087599d3770c2d0b64bbc0944a934c1e3dade51f085ea8.scope - libcontainer container c7a2c37e12dedb3285087599d3770c2d0b64bbc0944a934c1e3dade51f085ea8. Sep 16 04:27:31.606473 containerd[1543]: time="2025-09-16T04:27:31.606411162Z" level=info msg="StartContainer for \"c7a2c37e12dedb3285087599d3770c2d0b64bbc0944a934c1e3dade51f085ea8\" returns successfully" Sep 16 04:27:31.612087 systemd[1]: cri-containerd-c7a2c37e12dedb3285087599d3770c2d0b64bbc0944a934c1e3dade51f085ea8.scope: Deactivated successfully. Sep 16 04:27:31.616483 containerd[1543]: time="2025-09-16T04:27:31.616384324Z" level=info msg="received exit event container_id:\"c7a2c37e12dedb3285087599d3770c2d0b64bbc0944a934c1e3dade51f085ea8\" id:\"c7a2c37e12dedb3285087599d3770c2d0b64bbc0944a934c1e3dade51f085ea8\" pid:3314 exited_at:{seconds:1757996851 nanos:615680678}" Sep 16 04:27:31.617623 containerd[1543]: time="2025-09-16T04:27:31.617575493Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c7a2c37e12dedb3285087599d3770c2d0b64bbc0944a934c1e3dade51f085ea8\" id:\"c7a2c37e12dedb3285087599d3770c2d0b64bbc0944a934c1e3dade51f085ea8\" pid:3314 exited_at:{seconds:1757996851 nanos:615680678}" Sep 16 04:27:31.642984 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c7a2c37e12dedb3285087599d3770c2d0b64bbc0944a934c1e3dade51f085ea8-rootfs.mount: Deactivated successfully. 
Sep 16 04:27:32.486028 containerd[1543]: time="2025-09-16T04:27:32.485976404Z" level=info msg="CreateContainer within sandbox \"6542c9d2c0678b2c4a55eb0926bec8cede83e45cd96ef0d523b2921c2c3d9062\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 16 04:27:32.496339 containerd[1543]: time="2025-09-16T04:27:32.496300691Z" level=info msg="Container d1dd58914350a2f858c9fd1cf767c251b0edc2e94d63ce9e9bbb2b0fe07d95c9: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:27:32.508788 containerd[1543]: time="2025-09-16T04:27:32.507375624Z" level=info msg="CreateContainer within sandbox \"6542c9d2c0678b2c4a55eb0926bec8cede83e45cd96ef0d523b2921c2c3d9062\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"d1dd58914350a2f858c9fd1cf767c251b0edc2e94d63ce9e9bbb2b0fe07d95c9\"" Sep 16 04:27:32.510008 containerd[1543]: time="2025-09-16T04:27:32.509894206Z" level=info msg="StartContainer for \"d1dd58914350a2f858c9fd1cf767c251b0edc2e94d63ce9e9bbb2b0fe07d95c9\"" Sep 16 04:27:32.511955 containerd[1543]: time="2025-09-16T04:27:32.511926343Z" level=info msg="connecting to shim d1dd58914350a2f858c9fd1cf767c251b0edc2e94d63ce9e9bbb2b0fe07d95c9" address="unix:///run/containerd/s/8310a8377c04cc1314e0d0eb17d9a20ed9097371e9d9335f2ed6c8d19243f737" protocol=ttrpc version=3 Sep 16 04:27:32.540956 systemd[1]: Started cri-containerd-d1dd58914350a2f858c9fd1cf767c251b0edc2e94d63ce9e9bbb2b0fe07d95c9.scope - libcontainer container d1dd58914350a2f858c9fd1cf767c251b0edc2e94d63ce9e9bbb2b0fe07d95c9. Sep 16 04:27:32.569654 systemd[1]: cri-containerd-d1dd58914350a2f858c9fd1cf767c251b0edc2e94d63ce9e9bbb2b0fe07d95c9.scope: Deactivated successfully. Sep 16 04:27:32.572118 containerd[1543]: time="2025-09-16T04:27:32.571331962Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d1dd58914350a2f858c9fd1cf767c251b0edc2e94d63ce9e9bbb2b0fe07d95c9\" id:\"d1dd58914350a2f858c9fd1cf767c251b0edc2e94d63ce9e9bbb2b0fe07d95c9\" pid:3353 exited_at:{seconds:1757996852 nanos:570941319}" Sep 16 04:27:32.574217 containerd[1543]: time="2025-09-16T04:27:32.573634982Z" level=info msg="received exit event container_id:\"d1dd58914350a2f858c9fd1cf767c251b0edc2e94d63ce9e9bbb2b0fe07d95c9\" id:\"d1dd58914350a2f858c9fd1cf767c251b0edc2e94d63ce9e9bbb2b0fe07d95c9\" pid:3353 exited_at:{seconds:1757996852 nanos:570941319}" Sep 16 04:27:32.579667 containerd[1543]: time="2025-09-16T04:27:32.579635312Z" level=info msg="StartContainer for \"d1dd58914350a2f858c9fd1cf767c251b0edc2e94d63ce9e9bbb2b0fe07d95c9\" returns successfully" Sep 16 04:27:32.599594 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d1dd58914350a2f858c9fd1cf767c251b0edc2e94d63ce9e9bbb2b0fe07d95c9-rootfs.mount: Deactivated successfully. 
Sep 16 04:27:33.493134 containerd[1543]: time="2025-09-16T04:27:33.493082681Z" level=info msg="CreateContainer within sandbox \"6542c9d2c0678b2c4a55eb0926bec8cede83e45cd96ef0d523b2921c2c3d9062\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 16 04:27:33.512857 containerd[1543]: time="2025-09-16T04:27:33.511597520Z" level=info msg="Container d7c2f361ed818f4929a50c5b3b1f2163971d0bf08e2168cfdb55b3bfc274fc1d: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:27:33.519235 containerd[1543]: time="2025-09-16T04:27:33.519193705Z" level=info msg="CreateContainer within sandbox \"6542c9d2c0678b2c4a55eb0926bec8cede83e45cd96ef0d523b2921c2c3d9062\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"d7c2f361ed818f4929a50c5b3b1f2163971d0bf08e2168cfdb55b3bfc274fc1d\"" Sep 16 04:27:33.521187 containerd[1543]: time="2025-09-16T04:27:33.521161562Z" level=info msg="StartContainer for \"d7c2f361ed818f4929a50c5b3b1f2163971d0bf08e2168cfdb55b3bfc274fc1d\"" Sep 16 04:27:33.524069 containerd[1543]: time="2025-09-16T04:27:33.524043507Z" level=info msg="connecting to shim d7c2f361ed818f4929a50c5b3b1f2163971d0bf08e2168cfdb55b3bfc274fc1d" address="unix:///run/containerd/s/8310a8377c04cc1314e0d0eb17d9a20ed9097371e9d9335f2ed6c8d19243f737" protocol=ttrpc version=3 Sep 16 04:27:33.565999 systemd[1]: Started cri-containerd-d7c2f361ed818f4929a50c5b3b1f2163971d0bf08e2168cfdb55b3bfc274fc1d.scope - libcontainer container d7c2f361ed818f4929a50c5b3b1f2163971d0bf08e2168cfdb55b3bfc274fc1d. Sep 16 04:27:33.629900 containerd[1543]: time="2025-09-16T04:27:33.629679894Z" level=info msg="StartContainer for \"d7c2f361ed818f4929a50c5b3b1f2163971d0bf08e2168cfdb55b3bfc274fc1d\" returns successfully" Sep 16 04:27:33.702351 containerd[1543]: time="2025-09-16T04:27:33.702282918Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d7c2f361ed818f4929a50c5b3b1f2163971d0bf08e2168cfdb55b3bfc274fc1d\" id:\"03145fefdc2a05dd842adf4bcc1d705c83c70a44f3ebd291f7d0c74d9b4bb09c\" pid:3425 exited_at:{seconds:1757996853 nanos:701940635}" Sep 16 04:27:33.773183 kubelet[2759]: I0916 04:27:33.773087 2759 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 16 04:27:33.815422 systemd[1]: Created slice kubepods-burstable-pod786e627c_4fad_4f3c_b296_cd9852333c74.slice - libcontainer container kubepods-burstable-pod786e627c_4fad_4f3c_b296_cd9852333c74.slice. Sep 16 04:27:33.823309 systemd[1]: Created slice kubepods-burstable-pod3d36bfd1_a8fe_48fb_8e64_b83a960860b0.slice - libcontainer container kubepods-burstable-pod3d36bfd1_a8fe_48fb_8e64_b83a960860b0.slice. 
Sep 16 04:27:33.883771 kubelet[2759]: I0916 04:27:33.882782 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hf8xw\" (UniqueName: \"kubernetes.io/projected/3d36bfd1-a8fe-48fb-8e64-b83a960860b0-kube-api-access-hf8xw\") pod \"coredns-668d6bf9bc-mv2gp\" (UID: \"3d36bfd1-a8fe-48fb-8e64-b83a960860b0\") " pod="kube-system/coredns-668d6bf9bc-mv2gp" Sep 16 04:27:33.884035 kubelet[2759]: I0916 04:27:33.884014 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/786e627c-4fad-4f3c-b296-cd9852333c74-config-volume\") pod \"coredns-668d6bf9bc-xfn8t\" (UID: \"786e627c-4fad-4f3c-b296-cd9852333c74\") " pod="kube-system/coredns-668d6bf9bc-xfn8t" Sep 16 04:27:33.884184 kubelet[2759]: I0916 04:27:33.884104 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3d36bfd1-a8fe-48fb-8e64-b83a960860b0-config-volume\") pod \"coredns-668d6bf9bc-mv2gp\" (UID: \"3d36bfd1-a8fe-48fb-8e64-b83a960860b0\") " pod="kube-system/coredns-668d6bf9bc-mv2gp" Sep 16 04:27:33.884231 kubelet[2759]: I0916 04:27:33.884139 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88wlq\" (UniqueName: \"kubernetes.io/projected/786e627c-4fad-4f3c-b296-cd9852333c74-kube-api-access-88wlq\") pod \"coredns-668d6bf9bc-xfn8t\" (UID: \"786e627c-4fad-4f3c-b296-cd9852333c74\") " pod="kube-system/coredns-668d6bf9bc-xfn8t" Sep 16 04:27:34.121882 containerd[1543]: time="2025-09-16T04:27:34.121469699Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-xfn8t,Uid:786e627c-4fad-4f3c-b296-cd9852333c74,Namespace:kube-system,Attempt:0,}" Sep 16 04:27:34.131122 containerd[1543]: time="2025-09-16T04:27:34.131074064Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-mv2gp,Uid:3d36bfd1-a8fe-48fb-8e64-b83a960860b0,Namespace:kube-system,Attempt:0,}" Sep 16 04:27:34.529325 kubelet[2759]: I0916 04:27:34.529192 2759 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-ck9hw" podStartSLOduration=7.4667101 podStartE2EDuration="32.529174512s" podCreationTimestamp="2025-09-16 04:27:02 +0000 UTC" firstStartedPulling="2025-09-16 04:27:04.350949938 +0000 UTC m=+9.170067995" lastFinishedPulling="2025-09-16 04:27:29.41341435 +0000 UTC m=+34.232532407" observedRunningTime="2025-09-16 04:27:34.527017453 +0000 UTC m=+39.346135590" watchObservedRunningTime="2025-09-16 04:27:34.529174512 +0000 UTC m=+39.348292569" Sep 16 04:27:35.818927 systemd-networkd[1409]: cilium_host: Link UP Sep 16 04:27:35.819496 systemd-networkd[1409]: cilium_net: Link UP Sep 16 04:27:35.819657 systemd-networkd[1409]: cilium_net: Gained carrier Sep 16 04:27:35.821106 systemd-networkd[1409]: cilium_host: Gained carrier Sep 16 04:27:35.933079 systemd-networkd[1409]: cilium_vxlan: Link UP Sep 16 04:27:35.933097 systemd-networkd[1409]: cilium_vxlan: Gained carrier Sep 16 04:27:36.232919 kernel: NET: Registered PF_ALG protocol family Sep 16 04:27:36.635028 systemd-networkd[1409]: cilium_net: Gained IPv6LL Sep 16 04:27:36.698943 systemd-networkd[1409]: cilium_host: Gained IPv6LL Sep 16 04:27:36.930856 systemd-networkd[1409]: lxc_health: Link UP Sep 16 04:27:36.931172 systemd-networkd[1409]: lxc_health: Gained carrier Sep 16 04:27:37.178138 kernel: eth0: renamed from tmpc2fcd Sep 16 
04:27:37.178544 systemd-networkd[1409]: lxc14a233c3529b: Link UP Sep 16 04:27:37.180726 systemd-networkd[1409]: lxc14a233c3529b: Gained carrier Sep 16 04:27:37.210795 systemd-networkd[1409]: lxc5ade92bf7f36: Link UP Sep 16 04:27:37.213780 kernel: eth0: renamed from tmp02b6a Sep 16 04:27:37.217870 systemd-networkd[1409]: lxc5ade92bf7f36: Gained carrier Sep 16 04:27:37.723959 systemd-networkd[1409]: cilium_vxlan: Gained IPv6LL Sep 16 04:27:38.364253 systemd-networkd[1409]: lxc_health: Gained IPv6LL Sep 16 04:27:38.427011 systemd-networkd[1409]: lxc5ade92bf7f36: Gained IPv6LL Sep 16 04:27:38.491240 systemd-networkd[1409]: lxc14a233c3529b: Gained IPv6LL Sep 16 04:27:41.221612 containerd[1543]: time="2025-09-16T04:27:41.221519449Z" level=info msg="connecting to shim c2fcdb64adb78db969eca27849850091a2b8e60dffcad7354c929402df15e05f" address="unix:///run/containerd/s/b374f3af720f1528bb67f00c8e719dd1e17e70ea48d06cfda2fcb6c49b6ac4c1" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:27:41.245026 containerd[1543]: time="2025-09-16T04:27:41.243976350Z" level=info msg="connecting to shim 02b6a6665d7fe6657f9da84a3329b530286d945ba4b960ac06c3844a8f856a00" address="unix:///run/containerd/s/c94ef4ffc9973d3b4d6a83b80f4d6e1dfd7b4af0880e22ec13e71659f5f18853" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:27:41.270012 systemd[1]: Started cri-containerd-c2fcdb64adb78db969eca27849850091a2b8e60dffcad7354c929402df15e05f.scope - libcontainer container c2fcdb64adb78db969eca27849850091a2b8e60dffcad7354c929402df15e05f. Sep 16 04:27:41.303376 systemd[1]: Started cri-containerd-02b6a6665d7fe6657f9da84a3329b530286d945ba4b960ac06c3844a8f856a00.scope - libcontainer container 02b6a6665d7fe6657f9da84a3329b530286d945ba4b960ac06c3844a8f856a00. Sep 16 04:27:41.358035 containerd[1543]: time="2025-09-16T04:27:41.357944151Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-xfn8t,Uid:786e627c-4fad-4f3c-b296-cd9852333c74,Namespace:kube-system,Attempt:0,} returns sandbox id \"c2fcdb64adb78db969eca27849850091a2b8e60dffcad7354c929402df15e05f\"" Sep 16 04:27:41.364097 containerd[1543]: time="2025-09-16T04:27:41.363947650Z" level=info msg="CreateContainer within sandbox \"c2fcdb64adb78db969eca27849850091a2b8e60dffcad7354c929402df15e05f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 16 04:27:41.382716 containerd[1543]: time="2025-09-16T04:27:41.382008588Z" level=info msg="Container b72e7c62cd880b53babfb75dfa9ba96714adb73d0ebf840fb2a8812c36156a87: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:27:41.388798 containerd[1543]: time="2025-09-16T04:27:41.388739734Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-mv2gp,Uid:3d36bfd1-a8fe-48fb-8e64-b83a960860b0,Namespace:kube-system,Attempt:0,} returns sandbox id \"02b6a6665d7fe6657f9da84a3329b530286d945ba4b960ac06c3844a8f856a00\"" Sep 16 04:27:41.392832 containerd[1543]: time="2025-09-16T04:27:41.392782614Z" level=info msg="CreateContainer within sandbox \"c2fcdb64adb78db969eca27849850091a2b8e60dffcad7354c929402df15e05f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b72e7c62cd880b53babfb75dfa9ba96714adb73d0ebf840fb2a8812c36156a87\"" Sep 16 04:27:41.393560 containerd[1543]: time="2025-09-16T04:27:41.393102897Z" level=info msg="CreateContainer within sandbox \"02b6a6665d7fe6657f9da84a3329b530286d945ba4b960ac06c3844a8f856a00\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 16 04:27:41.394464 containerd[1543]: time="2025-09-16T04:27:41.394402790Z" level=info 
msg="StartContainer for \"b72e7c62cd880b53babfb75dfa9ba96714adb73d0ebf840fb2a8812c36156a87\"" Sep 16 04:27:41.395646 containerd[1543]: time="2025-09-16T04:27:41.395596362Z" level=info msg="connecting to shim b72e7c62cd880b53babfb75dfa9ba96714adb73d0ebf840fb2a8812c36156a87" address="unix:///run/containerd/s/b374f3af720f1528bb67f00c8e719dd1e17e70ea48d06cfda2fcb6c49b6ac4c1" protocol=ttrpc version=3 Sep 16 04:27:41.406670 containerd[1543]: time="2025-09-16T04:27:41.406627670Z" level=info msg="Container 1bd42dcfbb4c9bf678ec9b0406d52a219b30c541f8f05a653250e441cbee4f27: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:27:41.418620 containerd[1543]: time="2025-09-16T04:27:41.418560708Z" level=info msg="CreateContainer within sandbox \"02b6a6665d7fe6657f9da84a3329b530286d945ba4b960ac06c3844a8f856a00\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1bd42dcfbb4c9bf678ec9b0406d52a219b30c541f8f05a653250e441cbee4f27\"" Sep 16 04:27:41.420469 containerd[1543]: time="2025-09-16T04:27:41.420403086Z" level=info msg="StartContainer for \"1bd42dcfbb4c9bf678ec9b0406d52a219b30c541f8f05a653250e441cbee4f27\"" Sep 16 04:27:41.422965 containerd[1543]: time="2025-09-16T04:27:41.422843550Z" level=info msg="connecting to shim 1bd42dcfbb4c9bf678ec9b0406d52a219b30c541f8f05a653250e441cbee4f27" address="unix:///run/containerd/s/c94ef4ffc9973d3b4d6a83b80f4d6e1dfd7b4af0880e22ec13e71659f5f18853" protocol=ttrpc version=3 Sep 16 04:27:41.424016 systemd[1]: Started cri-containerd-b72e7c62cd880b53babfb75dfa9ba96714adb73d0ebf840fb2a8812c36156a87.scope - libcontainer container b72e7c62cd880b53babfb75dfa9ba96714adb73d0ebf840fb2a8812c36156a87. Sep 16 04:27:41.457039 systemd[1]: Started cri-containerd-1bd42dcfbb4c9bf678ec9b0406d52a219b30c541f8f05a653250e441cbee4f27.scope - libcontainer container 1bd42dcfbb4c9bf678ec9b0406d52a219b30c541f8f05a653250e441cbee4f27. Sep 16 04:27:41.488714 containerd[1543]: time="2025-09-16T04:27:41.487934510Z" level=info msg="StartContainer for \"b72e7c62cd880b53babfb75dfa9ba96714adb73d0ebf840fb2a8812c36156a87\" returns successfully" Sep 16 04:27:41.510126 containerd[1543]: time="2025-09-16T04:27:41.510063728Z" level=info msg="StartContainer for \"1bd42dcfbb4c9bf678ec9b0406d52a219b30c541f8f05a653250e441cbee4f27\" returns successfully" Sep 16 04:27:41.554323 kubelet[2759]: I0916 04:27:41.553728 2759 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-xfn8t" podStartSLOduration=39.553709838 podStartE2EDuration="39.553709838s" podCreationTimestamp="2025-09-16 04:27:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 04:27:41.552059622 +0000 UTC m=+46.371177679" watchObservedRunningTime="2025-09-16 04:27:41.553709838 +0000 UTC m=+46.372827895" Sep 16 04:27:41.570593 kubelet[2759]: I0916 04:27:41.570522 2759 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-mv2gp" podStartSLOduration=39.570501323 podStartE2EDuration="39.570501323s" podCreationTimestamp="2025-09-16 04:27:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 04:27:41.569062989 +0000 UTC m=+46.388181046" watchObservedRunningTime="2025-09-16 04:27:41.570501323 +0000 UTC m=+46.389619420" Sep 16 04:27:42.214033 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4129393053.mount: Deactivated successfully. 
Sep 16 04:29:20.511449 systemd[1]: Started sshd@7-138.199.234.3:22-139.178.89.65:55180.service - OpenSSH per-connection server daemon (139.178.89.65:55180). Sep 16 04:29:21.509525 sshd[4088]: Accepted publickey for core from 139.178.89.65 port 55180 ssh2: RSA SHA256:hnZQROmedaG+reQAaWvmG41QCRiTlF3QrQA4Qzar5jk Sep 16 04:29:21.512192 sshd-session[4088]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:29:21.521156 systemd-logind[1522]: New session 8 of user core. Sep 16 04:29:21.526937 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 16 04:29:22.274679 sshd[4091]: Connection closed by 139.178.89.65 port 55180 Sep 16 04:29:22.275586 sshd-session[4088]: pam_unix(sshd:session): session closed for user core Sep 16 04:29:22.282361 systemd-logind[1522]: Session 8 logged out. Waiting for processes to exit. Sep 16 04:29:22.282624 systemd[1]: sshd@7-138.199.234.3:22-139.178.89.65:55180.service: Deactivated successfully. Sep 16 04:29:22.286791 systemd[1]: session-8.scope: Deactivated successfully. Sep 16 04:29:22.290322 systemd-logind[1522]: Removed session 8. Sep 16 04:29:27.451659 systemd[1]: Started sshd@8-138.199.234.3:22-139.178.89.65:55184.service - OpenSSH per-connection server daemon (139.178.89.65:55184). Sep 16 04:29:28.438786 sshd[4104]: Accepted publickey for core from 139.178.89.65 port 55184 ssh2: RSA SHA256:hnZQROmedaG+reQAaWvmG41QCRiTlF3QrQA4Qzar5jk Sep 16 04:29:28.441031 sshd-session[4104]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:29:28.446530 systemd-logind[1522]: New session 9 of user core. Sep 16 04:29:28.453052 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 16 04:29:29.189388 sshd[4107]: Connection closed by 139.178.89.65 port 55184 Sep 16 04:29:29.190151 sshd-session[4104]: pam_unix(sshd:session): session closed for user core Sep 16 04:29:29.195906 systemd[1]: sshd@8-138.199.234.3:22-139.178.89.65:55184.service: Deactivated successfully. Sep 16 04:29:29.197817 systemd[1]: session-9.scope: Deactivated successfully. Sep 16 04:29:29.200096 systemd-logind[1522]: Session 9 logged out. Waiting for processes to exit. Sep 16 04:29:29.202331 systemd-logind[1522]: Removed session 9. Sep 16 04:29:34.363985 systemd[1]: Started sshd@9-138.199.234.3:22-139.178.89.65:46574.service - OpenSSH per-connection server daemon (139.178.89.65:46574). Sep 16 04:29:35.366063 sshd[4123]: Accepted publickey for core from 139.178.89.65 port 46574 ssh2: RSA SHA256:hnZQROmedaG+reQAaWvmG41QCRiTlF3QrQA4Qzar5jk Sep 16 04:29:35.368068 sshd-session[4123]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:29:35.373499 systemd-logind[1522]: New session 10 of user core. Sep 16 04:29:35.383082 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 16 04:29:36.130098 sshd[4126]: Connection closed by 139.178.89.65 port 46574 Sep 16 04:29:36.131012 sshd-session[4123]: pam_unix(sshd:session): session closed for user core Sep 16 04:29:36.137656 systemd[1]: sshd@9-138.199.234.3:22-139.178.89.65:46574.service: Deactivated successfully. Sep 16 04:29:36.140009 systemd[1]: session-10.scope: Deactivated successfully. Sep 16 04:29:36.141068 systemd-logind[1522]: Session 10 logged out. Waiting for processes to exit. Sep 16 04:29:36.142485 systemd-logind[1522]: Removed session 10. Sep 16 04:29:36.304992 systemd[1]: Started sshd@10-138.199.234.3:22-139.178.89.65:46588.service - OpenSSH per-connection server daemon (139.178.89.65:46588). 
Sep 16 04:29:37.320793 sshd[4139]: Accepted publickey for core from 139.178.89.65 port 46588 ssh2: RSA SHA256:hnZQROmedaG+reQAaWvmG41QCRiTlF3QrQA4Qzar5jk Sep 16 04:29:37.321727 sshd-session[4139]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:29:37.328427 systemd-logind[1522]: New session 11 of user core. Sep 16 04:29:37.340057 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 16 04:29:38.149941 sshd[4142]: Connection closed by 139.178.89.65 port 46588 Sep 16 04:29:38.150538 sshd-session[4139]: pam_unix(sshd:session): session closed for user core Sep 16 04:29:38.156977 systemd[1]: sshd@10-138.199.234.3:22-139.178.89.65:46588.service: Deactivated successfully. Sep 16 04:29:38.160478 systemd[1]: session-11.scope: Deactivated successfully. Sep 16 04:29:38.161661 systemd-logind[1522]: Session 11 logged out. Waiting for processes to exit. Sep 16 04:29:38.163218 systemd-logind[1522]: Removed session 11. Sep 16 04:29:38.336243 systemd[1]: Started sshd@11-138.199.234.3:22-139.178.89.65:46596.service - OpenSSH per-connection server daemon (139.178.89.65:46596). Sep 16 04:29:39.390791 sshd[4153]: Accepted publickey for core from 139.178.89.65 port 46596 ssh2: RSA SHA256:hnZQROmedaG+reQAaWvmG41QCRiTlF3QrQA4Qzar5jk Sep 16 04:29:39.392665 sshd-session[4153]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:29:39.397784 systemd-logind[1522]: New session 12 of user core. Sep 16 04:29:39.406229 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 16 04:29:40.175830 sshd[4156]: Connection closed by 139.178.89.65 port 46596 Sep 16 04:29:40.176666 sshd-session[4153]: pam_unix(sshd:session): session closed for user core Sep 16 04:29:40.181743 systemd[1]: sshd@11-138.199.234.3:22-139.178.89.65:46596.service: Deactivated successfully. Sep 16 04:29:40.184099 systemd[1]: session-12.scope: Deactivated successfully. Sep 16 04:29:40.185920 systemd-logind[1522]: Session 12 logged out. Waiting for processes to exit. Sep 16 04:29:40.188358 systemd-logind[1522]: Removed session 12. Sep 16 04:29:45.353008 systemd[1]: Started sshd@12-138.199.234.3:22-139.178.89.65:36810.service - OpenSSH per-connection server daemon (139.178.89.65:36810). Sep 16 04:29:46.346533 sshd[4168]: Accepted publickey for core from 139.178.89.65 port 36810 ssh2: RSA SHA256:hnZQROmedaG+reQAaWvmG41QCRiTlF3QrQA4Qzar5jk Sep 16 04:29:46.348607 sshd-session[4168]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:29:46.353896 systemd-logind[1522]: New session 13 of user core. Sep 16 04:29:46.364086 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 16 04:29:47.100657 sshd[4171]: Connection closed by 139.178.89.65 port 36810 Sep 16 04:29:47.101567 sshd-session[4168]: pam_unix(sshd:session): session closed for user core Sep 16 04:29:47.107455 systemd[1]: sshd@12-138.199.234.3:22-139.178.89.65:36810.service: Deactivated successfully. Sep 16 04:29:47.109912 systemd[1]: session-13.scope: Deactivated successfully. Sep 16 04:29:47.110943 systemd-logind[1522]: Session 13 logged out. Waiting for processes to exit. Sep 16 04:29:47.112726 systemd-logind[1522]: Removed session 13. Sep 16 04:29:47.274690 systemd[1]: Started sshd@13-138.199.234.3:22-139.178.89.65:36812.service - OpenSSH per-connection server daemon (139.178.89.65:36812). 
Sep 16 04:29:48.285440 sshd[4182]: Accepted publickey for core from 139.178.89.65 port 36812 ssh2: RSA SHA256:hnZQROmedaG+reQAaWvmG41QCRiTlF3QrQA4Qzar5jk Sep 16 04:29:48.288716 sshd-session[4182]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:29:48.295727 systemd-logind[1522]: New session 14 of user core. Sep 16 04:29:48.299975 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 16 04:29:49.082778 sshd[4185]: Connection closed by 139.178.89.65 port 36812 Sep 16 04:29:49.081832 sshd-session[4182]: pam_unix(sshd:session): session closed for user core Sep 16 04:29:49.087646 systemd-logind[1522]: Session 14 logged out. Waiting for processes to exit. Sep 16 04:29:49.087853 systemd[1]: sshd@13-138.199.234.3:22-139.178.89.65:36812.service: Deactivated successfully. Sep 16 04:29:49.090170 systemd[1]: session-14.scope: Deactivated successfully. Sep 16 04:29:49.092561 systemd-logind[1522]: Removed session 14. Sep 16 04:29:49.253105 systemd[1]: Started sshd@14-138.199.234.3:22-139.178.89.65:36822.service - OpenSSH per-connection server daemon (139.178.89.65:36822). Sep 16 04:29:50.242017 sshd[4194]: Accepted publickey for core from 139.178.89.65 port 36822 ssh2: RSA SHA256:hnZQROmedaG+reQAaWvmG41QCRiTlF3QrQA4Qzar5jk Sep 16 04:29:50.244029 sshd-session[4194]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:29:50.249902 systemd-logind[1522]: New session 15 of user core. Sep 16 04:29:50.253998 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 16 04:29:51.542032 sshd[4197]: Connection closed by 139.178.89.65 port 36822 Sep 16 04:29:51.542472 sshd-session[4194]: pam_unix(sshd:session): session closed for user core Sep 16 04:29:51.549103 systemd[1]: sshd@14-138.199.234.3:22-139.178.89.65:36822.service: Deactivated successfully. Sep 16 04:29:51.551842 systemd[1]: session-15.scope: Deactivated successfully. Sep 16 04:29:51.553397 systemd-logind[1522]: Session 15 logged out. Waiting for processes to exit. Sep 16 04:29:51.555362 systemd-logind[1522]: Removed session 15. Sep 16 04:29:51.722612 systemd[1]: Started sshd@15-138.199.234.3:22-139.178.89.65:45640.service - OpenSSH per-connection server daemon (139.178.89.65:45640). Sep 16 04:29:52.732618 sshd[4214]: Accepted publickey for core from 139.178.89.65 port 45640 ssh2: RSA SHA256:hnZQROmedaG+reQAaWvmG41QCRiTlF3QrQA4Qzar5jk Sep 16 04:29:52.735263 sshd-session[4214]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:29:52.740769 systemd-logind[1522]: New session 16 of user core. Sep 16 04:29:52.748957 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 16 04:29:53.613231 sshd[4217]: Connection closed by 139.178.89.65 port 45640 Sep 16 04:29:53.612472 sshd-session[4214]: pam_unix(sshd:session): session closed for user core Sep 16 04:29:53.617569 systemd[1]: sshd@15-138.199.234.3:22-139.178.89.65:45640.service: Deactivated successfully. Sep 16 04:29:53.619689 systemd[1]: session-16.scope: Deactivated successfully. Sep 16 04:29:53.622952 systemd-logind[1522]: Session 16 logged out. Waiting for processes to exit. Sep 16 04:29:53.624724 systemd-logind[1522]: Removed session 16. Sep 16 04:29:53.792354 systemd[1]: Started sshd@16-138.199.234.3:22-139.178.89.65:45652.service - OpenSSH per-connection server daemon (139.178.89.65:45652). 
Sep 16 04:29:54.808445 sshd[4226]: Accepted publickey for core from 139.178.89.65 port 45652 ssh2: RSA SHA256:hnZQROmedaG+reQAaWvmG41QCRiTlF3QrQA4Qzar5jk Sep 16 04:29:54.811287 sshd-session[4226]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:29:54.818972 systemd-logind[1522]: New session 17 of user core. Sep 16 04:29:54.827267 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 16 04:29:55.573077 sshd[4229]: Connection closed by 139.178.89.65 port 45652 Sep 16 04:29:55.573961 sshd-session[4226]: pam_unix(sshd:session): session closed for user core Sep 16 04:29:55.579668 systemd[1]: sshd@16-138.199.234.3:22-139.178.89.65:45652.service: Deactivated successfully. Sep 16 04:29:55.584366 systemd[1]: session-17.scope: Deactivated successfully. Sep 16 04:29:55.587856 systemd-logind[1522]: Session 17 logged out. Waiting for processes to exit. Sep 16 04:29:55.590258 systemd-logind[1522]: Removed session 17. Sep 16 04:30:00.746308 systemd[1]: Started sshd@17-138.199.234.3:22-139.178.89.65:52624.service - OpenSSH per-connection server daemon (139.178.89.65:52624). Sep 16 04:30:01.754598 sshd[4244]: Accepted publickey for core from 139.178.89.65 port 52624 ssh2: RSA SHA256:hnZQROmedaG+reQAaWvmG41QCRiTlF3QrQA4Qzar5jk Sep 16 04:30:01.756941 sshd-session[4244]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:30:01.762259 systemd-logind[1522]: New session 18 of user core. Sep 16 04:30:01.770059 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 16 04:30:02.510069 sshd[4247]: Connection closed by 139.178.89.65 port 52624 Sep 16 04:30:02.511190 sshd-session[4244]: pam_unix(sshd:session): session closed for user core Sep 16 04:30:02.516500 systemd[1]: sshd@17-138.199.234.3:22-139.178.89.65:52624.service: Deactivated successfully. Sep 16 04:30:02.520462 systemd[1]: session-18.scope: Deactivated successfully. Sep 16 04:30:02.523048 systemd-logind[1522]: Session 18 logged out. Waiting for processes to exit. Sep 16 04:30:02.526130 systemd-logind[1522]: Removed session 18. Sep 16 04:30:07.688383 systemd[1]: Started sshd@18-138.199.234.3:22-139.178.89.65:52634.service - OpenSSH per-connection server daemon (139.178.89.65:52634). Sep 16 04:30:08.703506 sshd[4260]: Accepted publickey for core from 139.178.89.65 port 52634 ssh2: RSA SHA256:hnZQROmedaG+reQAaWvmG41QCRiTlF3QrQA4Qzar5jk Sep 16 04:30:08.705850 sshd-session[4260]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:30:08.712308 systemd-logind[1522]: New session 19 of user core. Sep 16 04:30:08.720065 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 16 04:30:09.462138 sshd[4263]: Connection closed by 139.178.89.65 port 52634 Sep 16 04:30:09.463238 sshd-session[4260]: pam_unix(sshd:session): session closed for user core Sep 16 04:30:09.468245 systemd-logind[1522]: Session 19 logged out. Waiting for processes to exit. Sep 16 04:30:09.468789 systemd[1]: sshd@18-138.199.234.3:22-139.178.89.65:52634.service: Deactivated successfully. Sep 16 04:30:09.472353 systemd[1]: session-19.scope: Deactivated successfully. Sep 16 04:30:09.474939 systemd-logind[1522]: Removed session 19. Sep 16 04:30:09.637187 systemd[1]: Started sshd@19-138.199.234.3:22-139.178.89.65:52648.service - OpenSSH per-connection server daemon (139.178.89.65:52648). 
Sep 16 04:30:10.653179 sshd[4275]: Accepted publickey for core from 139.178.89.65 port 52648 ssh2: RSA SHA256:hnZQROmedaG+reQAaWvmG41QCRiTlF3QrQA4Qzar5jk Sep 16 04:30:10.654844 sshd-session[4275]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:30:10.661372 systemd-logind[1522]: New session 20 of user core. Sep 16 04:30:10.666985 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 16 04:30:13.662156 containerd[1543]: time="2025-09-16T04:30:13.662096909Z" level=info msg="StopContainer for \"8918cd11c025fa3b7e0483898c1d829046c5938140e48874405f2b8f3784f0f8\" with timeout 30 (s)" Sep 16 04:30:13.664787 containerd[1543]: time="2025-09-16T04:30:13.664033179Z" level=info msg="Stop container \"8918cd11c025fa3b7e0483898c1d829046c5938140e48874405f2b8f3784f0f8\" with signal terminated" Sep 16 04:30:13.677200 containerd[1543]: time="2025-09-16T04:30:13.676848180Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 16 04:30:13.684197 containerd[1543]: time="2025-09-16T04:30:13.684160095Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d7c2f361ed818f4929a50c5b3b1f2163971d0bf08e2168cfdb55b3bfc274fc1d\" id:\"3fcae3ccacd5ff4dc7670de150b6d7a149f7c073347f099209f167790da895b5\" pid:4298 exited_at:{seconds:1757997013 nanos:683779729}" Sep 16 04:30:13.686321 containerd[1543]: time="2025-09-16T04:30:13.686289048Z" level=info msg="StopContainer for \"d7c2f361ed818f4929a50c5b3b1f2163971d0bf08e2168cfdb55b3bfc274fc1d\" with timeout 2 (s)" Sep 16 04:30:13.687080 containerd[1543]: time="2025-09-16T04:30:13.687026580Z" level=info msg="Stop container \"d7c2f361ed818f4929a50c5b3b1f2163971d0bf08e2168cfdb55b3bfc274fc1d\" with signal terminated" Sep 16 04:30:13.689162 systemd[1]: cri-containerd-8918cd11c025fa3b7e0483898c1d829046c5938140e48874405f2b8f3784f0f8.scope: Deactivated successfully. Sep 16 04:30:13.691889 containerd[1543]: time="2025-09-16T04:30:13.691815055Z" level=info msg="received exit event container_id:\"8918cd11c025fa3b7e0483898c1d829046c5938140e48874405f2b8f3784f0f8\" id:\"8918cd11c025fa3b7e0483898c1d829046c5938140e48874405f2b8f3784f0f8\" pid:3158 exited_at:{seconds:1757997013 nanos:691470969}" Sep 16 04:30:13.692721 containerd[1543]: time="2025-09-16T04:30:13.692686788Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8918cd11c025fa3b7e0483898c1d829046c5938140e48874405f2b8f3784f0f8\" id:\"8918cd11c025fa3b7e0483898c1d829046c5938140e48874405f2b8f3784f0f8\" pid:3158 exited_at:{seconds:1757997013 nanos:691470969}" Sep 16 04:30:13.702264 systemd-networkd[1409]: lxc_health: Link DOWN Sep 16 04:30:13.702271 systemd-networkd[1409]: lxc_health: Lost carrier Sep 16 04:30:13.730126 systemd[1]: cri-containerd-d7c2f361ed818f4929a50c5b3b1f2163971d0bf08e2168cfdb55b3bfc274fc1d.scope: Deactivated successfully. Sep 16 04:30:13.730497 systemd[1]: cri-containerd-d7c2f361ed818f4929a50c5b3b1f2163971d0bf08e2168cfdb55b3bfc274fc1d.scope: Consumed 7.333s CPU time, 126.2M memory peak, 120K read from disk, 12.9M written to disk. 
Sep 16 04:30:13.736198 containerd[1543]: time="2025-09-16T04:30:13.736106389Z" level=info msg="received exit event container_id:\"d7c2f361ed818f4929a50c5b3b1f2163971d0bf08e2168cfdb55b3bfc274fc1d\" id:\"d7c2f361ed818f4929a50c5b3b1f2163971d0bf08e2168cfdb55b3bfc274fc1d\" pid:3396 exited_at:{seconds:1757997013 nanos:734005156}" Sep 16 04:30:13.737860 containerd[1543]: time="2025-09-16T04:30:13.737731214Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d7c2f361ed818f4929a50c5b3b1f2163971d0bf08e2168cfdb55b3bfc274fc1d\" id:\"d7c2f361ed818f4929a50c5b3b1f2163971d0bf08e2168cfdb55b3bfc274fc1d\" pid:3396 exited_at:{seconds:1757997013 nanos:734005156}" Sep 16 04:30:13.756045 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8918cd11c025fa3b7e0483898c1d829046c5938140e48874405f2b8f3784f0f8-rootfs.mount: Deactivated successfully. Sep 16 04:30:13.766933 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d7c2f361ed818f4929a50c5b3b1f2163971d0bf08e2168cfdb55b3bfc274fc1d-rootfs.mount: Deactivated successfully. Sep 16 04:30:13.772854 containerd[1543]: time="2025-09-16T04:30:13.772819084Z" level=info msg="StopContainer for \"8918cd11c025fa3b7e0483898c1d829046c5938140e48874405f2b8f3784f0f8\" returns successfully" Sep 16 04:30:13.776818 containerd[1543]: time="2025-09-16T04:30:13.776745346Z" level=info msg="StopContainer for \"d7c2f361ed818f4929a50c5b3b1f2163971d0bf08e2168cfdb55b3bfc274fc1d\" returns successfully" Sep 16 04:30:13.777102 containerd[1543]: time="2025-09-16T04:30:13.776859267Z" level=info msg="StopPodSandbox for \"4791e40dca24d0b745a47ee23bf4e9c91c20d768ee592161dcc135ab82e3d4c8\"" Sep 16 04:30:13.777392 containerd[1543]: time="2025-09-16T04:30:13.777369195Z" level=info msg="Container to stop \"8918cd11c025fa3b7e0483898c1d829046c5938140e48874405f2b8f3784f0f8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 16 04:30:13.777844 containerd[1543]: time="2025-09-16T04:30:13.777659280Z" level=info msg="StopPodSandbox for \"6542c9d2c0678b2c4a55eb0926bec8cede83e45cd96ef0d523b2921c2c3d9062\"" Sep 16 04:30:13.778463 containerd[1543]: time="2025-09-16T04:30:13.778435172Z" level=info msg="Container to stop \"26c3ad314465a7e82e58bd9e4be70bda75dc8a216ce45af86f307ad8518dc083\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 16 04:30:13.778516 containerd[1543]: time="2025-09-16T04:30:13.778466212Z" level=info msg="Container to stop \"8eb60dd0af894d371d1d55cec279aaab375ada63989f51c128e6a50646d14442\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 16 04:30:13.778516 containerd[1543]: time="2025-09-16T04:30:13.778477893Z" level=info msg="Container to stop \"c7a2c37e12dedb3285087599d3770c2d0b64bbc0944a934c1e3dade51f085ea8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 16 04:30:13.778516 containerd[1543]: time="2025-09-16T04:30:13.778502413Z" level=info msg="Container to stop \"d1dd58914350a2f858c9fd1cf767c251b0edc2e94d63ce9e9bbb2b0fe07d95c9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 16 04:30:13.778580 containerd[1543]: time="2025-09-16T04:30:13.778517613Z" level=info msg="Container to stop \"d7c2f361ed818f4929a50c5b3b1f2163971d0bf08e2168cfdb55b3bfc274fc1d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 16 04:30:13.790357 systemd[1]: cri-containerd-6542c9d2c0678b2c4a55eb0926bec8cede83e45cd96ef0d523b2921c2c3d9062.scope: Deactivated successfully. 
Sep 16 04:30:13.795487 systemd[1]: cri-containerd-4791e40dca24d0b745a47ee23bf4e9c91c20d768ee592161dcc135ab82e3d4c8.scope: Deactivated successfully. Sep 16 04:30:13.798427 containerd[1543]: time="2025-09-16T04:30:13.798328884Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4791e40dca24d0b745a47ee23bf4e9c91c20d768ee592161dcc135ab82e3d4c8\" id:\"4791e40dca24d0b745a47ee23bf4e9c91c20d768ee592161dcc135ab82e3d4c8\" pid:3068 exit_status:137 exited_at:{seconds:1757997013 nanos:797095384}" Sep 16 04:30:13.828041 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6542c9d2c0678b2c4a55eb0926bec8cede83e45cd96ef0d523b2921c2c3d9062-rootfs.mount: Deactivated successfully. Sep 16 04:30:13.835261 containerd[1543]: time="2025-09-16T04:30:13.835030259Z" level=info msg="shim disconnected" id=6542c9d2c0678b2c4a55eb0926bec8cede83e45cd96ef0d523b2921c2c3d9062 namespace=k8s.io Sep 16 04:30:13.835261 containerd[1543]: time="2025-09-16T04:30:13.835064459Z" level=warning msg="cleaning up after shim disconnected" id=6542c9d2c0678b2c4a55eb0926bec8cede83e45cd96ef0d523b2921c2c3d9062 namespace=k8s.io Sep 16 04:30:13.835261 containerd[1543]: time="2025-09-16T04:30:13.835094580Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 16 04:30:13.843358 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4791e40dca24d0b745a47ee23bf4e9c91c20d768ee592161dcc135ab82e3d4c8-rootfs.mount: Deactivated successfully. Sep 16 04:30:13.846594 containerd[1543]: time="2025-09-16T04:30:13.846558479Z" level=info msg="shim disconnected" id=4791e40dca24d0b745a47ee23bf4e9c91c20d768ee592161dcc135ab82e3d4c8 namespace=k8s.io Sep 16 04:30:13.846721 containerd[1543]: time="2025-09-16T04:30:13.846593120Z" level=warning msg="cleaning up after shim disconnected" id=4791e40dca24d0b745a47ee23bf4e9c91c20d768ee592161dcc135ab82e3d4c8 namespace=k8s.io Sep 16 04:30:13.846721 containerd[1543]: time="2025-09-16T04:30:13.846623120Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 16 04:30:13.859807 containerd[1543]: time="2025-09-16T04:30:13.859668685Z" level=info msg="received exit event sandbox_id:\"6542c9d2c0678b2c4a55eb0926bec8cede83e45cd96ef0d523b2921c2c3d9062\" exit_status:137 exited_at:{seconds:1757997013 nanos:798033599}" Sep 16 04:30:13.860638 containerd[1543]: time="2025-09-16T04:30:13.860606060Z" level=info msg="received exit event sandbox_id:\"4791e40dca24d0b745a47ee23bf4e9c91c20d768ee592161dcc135ab82e3d4c8\" exit_status:137 exited_at:{seconds:1757997013 nanos:797095384}" Sep 16 04:30:13.863550 containerd[1543]: time="2025-09-16T04:30:13.860861944Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6542c9d2c0678b2c4a55eb0926bec8cede83e45cd96ef0d523b2921c2c3d9062\" id:\"6542c9d2c0678b2c4a55eb0926bec8cede83e45cd96ef0d523b2921c2c3d9062\" pid:3113 exit_status:137 exited_at:{seconds:1757997013 nanos:798033599}" Sep 16 04:30:13.863551 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4791e40dca24d0b745a47ee23bf4e9c91c20d768ee592161dcc135ab82e3d4c8-shm.mount: Deactivated successfully. 
Sep 16 04:30:13.864852 containerd[1543]: time="2025-09-16T04:30:13.863881071Z" level=info msg="TearDown network for sandbox \"4791e40dca24d0b745a47ee23bf4e9c91c20d768ee592161dcc135ab82e3d4c8\" successfully" Sep 16 04:30:13.864943 containerd[1543]: time="2025-09-16T04:30:13.864929927Z" level=info msg="StopPodSandbox for \"4791e40dca24d0b745a47ee23bf4e9c91c20d768ee592161dcc135ab82e3d4c8\" returns successfully" Sep 16 04:30:13.865124 containerd[1543]: time="2025-09-16T04:30:13.863922231Z" level=info msg="TearDown network for sandbox \"6542c9d2c0678b2c4a55eb0926bec8cede83e45cd96ef0d523b2921c2c3d9062\" successfully" Sep 16 04:30:13.865988 containerd[1543]: time="2025-09-16T04:30:13.865954503Z" level=info msg="StopPodSandbox for \"6542c9d2c0678b2c4a55eb0926bec8cede83e45cd96ef0d523b2921c2c3d9062\" returns successfully" Sep 16 04:30:13.904166 kubelet[2759]: I0916 04:30:13.904106 2759 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e641ff89-8106-4546-9ca0-351528c24d00-cilium-config-path\") pod \"e641ff89-8106-4546-9ca0-351528c24d00\" (UID: \"e641ff89-8106-4546-9ca0-351528c24d00\") " Sep 16 04:30:13.906479 kubelet[2759]: I0916 04:30:13.905085 2759 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t4n2t\" (UniqueName: \"kubernetes.io/projected/e641ff89-8106-4546-9ca0-351528c24d00-kube-api-access-t4n2t\") pod \"e641ff89-8106-4546-9ca0-351528c24d00\" (UID: \"e641ff89-8106-4546-9ca0-351528c24d00\") " Sep 16 04:30:13.909075 kubelet[2759]: I0916 04:30:13.909007 2759 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e641ff89-8106-4546-9ca0-351528c24d00-kube-api-access-t4n2t" (OuterVolumeSpecName: "kube-api-access-t4n2t") pod "e641ff89-8106-4546-9ca0-351528c24d00" (UID: "e641ff89-8106-4546-9ca0-351528c24d00"). InnerVolumeSpecName "kube-api-access-t4n2t". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 16 04:30:13.909581 kubelet[2759]: I0916 04:30:13.909535 2759 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e641ff89-8106-4546-9ca0-351528c24d00-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e641ff89-8106-4546-9ca0-351528c24d00" (UID: "e641ff89-8106-4546-9ca0-351528c24d00"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 16 04:30:13.943442 kubelet[2759]: I0916 04:30:13.942894 2759 scope.go:117] "RemoveContainer" containerID="8918cd11c025fa3b7e0483898c1d829046c5938140e48874405f2b8f3784f0f8" Sep 16 04:30:13.947857 containerd[1543]: time="2025-09-16T04:30:13.947723945Z" level=info msg="RemoveContainer for \"8918cd11c025fa3b7e0483898c1d829046c5938140e48874405f2b8f3784f0f8\"" Sep 16 04:30:13.950588 systemd[1]: Removed slice kubepods-besteffort-pode641ff89_8106_4546_9ca0_351528c24d00.slice - libcontainer container kubepods-besteffort-pode641ff89_8106_4546_9ca0_351528c24d00.slice. 
Sep 16 04:30:13.962215 containerd[1543]: time="2025-09-16T04:30:13.961053473Z" level=info msg="RemoveContainer for \"8918cd11c025fa3b7e0483898c1d829046c5938140e48874405f2b8f3784f0f8\" returns successfully" Sep 16 04:30:13.962425 kubelet[2759]: I0916 04:30:13.961861 2759 scope.go:117] "RemoveContainer" containerID="8918cd11c025fa3b7e0483898c1d829046c5938140e48874405f2b8f3784f0f8" Sep 16 04:30:13.963716 containerd[1543]: time="2025-09-16T04:30:13.963654154Z" level=error msg="ContainerStatus for \"8918cd11c025fa3b7e0483898c1d829046c5938140e48874405f2b8f3784f0f8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8918cd11c025fa3b7e0483898c1d829046c5938140e48874405f2b8f3784f0f8\": not found" Sep 16 04:30:13.963930 kubelet[2759]: E0916 04:30:13.963897 2759 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8918cd11c025fa3b7e0483898c1d829046c5938140e48874405f2b8f3784f0f8\": not found" containerID="8918cd11c025fa3b7e0483898c1d829046c5938140e48874405f2b8f3784f0f8" Sep 16 04:30:13.964045 kubelet[2759]: I0916 04:30:13.963938 2759 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8918cd11c025fa3b7e0483898c1d829046c5938140e48874405f2b8f3784f0f8"} err="failed to get container status \"8918cd11c025fa3b7e0483898c1d829046c5938140e48874405f2b8f3784f0f8\": rpc error: code = NotFound desc = an error occurred when try to find container \"8918cd11c025fa3b7e0483898c1d829046c5938140e48874405f2b8f3784f0f8\": not found" Sep 16 04:30:13.965764 kubelet[2759]: I0916 04:30:13.965725 2759 scope.go:117] "RemoveContainer" containerID="d7c2f361ed818f4929a50c5b3b1f2163971d0bf08e2168cfdb55b3bfc274fc1d" Sep 16 04:30:13.973305 containerd[1543]: time="2025-09-16T04:30:13.971991645Z" level=info msg="RemoveContainer for \"d7c2f361ed818f4929a50c5b3b1f2163971d0bf08e2168cfdb55b3bfc274fc1d\"" Sep 16 04:30:13.979013 containerd[1543]: time="2025-09-16T04:30:13.978970714Z" level=info msg="RemoveContainer for \"d7c2f361ed818f4929a50c5b3b1f2163971d0bf08e2168cfdb55b3bfc274fc1d\" returns successfully" Sep 16 04:30:13.979406 kubelet[2759]: I0916 04:30:13.979378 2759 scope.go:117] "RemoveContainer" containerID="d1dd58914350a2f858c9fd1cf767c251b0edc2e94d63ce9e9bbb2b0fe07d95c9" Sep 16 04:30:13.981471 containerd[1543]: time="2025-09-16T04:30:13.981361352Z" level=info msg="RemoveContainer for \"d1dd58914350a2f858c9fd1cf767c251b0edc2e94d63ce9e9bbb2b0fe07d95c9\"" Sep 16 04:30:13.988546 containerd[1543]: time="2025-09-16T04:30:13.988490863Z" level=info msg="RemoveContainer for \"d1dd58914350a2f858c9fd1cf767c251b0edc2e94d63ce9e9bbb2b0fe07d95c9\" returns successfully" Sep 16 04:30:13.989229 kubelet[2759]: I0916 04:30:13.989193 2759 scope.go:117] "RemoveContainer" containerID="c7a2c37e12dedb3285087599d3770c2d0b64bbc0944a934c1e3dade51f085ea8" Sep 16 04:30:13.993085 containerd[1543]: time="2025-09-16T04:30:13.992980094Z" level=info msg="RemoveContainer for \"c7a2c37e12dedb3285087599d3770c2d0b64bbc0944a934c1e3dade51f085ea8\"" Sep 16 04:30:14.002435 containerd[1543]: time="2025-09-16T04:30:14.001733951Z" level=info msg="RemoveContainer for \"c7a2c37e12dedb3285087599d3770c2d0b64bbc0944a934c1e3dade51f085ea8\" returns successfully" Sep 16 04:30:14.002706 kubelet[2759]: I0916 04:30:14.002648 2759 scope.go:117] "RemoveContainer" containerID="8eb60dd0af894d371d1d55cec279aaab375ada63989f51c128e6a50646d14442" Sep 16 04:30:14.007088 kubelet[2759]: I0916 04:30:14.007000 2759 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/22b20782-77b0-43d9-a5ec-de471c3bdf2a-cilium-run\") pod \"22b20782-77b0-43d9-a5ec-de471c3bdf2a\" (UID: \"22b20782-77b0-43d9-a5ec-de471c3bdf2a\") " Sep 16 04:30:14.007088 kubelet[2759]: I0916 04:30:14.007050 2759 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/22b20782-77b0-43d9-a5ec-de471c3bdf2a-hostproc\") pod \"22b20782-77b0-43d9-a5ec-de471c3bdf2a\" (UID: \"22b20782-77b0-43d9-a5ec-de471c3bdf2a\") " Sep 16 04:30:14.007635 containerd[1543]: time="2025-09-16T04:30:14.007028394Z" level=info msg="RemoveContainer for \"8eb60dd0af894d371d1d55cec279aaab375ada63989f51c128e6a50646d14442\"" Sep 16 04:30:14.007849 kubelet[2759]: I0916 04:30:14.007196 2759 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/22b20782-77b0-43d9-a5ec-de471c3bdf2a-host-proc-sys-net\") pod \"22b20782-77b0-43d9-a5ec-de471c3bdf2a\" (UID: \"22b20782-77b0-43d9-a5ec-de471c3bdf2a\") " Sep 16 04:30:14.007849 kubelet[2759]: I0916 04:30:14.007103 2759 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/22b20782-77b0-43d9-a5ec-de471c3bdf2a-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "22b20782-77b0-43d9-a5ec-de471c3bdf2a" (UID: "22b20782-77b0-43d9-a5ec-de471c3bdf2a"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 16 04:30:14.007849 kubelet[2759]: I0916 04:30:14.007197 2759 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/22b20782-77b0-43d9-a5ec-de471c3bdf2a-hostproc" (OuterVolumeSpecName: "hostproc") pod "22b20782-77b0-43d9-a5ec-de471c3bdf2a" (UID: "22b20782-77b0-43d9-a5ec-de471c3bdf2a"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 16 04:30:14.007849 kubelet[2759]: I0916 04:30:14.007215 2759 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/22b20782-77b0-43d9-a5ec-de471c3bdf2a-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "22b20782-77b0-43d9-a5ec-de471c3bdf2a" (UID: "22b20782-77b0-43d9-a5ec-de471c3bdf2a"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 16 04:30:14.007849 kubelet[2759]: I0916 04:30:14.007366 2759 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/22b20782-77b0-43d9-a5ec-de471c3bdf2a-etc-cni-netd\") pod \"22b20782-77b0-43d9-a5ec-de471c3bdf2a\" (UID: \"22b20782-77b0-43d9-a5ec-de471c3bdf2a\") " Sep 16 04:30:14.008226 kubelet[2759]: I0916 04:30:14.007376 2759 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/22b20782-77b0-43d9-a5ec-de471c3bdf2a-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "22b20782-77b0-43d9-a5ec-de471c3bdf2a" (UID: "22b20782-77b0-43d9-a5ec-de471c3bdf2a"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 16 04:30:14.008226 kubelet[2759]: I0916 04:30:14.007519 2759 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/22b20782-77b0-43d9-a5ec-de471c3bdf2a-xtables-lock\") pod \"22b20782-77b0-43d9-a5ec-de471c3bdf2a\" (UID: \"22b20782-77b0-43d9-a5ec-de471c3bdf2a\") " Sep 16 04:30:14.008226 kubelet[2759]: I0916 04:30:14.007530 2759 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/22b20782-77b0-43d9-a5ec-de471c3bdf2a-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "22b20782-77b0-43d9-a5ec-de471c3bdf2a" (UID: "22b20782-77b0-43d9-a5ec-de471c3bdf2a"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 16 04:30:14.008226 kubelet[2759]: I0916 04:30:14.007635 2759 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/22b20782-77b0-43d9-a5ec-de471c3bdf2a-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "22b20782-77b0-43d9-a5ec-de471c3bdf2a" (UID: "22b20782-77b0-43d9-a5ec-de471c3bdf2a"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 16 04:30:14.008226 kubelet[2759]: I0916 04:30:14.007566 2759 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/22b20782-77b0-43d9-a5ec-de471c3bdf2a-host-proc-sys-kernel\") pod \"22b20782-77b0-43d9-a5ec-de471c3bdf2a\" (UID: \"22b20782-77b0-43d9-a5ec-de471c3bdf2a\") " Sep 16 04:30:14.008412 kubelet[2759]: I0916 04:30:14.007863 2759 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/22b20782-77b0-43d9-a5ec-de471c3bdf2a-cni-path" (OuterVolumeSpecName: "cni-path") pod "22b20782-77b0-43d9-a5ec-de471c3bdf2a" (UID: "22b20782-77b0-43d9-a5ec-de471c3bdf2a"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 16 04:30:14.008412 kubelet[2759]: I0916 04:30:14.007920 2759 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/22b20782-77b0-43d9-a5ec-de471c3bdf2a-cni-path\") pod \"22b20782-77b0-43d9-a5ec-de471c3bdf2a\" (UID: \"22b20782-77b0-43d9-a5ec-de471c3bdf2a\") " Sep 16 04:30:14.009020 kubelet[2759]: I0916 04:30:14.008552 2759 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/22b20782-77b0-43d9-a5ec-de471c3bdf2a-cilium-config-path\") pod \"22b20782-77b0-43d9-a5ec-de471c3bdf2a\" (UID: \"22b20782-77b0-43d9-a5ec-de471c3bdf2a\") " Sep 16 04:30:14.009020 kubelet[2759]: I0916 04:30:14.008613 2759 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/22b20782-77b0-43d9-a5ec-de471c3bdf2a-cilium-cgroup\") pod \"22b20782-77b0-43d9-a5ec-de471c3bdf2a\" (UID: \"22b20782-77b0-43d9-a5ec-de471c3bdf2a\") " Sep 16 04:30:14.009020 kubelet[2759]: I0916 04:30:14.008641 2759 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/22b20782-77b0-43d9-a5ec-de471c3bdf2a-lib-modules\") pod \"22b20782-77b0-43d9-a5ec-de471c3bdf2a\" (UID: \"22b20782-77b0-43d9-a5ec-de471c3bdf2a\") " Sep 16 04:30:14.009020 kubelet[2759]: I0916 04:30:14.008685 2759 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/22b20782-77b0-43d9-a5ec-de471c3bdf2a-hubble-tls\") pod \"22b20782-77b0-43d9-a5ec-de471c3bdf2a\" (UID: \"22b20782-77b0-43d9-a5ec-de471c3bdf2a\") " Sep 16 04:30:14.009020 kubelet[2759]: I0916 04:30:14.008714 2759 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/22b20782-77b0-43d9-a5ec-de471c3bdf2a-clustermesh-secrets\") pod \"22b20782-77b0-43d9-a5ec-de471c3bdf2a\" (UID: \"22b20782-77b0-43d9-a5ec-de471c3bdf2a\") " Sep 16 04:30:14.009020 kubelet[2759]: I0916 04:30:14.008797 2759 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hhxjr\" (UniqueName: \"kubernetes.io/projected/22b20782-77b0-43d9-a5ec-de471c3bdf2a-kube-api-access-hhxjr\") pod \"22b20782-77b0-43d9-a5ec-de471c3bdf2a\" (UID: \"22b20782-77b0-43d9-a5ec-de471c3bdf2a\") " Sep 16 04:30:14.010392 kubelet[2759]: I0916 04:30:14.008857 2759 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/22b20782-77b0-43d9-a5ec-de471c3bdf2a-bpf-maps\") pod \"22b20782-77b0-43d9-a5ec-de471c3bdf2a\" (UID: \"22b20782-77b0-43d9-a5ec-de471c3bdf2a\") " Sep 16 04:30:14.010392 kubelet[2759]: I0916 04:30:14.008977 2759 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/22b20782-77b0-43d9-a5ec-de471c3bdf2a-host-proc-sys-net\") on node \"ci-4459-0-0-n-0223e12d7a\" DevicePath \"\"" Sep 16 04:30:14.010392 kubelet[2759]: I0916 04:30:14.009142 2759 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-t4n2t\" (UniqueName: \"kubernetes.io/projected/e641ff89-8106-4546-9ca0-351528c24d00-kube-api-access-t4n2t\") on node \"ci-4459-0-0-n-0223e12d7a\" DevicePath \"\"" Sep 16 04:30:14.010392 kubelet[2759]: I0916 04:30:14.009160 2759 reconciler_common.go:299] "Volume detached 
for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/22b20782-77b0-43d9-a5ec-de471c3bdf2a-xtables-lock\") on node \"ci-4459-0-0-n-0223e12d7a\" DevicePath \"\"" Sep 16 04:30:14.010392 kubelet[2759]: I0916 04:30:14.009433 2759 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/22b20782-77b0-43d9-a5ec-de471c3bdf2a-etc-cni-netd\") on node \"ci-4459-0-0-n-0223e12d7a\" DevicePath \"\"" Sep 16 04:30:14.010392 kubelet[2759]: I0916 04:30:14.009486 2759 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/22b20782-77b0-43d9-a5ec-de471c3bdf2a-host-proc-sys-kernel\") on node \"ci-4459-0-0-n-0223e12d7a\" DevicePath \"\"" Sep 16 04:30:14.010392 kubelet[2759]: I0916 04:30:14.009503 2759 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/22b20782-77b0-43d9-a5ec-de471c3bdf2a-cni-path\") on node \"ci-4459-0-0-n-0223e12d7a\" DevicePath \"\"" Sep 16 04:30:14.011447 kubelet[2759]: I0916 04:30:14.009519 2759 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e641ff89-8106-4546-9ca0-351528c24d00-cilium-config-path\") on node \"ci-4459-0-0-n-0223e12d7a\" DevicePath \"\"" Sep 16 04:30:14.011447 kubelet[2759]: I0916 04:30:14.009672 2759 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/22b20782-77b0-43d9-a5ec-de471c3bdf2a-cilium-run\") on node \"ci-4459-0-0-n-0223e12d7a\" DevicePath \"\"" Sep 16 04:30:14.011447 kubelet[2759]: I0916 04:30:14.010273 2759 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/22b20782-77b0-43d9-a5ec-de471c3bdf2a-hostproc\") on node \"ci-4459-0-0-n-0223e12d7a\" DevicePath \"\"" Sep 16 04:30:14.011447 kubelet[2759]: I0916 04:30:14.010315 2759 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/22b20782-77b0-43d9-a5ec-de471c3bdf2a-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "22b20782-77b0-43d9-a5ec-de471c3bdf2a" (UID: "22b20782-77b0-43d9-a5ec-de471c3bdf2a"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 16 04:30:14.011447 kubelet[2759]: I0916 04:30:14.011227 2759 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/22b20782-77b0-43d9-a5ec-de471c3bdf2a-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "22b20782-77b0-43d9-a5ec-de471c3bdf2a" (UID: "22b20782-77b0-43d9-a5ec-de471c3bdf2a"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 16 04:30:14.012674 kubelet[2759]: I0916 04:30:14.011298 2759 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/22b20782-77b0-43d9-a5ec-de471c3bdf2a-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "22b20782-77b0-43d9-a5ec-de471c3bdf2a" (UID: "22b20782-77b0-43d9-a5ec-de471c3bdf2a"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 16 04:30:14.015449 containerd[1543]: time="2025-09-16T04:30:14.015393085Z" level=info msg="RemoveContainer for \"8eb60dd0af894d371d1d55cec279aaab375ada63989f51c128e6a50646d14442\" returns successfully" Sep 16 04:30:14.018258 kubelet[2759]: I0916 04:30:14.017939 2759 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22b20782-77b0-43d9-a5ec-de471c3bdf2a-kube-api-access-hhxjr" (OuterVolumeSpecName: "kube-api-access-hhxjr") pod "22b20782-77b0-43d9-a5ec-de471c3bdf2a" (UID: "22b20782-77b0-43d9-a5ec-de471c3bdf2a"). InnerVolumeSpecName "kube-api-access-hhxjr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 16 04:30:14.018258 kubelet[2759]: I0916 04:30:14.018114 2759 scope.go:117] "RemoveContainer" containerID="26c3ad314465a7e82e58bd9e4be70bda75dc8a216ce45af86f307ad8518dc083" Sep 16 04:30:14.019992 containerd[1543]: time="2025-09-16T04:30:14.019938916Z" level=info msg="RemoveContainer for \"26c3ad314465a7e82e58bd9e4be70bda75dc8a216ce45af86f307ad8518dc083\"" Sep 16 04:30:14.020338 kubelet[2759]: I0916 04:30:14.020307 2759 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22b20782-77b0-43d9-a5ec-de471c3bdf2a-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "22b20782-77b0-43d9-a5ec-de471c3bdf2a" (UID: "22b20782-77b0-43d9-a5ec-de471c3bdf2a"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 16 04:30:14.022166 kubelet[2759]: I0916 04:30:14.022119 2759 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22b20782-77b0-43d9-a5ec-de471c3bdf2a-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "22b20782-77b0-43d9-a5ec-de471c3bdf2a" (UID: "22b20782-77b0-43d9-a5ec-de471c3bdf2a"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 16 04:30:14.022397 kubelet[2759]: I0916 04:30:14.022362 2759 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22b20782-77b0-43d9-a5ec-de471c3bdf2a-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "22b20782-77b0-43d9-a5ec-de471c3bdf2a" (UID: "22b20782-77b0-43d9-a5ec-de471c3bdf2a"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 16 04:30:14.023719 containerd[1543]: time="2025-09-16T04:30:14.023671694Z" level=info msg="RemoveContainer for \"26c3ad314465a7e82e58bd9e4be70bda75dc8a216ce45af86f307ad8518dc083\" returns successfully" Sep 16 04:30:14.023966 kubelet[2759]: I0916 04:30:14.023935 2759 scope.go:117] "RemoveContainer" containerID="d7c2f361ed818f4929a50c5b3b1f2163971d0bf08e2168cfdb55b3bfc274fc1d" Sep 16 04:30:14.024406 containerd[1543]: time="2025-09-16T04:30:14.024372985Z" level=error msg="ContainerStatus for \"d7c2f361ed818f4929a50c5b3b1f2163971d0bf08e2168cfdb55b3bfc274fc1d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d7c2f361ed818f4929a50c5b3b1f2163971d0bf08e2168cfdb55b3bfc274fc1d\": not found" Sep 16 04:30:14.024623 kubelet[2759]: E0916 04:30:14.024600 2759 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d7c2f361ed818f4929a50c5b3b1f2163971d0bf08e2168cfdb55b3bfc274fc1d\": not found" containerID="d7c2f361ed818f4929a50c5b3b1f2163971d0bf08e2168cfdb55b3bfc274fc1d" Sep 16 04:30:14.024877 kubelet[2759]: I0916 04:30:14.024658 2759 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d7c2f361ed818f4929a50c5b3b1f2163971d0bf08e2168cfdb55b3bfc274fc1d"} err="failed to get container status \"d7c2f361ed818f4929a50c5b3b1f2163971d0bf08e2168cfdb55b3bfc274fc1d\": rpc error: code = NotFound desc = an error occurred when try to find container \"d7c2f361ed818f4929a50c5b3b1f2163971d0bf08e2168cfdb55b3bfc274fc1d\": not found" Sep 16 04:30:14.024877 kubelet[2759]: I0916 04:30:14.024680 2759 scope.go:117] "RemoveContainer" containerID="d1dd58914350a2f858c9fd1cf767c251b0edc2e94d63ce9e9bbb2b0fe07d95c9" Sep 16 04:30:14.024931 containerd[1543]: time="2025-09-16T04:30:14.024863193Z" level=error msg="ContainerStatus for \"d1dd58914350a2f858c9fd1cf767c251b0edc2e94d63ce9e9bbb2b0fe07d95c9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d1dd58914350a2f858c9fd1cf767c251b0edc2e94d63ce9e9bbb2b0fe07d95c9\": not found" Sep 16 04:30:14.025286 kubelet[2759]: E0916 04:30:14.024973 2759 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d1dd58914350a2f858c9fd1cf767c251b0edc2e94d63ce9e9bbb2b0fe07d95c9\": not found" containerID="d1dd58914350a2f858c9fd1cf767c251b0edc2e94d63ce9e9bbb2b0fe07d95c9" Sep 16 04:30:14.025286 kubelet[2759]: I0916 04:30:14.025013 2759 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d1dd58914350a2f858c9fd1cf767c251b0edc2e94d63ce9e9bbb2b0fe07d95c9"} err="failed to get container status \"d1dd58914350a2f858c9fd1cf767c251b0edc2e94d63ce9e9bbb2b0fe07d95c9\": rpc error: code = NotFound desc = an error occurred when try to find container \"d1dd58914350a2f858c9fd1cf767c251b0edc2e94d63ce9e9bbb2b0fe07d95c9\": not found" Sep 16 04:30:14.025286 kubelet[2759]: I0916 04:30:14.025032 2759 scope.go:117] "RemoveContainer" containerID="c7a2c37e12dedb3285087599d3770c2d0b64bbc0944a934c1e3dade51f085ea8" Sep 16 04:30:14.025413 containerd[1543]: time="2025-09-16T04:30:14.025217558Z" level=error msg="ContainerStatus for \"c7a2c37e12dedb3285087599d3770c2d0b64bbc0944a934c1e3dade51f085ea8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"c7a2c37e12dedb3285087599d3770c2d0b64bbc0944a934c1e3dade51f085ea8\": not found" Sep 16 04:30:14.025650 kubelet[2759]: E0916 04:30:14.025528 2759 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c7a2c37e12dedb3285087599d3770c2d0b64bbc0944a934c1e3dade51f085ea8\": not found" containerID="c7a2c37e12dedb3285087599d3770c2d0b64bbc0944a934c1e3dade51f085ea8" Sep 16 04:30:14.025650 kubelet[2759]: I0916 04:30:14.025554 2759 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c7a2c37e12dedb3285087599d3770c2d0b64bbc0944a934c1e3dade51f085ea8"} err="failed to get container status \"c7a2c37e12dedb3285087599d3770c2d0b64bbc0944a934c1e3dade51f085ea8\": rpc error: code = NotFound desc = an error occurred when try to find container \"c7a2c37e12dedb3285087599d3770c2d0b64bbc0944a934c1e3dade51f085ea8\": not found" Sep 16 04:30:14.025650 kubelet[2759]: I0916 04:30:14.025569 2759 scope.go:117] "RemoveContainer" containerID="8eb60dd0af894d371d1d55cec279aaab375ada63989f51c128e6a50646d14442" Sep 16 04:30:14.025793 containerd[1543]: time="2025-09-16T04:30:14.025709966Z" level=error msg="ContainerStatus for \"8eb60dd0af894d371d1d55cec279aaab375ada63989f51c128e6a50646d14442\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8eb60dd0af894d371d1d55cec279aaab375ada63989f51c128e6a50646d14442\": not found" Sep 16 04:30:14.026015 kubelet[2759]: E0916 04:30:14.025987 2759 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8eb60dd0af894d371d1d55cec279aaab375ada63989f51c128e6a50646d14442\": not found" containerID="8eb60dd0af894d371d1d55cec279aaab375ada63989f51c128e6a50646d14442" Sep 16 04:30:14.026113 kubelet[2759]: I0916 04:30:14.026088 2759 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8eb60dd0af894d371d1d55cec279aaab375ada63989f51c128e6a50646d14442"} err="failed to get container status \"8eb60dd0af894d371d1d55cec279aaab375ada63989f51c128e6a50646d14442\": rpc error: code = NotFound desc = an error occurred when try to find container \"8eb60dd0af894d371d1d55cec279aaab375ada63989f51c128e6a50646d14442\": not found" Sep 16 04:30:14.026246 kubelet[2759]: I0916 04:30:14.026164 2759 scope.go:117] "RemoveContainer" containerID="26c3ad314465a7e82e58bd9e4be70bda75dc8a216ce45af86f307ad8518dc083" Sep 16 04:30:14.026417 containerd[1543]: time="2025-09-16T04:30:14.026388377Z" level=error msg="ContainerStatus for \"26c3ad314465a7e82e58bd9e4be70bda75dc8a216ce45af86f307ad8518dc083\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"26c3ad314465a7e82e58bd9e4be70bda75dc8a216ce45af86f307ad8518dc083\": not found" Sep 16 04:30:14.026559 kubelet[2759]: E0916 04:30:14.026530 2759 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"26c3ad314465a7e82e58bd9e4be70bda75dc8a216ce45af86f307ad8518dc083\": not found" containerID="26c3ad314465a7e82e58bd9e4be70bda75dc8a216ce45af86f307ad8518dc083" Sep 16 04:30:14.026694 kubelet[2759]: I0916 04:30:14.026673 2759 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"26c3ad314465a7e82e58bd9e4be70bda75dc8a216ce45af86f307ad8518dc083"} err="failed to get container status 
\"26c3ad314465a7e82e58bd9e4be70bda75dc8a216ce45af86f307ad8518dc083\": rpc error: code = NotFound desc = an error occurred when try to find container \"26c3ad314465a7e82e58bd9e4be70bda75dc8a216ce45af86f307ad8518dc083\": not found" Sep 16 04:30:14.111337 kubelet[2759]: I0916 04:30:14.111203 2759 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/22b20782-77b0-43d9-a5ec-de471c3bdf2a-bpf-maps\") on node \"ci-4459-0-0-n-0223e12d7a\" DevicePath \"\"" Sep 16 04:30:14.111337 kubelet[2759]: I0916 04:30:14.111237 2759 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/22b20782-77b0-43d9-a5ec-de471c3bdf2a-cilium-config-path\") on node \"ci-4459-0-0-n-0223e12d7a\" DevicePath \"\"" Sep 16 04:30:14.111337 kubelet[2759]: I0916 04:30:14.111253 2759 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hhxjr\" (UniqueName: \"kubernetes.io/projected/22b20782-77b0-43d9-a5ec-de471c3bdf2a-kube-api-access-hhxjr\") on node \"ci-4459-0-0-n-0223e12d7a\" DevicePath \"\"" Sep 16 04:30:14.111337 kubelet[2759]: I0916 04:30:14.111265 2759 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/22b20782-77b0-43d9-a5ec-de471c3bdf2a-cilium-cgroup\") on node \"ci-4459-0-0-n-0223e12d7a\" DevicePath \"\"" Sep 16 04:30:14.111337 kubelet[2759]: I0916 04:30:14.111292 2759 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/22b20782-77b0-43d9-a5ec-de471c3bdf2a-lib-modules\") on node \"ci-4459-0-0-n-0223e12d7a\" DevicePath \"\"" Sep 16 04:30:14.111337 kubelet[2759]: I0916 04:30:14.111303 2759 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/22b20782-77b0-43d9-a5ec-de471c3bdf2a-hubble-tls\") on node \"ci-4459-0-0-n-0223e12d7a\" DevicePath \"\"" Sep 16 04:30:14.111337 kubelet[2759]: I0916 04:30:14.111313 2759 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/22b20782-77b0-43d9-a5ec-de471c3bdf2a-clustermesh-secrets\") on node \"ci-4459-0-0-n-0223e12d7a\" DevicePath \"\"" Sep 16 04:30:14.274291 systemd[1]: Removed slice kubepods-burstable-pod22b20782_77b0_43d9_a5ec_de471c3bdf2a.slice - libcontainer container kubepods-burstable-pod22b20782_77b0_43d9_a5ec_de471c3bdf2a.slice. Sep 16 04:30:14.274455 systemd[1]: kubepods-burstable-pod22b20782_77b0_43d9_a5ec_de471c3bdf2a.slice: Consumed 7.436s CPU time, 126.7M memory peak, 120K read from disk, 12.9M written to disk. Sep 16 04:30:14.753589 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6542c9d2c0678b2c4a55eb0926bec8cede83e45cd96ef0d523b2921c2c3d9062-shm.mount: Deactivated successfully. Sep 16 04:30:14.753716 systemd[1]: var-lib-kubelet-pods-22b20782\x2d77b0\x2d43d9\x2da5ec\x2dde471c3bdf2a-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 16 04:30:14.753826 systemd[1]: var-lib-kubelet-pods-22b20782\x2d77b0\x2d43d9\x2da5ec\x2dde471c3bdf2a-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 16 04:30:14.753894 systemd[1]: var-lib-kubelet-pods-e641ff89\x2d8106\x2d4546\x2d9ca0\x2d351528c24d00-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dt4n2t.mount: Deactivated successfully. 
Sep 16 04:30:14.753953 systemd[1]: var-lib-kubelet-pods-22b20782\x2d77b0\x2d43d9\x2da5ec\x2dde471c3bdf2a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhhxjr.mount: Deactivated successfully. Sep 16 04:30:15.317786 kubelet[2759]: I0916 04:30:15.317312 2759 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22b20782-77b0-43d9-a5ec-de471c3bdf2a" path="/var/lib/kubelet/pods/22b20782-77b0-43d9-a5ec-de471c3bdf2a/volumes" Sep 16 04:30:15.318808 kubelet[2759]: I0916 04:30:15.318748 2759 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e641ff89-8106-4546-9ca0-351528c24d00" path="/var/lib/kubelet/pods/e641ff89-8106-4546-9ca0-351528c24d00/volumes" Sep 16 04:30:15.440043 kubelet[2759]: E0916 04:30:15.439980 2759 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 16 04:30:15.741124 sshd[4278]: Connection closed by 139.178.89.65 port 52648 Sep 16 04:30:15.741933 sshd-session[4275]: pam_unix(sshd:session): session closed for user core Sep 16 04:30:15.748799 systemd[1]: sshd@19-138.199.234.3:22-139.178.89.65:52648.service: Deactivated successfully. Sep 16 04:30:15.751415 systemd[1]: session-20.scope: Deactivated successfully. Sep 16 04:30:15.751889 systemd[1]: session-20.scope: Consumed 1.814s CPU time, 25.6M memory peak. Sep 16 04:30:15.753602 systemd-logind[1522]: Session 20 logged out. Waiting for processes to exit. Sep 16 04:30:15.756830 systemd-logind[1522]: Removed session 20. Sep 16 04:30:15.909234 systemd[1]: Started sshd@20-138.199.234.3:22-139.178.89.65:36300.service - OpenSSH per-connection server daemon (139.178.89.65:36300). Sep 16 04:30:16.899000 sshd[4430]: Accepted publickey for core from 139.178.89.65 port 36300 ssh2: RSA SHA256:hnZQROmedaG+reQAaWvmG41QCRiTlF3QrQA4Qzar5jk Sep 16 04:30:16.901201 sshd-session[4430]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:30:16.907338 systemd-logind[1522]: New session 21 of user core. Sep 16 04:30:16.913099 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 16 04:30:18.314032 kubelet[2759]: I0916 04:30:18.313913 2759 memory_manager.go:355] "RemoveStaleState removing state" podUID="e641ff89-8106-4546-9ca0-351528c24d00" containerName="cilium-operator" Sep 16 04:30:18.314032 kubelet[2759]: I0916 04:30:18.313948 2759 memory_manager.go:355] "RemoveStaleState removing state" podUID="22b20782-77b0-43d9-a5ec-de471c3bdf2a" containerName="cilium-agent" Sep 16 04:30:18.324004 systemd[1]: Created slice kubepods-burstable-pod2239a31a_57f3_4584_b2f9_c2990f45589a.slice - libcontainer container kubepods-burstable-pod2239a31a_57f3_4584_b2f9_c2990f45589a.slice. 
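The mount unit names deactivated above (var-lib-kubelet-pods-22b20782\x2d77b0...\x7eprojected-kube\x2dapi\x2daccess\x2dhhxjr.mount) are systemd-escaped kubelet volume paths: "/" becomes "-" and other special bytes such as "-" and "~" become \xNN. The following is a small sketch of that escaping rule, close enough to decode these unit names; it is not the systemd implementation itself.

```go
// systemdescape.go: reproduce the path escaping visible in the mount unit
// names above (e.g. "~" -> `\x7e`, "-" -> `\x2d`, "/" -> "-").
package main

import (
	"fmt"
	"strings"
)

// escapePath mimics `systemd-escape --path` closely enough to read the log:
// surrounding slashes are dropped, "/" maps to "-", and any byte outside
// [A-Za-z0-9:_.] maps to \xNN (a "." in the first position is escaped too).
func escapePath(p string) string {
	p = strings.Trim(p, "/")
	var b strings.Builder
	for i := 0; i < len(p); i++ {
		c := p[i]
		switch {
		case c == '/':
			b.WriteByte('-')
		case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z',
			c >= '0' && c <= '9', c == ':', c == '_',
			c == '.' && i > 0:
			b.WriteByte(c)
		default:
			fmt.Fprintf(&b, `\x%02x`, c)
		}
	}
	return b.String()
}

func main() {
	// The kubelet volume path behind one of the units in the log.
	p := "/var/lib/kubelet/pods/22b20782-77b0-43d9-a5ec-de471c3bdf2a/volumes/kubernetes.io~projected/kube-api-access-hhxjr"
	fmt.Println(escapePath(p) + ".mount")
}
```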
Sep 16 04:30:18.439126 kubelet[2759]: I0916 04:30:18.438986 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2239a31a-57f3-4584-b2f9-c2990f45589a-cilium-run\") pod \"cilium-j9v6s\" (UID: \"2239a31a-57f3-4584-b2f9-c2990f45589a\") " pod="kube-system/cilium-j9v6s" Sep 16 04:30:18.439289 kubelet[2759]: I0916 04:30:18.439150 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2239a31a-57f3-4584-b2f9-c2990f45589a-bpf-maps\") pod \"cilium-j9v6s\" (UID: \"2239a31a-57f3-4584-b2f9-c2990f45589a\") " pod="kube-system/cilium-j9v6s" Sep 16 04:30:18.439289 kubelet[2759]: I0916 04:30:18.439240 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2239a31a-57f3-4584-b2f9-c2990f45589a-cni-path\") pod \"cilium-j9v6s\" (UID: \"2239a31a-57f3-4584-b2f9-c2990f45589a\") " pod="kube-system/cilium-j9v6s" Sep 16 04:30:18.439433 kubelet[2759]: I0916 04:30:18.439326 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2239a31a-57f3-4584-b2f9-c2990f45589a-lib-modules\") pod \"cilium-j9v6s\" (UID: \"2239a31a-57f3-4584-b2f9-c2990f45589a\") " pod="kube-system/cilium-j9v6s" Sep 16 04:30:18.439433 kubelet[2759]: I0916 04:30:18.439408 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2239a31a-57f3-4584-b2f9-c2990f45589a-xtables-lock\") pod \"cilium-j9v6s\" (UID: \"2239a31a-57f3-4584-b2f9-c2990f45589a\") " pod="kube-system/cilium-j9v6s" Sep 16 04:30:18.439523 kubelet[2759]: I0916 04:30:18.439479 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2239a31a-57f3-4584-b2f9-c2990f45589a-hubble-tls\") pod \"cilium-j9v6s\" (UID: \"2239a31a-57f3-4584-b2f9-c2990f45589a\") " pod="kube-system/cilium-j9v6s" Sep 16 04:30:18.439614 kubelet[2759]: I0916 04:30:18.439559 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2239a31a-57f3-4584-b2f9-c2990f45589a-cilium-cgroup\") pod \"cilium-j9v6s\" (UID: \"2239a31a-57f3-4584-b2f9-c2990f45589a\") " pod="kube-system/cilium-j9v6s" Sep 16 04:30:18.439687 kubelet[2759]: I0916 04:30:18.439641 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2239a31a-57f3-4584-b2f9-c2990f45589a-clustermesh-secrets\") pod \"cilium-j9v6s\" (UID: \"2239a31a-57f3-4584-b2f9-c2990f45589a\") " pod="kube-system/cilium-j9v6s" Sep 16 04:30:18.439741 kubelet[2759]: I0916 04:30:18.439680 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2239a31a-57f3-4584-b2f9-c2990f45589a-host-proc-sys-kernel\") pod \"cilium-j9v6s\" (UID: \"2239a31a-57f3-4584-b2f9-c2990f45589a\") " pod="kube-system/cilium-j9v6s" Sep 16 04:30:18.439881 kubelet[2759]: I0916 04:30:18.439811 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/2239a31a-57f3-4584-b2f9-c2990f45589a-host-proc-sys-net\") pod \"cilium-j9v6s\" (UID: \"2239a31a-57f3-4584-b2f9-c2990f45589a\") " pod="kube-system/cilium-j9v6s" Sep 16 04:30:18.440015 kubelet[2759]: I0916 04:30:18.439895 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2239a31a-57f3-4584-b2f9-c2990f45589a-etc-cni-netd\") pod \"cilium-j9v6s\" (UID: \"2239a31a-57f3-4584-b2f9-c2990f45589a\") " pod="kube-system/cilium-j9v6s" Sep 16 04:30:18.440015 kubelet[2759]: I0916 04:30:18.439994 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnz85\" (UniqueName: \"kubernetes.io/projected/2239a31a-57f3-4584-b2f9-c2990f45589a-kube-api-access-gnz85\") pod \"cilium-j9v6s\" (UID: \"2239a31a-57f3-4584-b2f9-c2990f45589a\") " pod="kube-system/cilium-j9v6s" Sep 16 04:30:18.440181 kubelet[2759]: I0916 04:30:18.440141 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2239a31a-57f3-4584-b2f9-c2990f45589a-cilium-config-path\") pod \"cilium-j9v6s\" (UID: \"2239a31a-57f3-4584-b2f9-c2990f45589a\") " pod="kube-system/cilium-j9v6s" Sep 16 04:30:18.440276 kubelet[2759]: I0916 04:30:18.440240 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/2239a31a-57f3-4584-b2f9-c2990f45589a-cilium-ipsec-secrets\") pod \"cilium-j9v6s\" (UID: \"2239a31a-57f3-4584-b2f9-c2990f45589a\") " pod="kube-system/cilium-j9v6s" Sep 16 04:30:18.440363 kubelet[2759]: I0916 04:30:18.440335 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2239a31a-57f3-4584-b2f9-c2990f45589a-hostproc\") pod \"cilium-j9v6s\" (UID: \"2239a31a-57f3-4584-b2f9-c2990f45589a\") " pod="kube-system/cilium-j9v6s" Sep 16 04:30:18.498800 sshd[4433]: Connection closed by 139.178.89.65 port 36300 Sep 16 04:30:18.499813 sshd-session[4430]: pam_unix(sshd:session): session closed for user core Sep 16 04:30:18.507117 systemd[1]: sshd@20-138.199.234.3:22-139.178.89.65:36300.service: Deactivated successfully. Sep 16 04:30:18.507309 systemd-logind[1522]: Session 21 logged out. Waiting for processes to exit. Sep 16 04:30:18.509695 systemd[1]: session-21.scope: Deactivated successfully. Sep 16 04:30:18.512177 systemd-logind[1522]: Removed session 21. Sep 16 04:30:18.628699 containerd[1543]: time="2025-09-16T04:30:18.628295848Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-j9v6s,Uid:2239a31a-57f3-4584-b2f9-c2990f45589a,Namespace:kube-system,Attempt:0,}" Sep 16 04:30:18.651748 containerd[1543]: time="2025-09-16T04:30:18.651696292Z" level=info msg="connecting to shim 30fac8faac2d0aef1f89935d17ac2e49882d9cb79ff7971d9529599e289f9384" address="unix:///run/containerd/s/01d040a5a9b7f9f8db38368a60fed82b9ec8c2e2ddff5d5a7d0dcf64537fe322" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:30:18.672222 systemd[1]: Started sshd@21-138.199.234.3:22-139.178.89.65:36310.service - OpenSSH per-connection server daemon (139.178.89.65:36310). Sep 16 04:30:18.691910 systemd[1]: Started cri-containerd-30fac8faac2d0aef1f89935d17ac2e49882d9cb79ff7971d9529599e289f9384.scope - libcontainer container 30fac8faac2d0aef1f89935d17ac2e49882d9cb79ff7971d9529599e289f9384. 
Sep 16 04:30:18.725100 containerd[1543]: time="2025-09-16T04:30:18.725056035Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-j9v6s,Uid:2239a31a-57f3-4584-b2f9-c2990f45589a,Namespace:kube-system,Attempt:0,} returns sandbox id \"30fac8faac2d0aef1f89935d17ac2e49882d9cb79ff7971d9529599e289f9384\"" Sep 16 04:30:18.732065 containerd[1543]: time="2025-09-16T04:30:18.732023504Z" level=info msg="CreateContainer within sandbox \"30fac8faac2d0aef1f89935d17ac2e49882d9cb79ff7971d9529599e289f9384\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 16 04:30:18.745421 containerd[1543]: time="2025-09-16T04:30:18.745381432Z" level=info msg="Container 23a647d0adaecd45c7c9cbec9bab87dbe95b9987b97060c2c8aece85d2cde3f2: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:30:18.753289 containerd[1543]: time="2025-09-16T04:30:18.753215474Z" level=info msg="CreateContainer within sandbox \"30fac8faac2d0aef1f89935d17ac2e49882d9cb79ff7971d9529599e289f9384\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"23a647d0adaecd45c7c9cbec9bab87dbe95b9987b97060c2c8aece85d2cde3f2\"" Sep 16 04:30:18.754640 containerd[1543]: time="2025-09-16T04:30:18.754616016Z" level=info msg="StartContainer for \"23a647d0adaecd45c7c9cbec9bab87dbe95b9987b97060c2c8aece85d2cde3f2\"" Sep 16 04:30:18.756396 containerd[1543]: time="2025-09-16T04:30:18.756133280Z" level=info msg="connecting to shim 23a647d0adaecd45c7c9cbec9bab87dbe95b9987b97060c2c8aece85d2cde3f2" address="unix:///run/containerd/s/01d040a5a9b7f9f8db38368a60fed82b9ec8c2e2ddff5d5a7d0dcf64537fe322" protocol=ttrpc version=3 Sep 16 04:30:18.780942 systemd[1]: Started cri-containerd-23a647d0adaecd45c7c9cbec9bab87dbe95b9987b97060c2c8aece85d2cde3f2.scope - libcontainer container 23a647d0adaecd45c7c9cbec9bab87dbe95b9987b97060c2c8aece85d2cde3f2. Sep 16 04:30:18.818044 containerd[1543]: time="2025-09-16T04:30:18.817988923Z" level=info msg="StartContainer for \"23a647d0adaecd45c7c9cbec9bab87dbe95b9987b97060c2c8aece85d2cde3f2\" returns successfully" Sep 16 04:30:18.830115 systemd[1]: cri-containerd-23a647d0adaecd45c7c9cbec9bab87dbe95b9987b97060c2c8aece85d2cde3f2.scope: Deactivated successfully. 
Sep 16 04:30:18.835318 containerd[1543]: time="2025-09-16T04:30:18.835110950Z" level=info msg="received exit event container_id:\"23a647d0adaecd45c7c9cbec9bab87dbe95b9987b97060c2c8aece85d2cde3f2\" id:\"23a647d0adaecd45c7c9cbec9bab87dbe95b9987b97060c2c8aece85d2cde3f2\" pid:4511 exited_at:{seconds:1757997018 nanos:834718504}" Sep 16 04:30:18.835318 containerd[1543]: time="2025-09-16T04:30:18.835291953Z" level=info msg="TaskExit event in podsandbox handler container_id:\"23a647d0adaecd45c7c9cbec9bab87dbe95b9987b97060c2c8aece85d2cde3f2\" id:\"23a647d0adaecd45c7c9cbec9bab87dbe95b9987b97060c2c8aece85d2cde3f2\" pid:4511 exited_at:{seconds:1757997018 nanos:834718504}" Sep 16 04:30:18.993095 containerd[1543]: time="2025-09-16T04:30:18.992450761Z" level=info msg="CreateContainer within sandbox \"30fac8faac2d0aef1f89935d17ac2e49882d9cb79ff7971d9529599e289f9384\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 16 04:30:19.004238 containerd[1543]: time="2025-09-16T04:30:19.004161664Z" level=info msg="Container e864eac501dfac8b540ca7ead335f799e9f74a20ed57fd34399b6ceb83ffe1aa: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:30:19.013218 containerd[1543]: time="2025-09-16T04:30:19.013132603Z" level=info msg="CreateContainer within sandbox \"30fac8faac2d0aef1f89935d17ac2e49882d9cb79ff7971d9529599e289f9384\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"e864eac501dfac8b540ca7ead335f799e9f74a20ed57fd34399b6ceb83ffe1aa\"" Sep 16 04:30:19.015105 containerd[1543]: time="2025-09-16T04:30:19.015017353Z" level=info msg="StartContainer for \"e864eac501dfac8b540ca7ead335f799e9f74a20ed57fd34399b6ceb83ffe1aa\"" Sep 16 04:30:19.018066 containerd[1543]: time="2025-09-16T04:30:19.017595153Z" level=info msg="connecting to shim e864eac501dfac8b540ca7ead335f799e9f74a20ed57fd34399b6ceb83ffe1aa" address="unix:///run/containerd/s/01d040a5a9b7f9f8db38368a60fed82b9ec8c2e2ddff5d5a7d0dcf64537fe322" protocol=ttrpc version=3 Sep 16 04:30:19.042108 systemd[1]: Started cri-containerd-e864eac501dfac8b540ca7ead335f799e9f74a20ed57fd34399b6ceb83ffe1aa.scope - libcontainer container e864eac501dfac8b540ca7ead335f799e9f74a20ed57fd34399b6ceb83ffe1aa. Sep 16 04:30:19.079767 containerd[1543]: time="2025-09-16T04:30:19.079722400Z" level=info msg="StartContainer for \"e864eac501dfac8b540ca7ead335f799e9f74a20ed57fd34399b6ceb83ffe1aa\" returns successfully" Sep 16 04:30:19.089291 systemd[1]: cri-containerd-e864eac501dfac8b540ca7ead335f799e9f74a20ed57fd34399b6ceb83ffe1aa.scope: Deactivated successfully. 
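Each init container above follows the same pattern: CreateContainer, StartContainer, the cri-containerd scope deactivates, and containerd reports a TaskExit / "received exit event". A minimal sketch of observing such an exit with the containerd Go client follows, assuming the k8s.io namespace and using the mount-cgroup container ID from the log; it shows the task-exit mechanics, not kubelet internals.

```go
// taskwait.go: wait for a containerd task to exit and read its exit status,
// the event the "received exit event"/"TaskExit" lines above describe.
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// CRI-managed containers live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// ID taken from the log (the mount-cgroup init container).
	container, err := client.LoadContainer(ctx, "23a647d0adaecd45c7c9cbec9bab87dbe95b9987b97060c2c8aece85d2cde3f2")
	if err != nil {
		log.Fatal(err)
	}
	task, err := container.Task(ctx, nil)
	if err != nil {
		log.Fatal(err)
	}

	// Wait returns a channel that delivers the exit status once the task ends.
	statusC, err := task.Wait(ctx)
	if err != nil {
		log.Fatal(err)
	}
	st := <-statusC
	code, exitedAt, err := st.Result()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("exit code %d at %s\n", code, exitedAt)
}
```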
Sep 16 04:30:19.092945 containerd[1543]: time="2025-09-16T04:30:19.092881685Z" level=info msg="received exit event container_id:\"e864eac501dfac8b540ca7ead335f799e9f74a20ed57fd34399b6ceb83ffe1aa\" id:\"e864eac501dfac8b540ca7ead335f799e9f74a20ed57fd34399b6ceb83ffe1aa\" pid:4557 exited_at:{seconds:1757997019 nanos:92256315}" Sep 16 04:30:19.093249 containerd[1543]: time="2025-09-16T04:30:19.093088288Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e864eac501dfac8b540ca7ead335f799e9f74a20ed57fd34399b6ceb83ffe1aa\" id:\"e864eac501dfac8b540ca7ead335f799e9f74a20ed57fd34399b6ceb83ffe1aa\" pid:4557 exited_at:{seconds:1757997019 nanos:92256315}" Sep 16 04:30:19.298214 kubelet[2759]: I0916 04:30:19.296641 2759 setters.go:602] "Node became not ready" node="ci-4459-0-0-n-0223e12d7a" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-16T04:30:19Z","lastTransitionTime":"2025-09-16T04:30:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Sep 16 04:30:19.672442 sshd[4475]: Accepted publickey for core from 139.178.89.65 port 36310 ssh2: RSA SHA256:hnZQROmedaG+reQAaWvmG41QCRiTlF3QrQA4Qzar5jk Sep 16 04:30:19.672866 sshd-session[4475]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:30:19.679300 systemd-logind[1522]: New session 22 of user core. Sep 16 04:30:19.686067 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 16 04:30:19.994434 containerd[1543]: time="2025-09-16T04:30:19.994399435Z" level=info msg="CreateContainer within sandbox \"30fac8faac2d0aef1f89935d17ac2e49882d9cb79ff7971d9529599e289f9384\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 16 04:30:20.012783 containerd[1543]: time="2025-09-16T04:30:20.012703480Z" level=info msg="Container 34e6ee8ceb7dc580c95211e5e345a24a4a1ba5174e31cdc076028f7eac953c63: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:30:20.026183 containerd[1543]: time="2025-09-16T04:30:20.025681962Z" level=info msg="CreateContainer within sandbox \"30fac8faac2d0aef1f89935d17ac2e49882d9cb79ff7971d9529599e289f9384\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"34e6ee8ceb7dc580c95211e5e345a24a4a1ba5174e31cdc076028f7eac953c63\"" Sep 16 04:30:20.028108 containerd[1543]: time="2025-09-16T04:30:20.028039838Z" level=info msg="StartContainer for \"34e6ee8ceb7dc580c95211e5e345a24a4a1ba5174e31cdc076028f7eac953c63\"" Sep 16 04:30:20.034355 containerd[1543]: time="2025-09-16T04:30:20.034290455Z" level=info msg="connecting to shim 34e6ee8ceb7dc580c95211e5e345a24a4a1ba5174e31cdc076028f7eac953c63" address="unix:///run/containerd/s/01d040a5a9b7f9f8db38368a60fed82b9ec8c2e2ddff5d5a7d0dcf64537fe322" protocol=ttrpc version=3 Sep 16 04:30:20.066060 systemd[1]: Started cri-containerd-34e6ee8ceb7dc580c95211e5e345a24a4a1ba5174e31cdc076028f7eac953c63.scope - libcontainer container 34e6ee8ceb7dc580c95211e5e345a24a4a1ba5174e31cdc076028f7eac953c63. Sep 16 04:30:20.115292 systemd[1]: cri-containerd-34e6ee8ceb7dc580c95211e5e345a24a4a1ba5174e31cdc076028f7eac953c63.scope: Deactivated successfully. 
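The "Node became not ready" setter entry above records the NetworkReady=false runtime condition being propagated into the node's Ready condition while the CNI plugin initializes. A short client-go sketch that reads that condition back from the API server; the kubeconfig path is an assumption and the node name is taken from the log.

```go
// nodeready.go: read the Ready condition that the kubelet setter above set to
// False while the CNI plugin was not yet initialized.
package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes a local admin kubeconfig; in-cluster config would also work.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	node, err := clientset.CoreV1().Nodes().Get(context.Background(),
		"ci-4459-0-0-n-0223e12d7a", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			fmt.Printf("Ready=%s reason=%s message=%q\n", c.Status, c.Reason, c.Message)
		}
	}
}
```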
Sep 16 04:30:20.118068 containerd[1543]: time="2025-09-16T04:30:20.117740393Z" level=info msg="received exit event container_id:\"34e6ee8ceb7dc580c95211e5e345a24a4a1ba5174e31cdc076028f7eac953c63\" id:\"34e6ee8ceb7dc580c95211e5e345a24a4a1ba5174e31cdc076028f7eac953c63\" pid:4601 exited_at:{seconds:1757997020 nanos:116170728}" Sep 16 04:30:20.118537 containerd[1543]: time="2025-09-16T04:30:20.118478284Z" level=info msg="TaskExit event in podsandbox handler container_id:\"34e6ee8ceb7dc580c95211e5e345a24a4a1ba5174e31cdc076028f7eac953c63\" id:\"34e6ee8ceb7dc580c95211e5e345a24a4a1ba5174e31cdc076028f7eac953c63\" pid:4601 exited_at:{seconds:1757997020 nanos:116170728}" Sep 16 04:30:20.118921 containerd[1543]: time="2025-09-16T04:30:20.118902171Z" level=info msg="StartContainer for \"34e6ee8ceb7dc580c95211e5e345a24a4a1ba5174e31cdc076028f7eac953c63\" returns successfully" Sep 16 04:30:20.351937 sshd[4588]: Connection closed by 139.178.89.65 port 36310 Sep 16 04:30:20.351662 sshd-session[4475]: pam_unix(sshd:session): session closed for user core Sep 16 04:30:20.359704 systemd[1]: sshd@21-138.199.234.3:22-139.178.89.65:36310.service: Deactivated successfully. Sep 16 04:30:20.364462 systemd[1]: session-22.scope: Deactivated successfully. Sep 16 04:30:20.366612 systemd-logind[1522]: Session 22 logged out. Waiting for processes to exit. Sep 16 04:30:20.368291 systemd-logind[1522]: Removed session 22. Sep 16 04:30:20.442180 kubelet[2759]: E0916 04:30:20.442125 2759 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 16 04:30:20.528007 systemd[1]: Started sshd@22-138.199.234.3:22-139.178.89.65:34630.service - OpenSSH per-connection server daemon (139.178.89.65:34630). Sep 16 04:30:20.550278 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-34e6ee8ceb7dc580c95211e5e345a24a4a1ba5174e31cdc076028f7eac953c63-rootfs.mount: Deactivated successfully. Sep 16 04:30:21.002512 containerd[1543]: time="2025-09-16T04:30:21.001791217Z" level=info msg="CreateContainer within sandbox \"30fac8faac2d0aef1f89935d17ac2e49882d9cb79ff7971d9529599e289f9384\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 16 04:30:21.014779 containerd[1543]: time="2025-09-16T04:30:21.010581833Z" level=info msg="Container 3720316e08c77cd14bdda11decbeeb47cfa01537a6bd81cd6cf8767998b5fdab: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:30:21.022009 containerd[1543]: time="2025-09-16T04:30:21.021847368Z" level=info msg="CreateContainer within sandbox \"30fac8faac2d0aef1f89935d17ac2e49882d9cb79ff7971d9529599e289f9384\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"3720316e08c77cd14bdda11decbeeb47cfa01537a6bd81cd6cf8767998b5fdab\"" Sep 16 04:30:21.025520 containerd[1543]: time="2025-09-16T04:30:21.025486185Z" level=info msg="StartContainer for \"3720316e08c77cd14bdda11decbeeb47cfa01537a6bd81cd6cf8767998b5fdab\"" Sep 16 04:30:21.026850 containerd[1543]: time="2025-09-16T04:30:21.026771685Z" level=info msg="connecting to shim 3720316e08c77cd14bdda11decbeeb47cfa01537a6bd81cd6cf8767998b5fdab" address="unix:///run/containerd/s/01d040a5a9b7f9f8db38368a60fed82b9ec8c2e2ddff5d5a7d0dcf64537fe322" protocol=ttrpc version=3 Sep 16 04:30:21.053231 systemd[1]: Started cri-containerd-3720316e08c77cd14bdda11decbeeb47cfa01537a6bd81cd6cf8767998b5fdab.scope - libcontainer container 3720316e08c77cd14bdda11decbeeb47cfa01537a6bd81cd6cf8767998b5fdab. 
Sep 16 04:30:21.082068 systemd[1]: cri-containerd-3720316e08c77cd14bdda11decbeeb47cfa01537a6bd81cd6cf8767998b5fdab.scope: Deactivated successfully. Sep 16 04:30:21.085101 containerd[1543]: time="2025-09-16T04:30:21.084911588Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3720316e08c77cd14bdda11decbeeb47cfa01537a6bd81cd6cf8767998b5fdab\" id:\"3720316e08c77cd14bdda11decbeeb47cfa01537a6bd81cd6cf8767998b5fdab\" pid:4650 exited_at:{seconds:1757997021 nanos:84637863}" Sep 16 04:30:21.086993 containerd[1543]: time="2025-09-16T04:30:21.086932179Z" level=info msg="received exit event container_id:\"3720316e08c77cd14bdda11decbeeb47cfa01537a6bd81cd6cf8767998b5fdab\" id:\"3720316e08c77cd14bdda11decbeeb47cfa01537a6bd81cd6cf8767998b5fdab\" pid:4650 exited_at:{seconds:1757997021 nanos:84637863}" Sep 16 04:30:21.089235 containerd[1543]: time="2025-09-16T04:30:21.089134453Z" level=info msg="StartContainer for \"3720316e08c77cd14bdda11decbeeb47cfa01537a6bd81cd6cf8767998b5fdab\" returns successfully" Sep 16 04:30:21.110074 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3720316e08c77cd14bdda11decbeeb47cfa01537a6bd81cd6cf8767998b5fdab-rootfs.mount: Deactivated successfully. Sep 16 04:30:21.545396 sshd[4635]: Accepted publickey for core from 139.178.89.65 port 34630 ssh2: RSA SHA256:hnZQROmedaG+reQAaWvmG41QCRiTlF3QrQA4Qzar5jk Sep 16 04:30:21.548287 sshd-session[4635]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:30:21.555191 systemd-logind[1522]: New session 23 of user core. Sep 16 04:30:21.562069 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 16 04:30:22.013043 containerd[1543]: time="2025-09-16T04:30:22.012362231Z" level=info msg="CreateContainer within sandbox \"30fac8faac2d0aef1f89935d17ac2e49882d9cb79ff7971d9529599e289f9384\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 16 04:30:22.030994 containerd[1543]: time="2025-09-16T04:30:22.030900999Z" level=info msg="Container 33649c78f31557c0b7e3bc44f917c020b0b93d6c4431f029796443fd9ba17cfc: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:30:22.042066 containerd[1543]: time="2025-09-16T04:30:22.041999251Z" level=info msg="CreateContainer within sandbox \"30fac8faac2d0aef1f89935d17ac2e49882d9cb79ff7971d9529599e289f9384\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"33649c78f31557c0b7e3bc44f917c020b0b93d6c4431f029796443fd9ba17cfc\"" Sep 16 04:30:22.044011 containerd[1543]: time="2025-09-16T04:30:22.043943441Z" level=info msg="StartContainer for \"33649c78f31557c0b7e3bc44f917c020b0b93d6c4431f029796443fd9ba17cfc\"" Sep 16 04:30:22.046699 containerd[1543]: time="2025-09-16T04:30:22.046651603Z" level=info msg="connecting to shim 33649c78f31557c0b7e3bc44f917c020b0b93d6c4431f029796443fd9ba17cfc" address="unix:///run/containerd/s/01d040a5a9b7f9f8db38368a60fed82b9ec8c2e2ddff5d5a7d0dcf64537fe322" protocol=ttrpc version=3 Sep 16 04:30:22.072236 systemd[1]: Started cri-containerd-33649c78f31557c0b7e3bc44f917c020b0b93d6c4431f029796443fd9ba17cfc.scope - libcontainer container 33649c78f31557c0b7e3bc44f917c020b0b93d6c4431f029796443fd9ba17cfc. 
Sep 16 04:30:22.121321 containerd[1543]: time="2025-09-16T04:30:22.121262241Z" level=info msg="StartContainer for \"33649c78f31557c0b7e3bc44f917c020b0b93d6c4431f029796443fd9ba17cfc\" returns successfully" Sep 16 04:30:22.222277 containerd[1543]: time="2025-09-16T04:30:22.222203967Z" level=info msg="TaskExit event in podsandbox handler container_id:\"33649c78f31557c0b7e3bc44f917c020b0b93d6c4431f029796443fd9ba17cfc\" id:\"55ec9c0a08a6b1e7774e4e66ec010fa4f816e53567903047edd00168eced07f8\" pid:4725 exited_at:{seconds:1757997022 nanos:221710399}" Sep 16 04:30:22.449815 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Sep 16 04:30:23.040928 kubelet[2759]: I0916 04:30:23.040812 2759 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-j9v6s" podStartSLOduration=5.040787866 podStartE2EDuration="5.040787866s" podCreationTimestamp="2025-09-16 04:30:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 04:30:23.040347299 +0000 UTC m=+207.859465396" watchObservedRunningTime="2025-09-16 04:30:23.040787866 +0000 UTC m=+207.859905963" Sep 16 04:30:23.314719 kubelet[2759]: E0916 04:30:23.313123 2759 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-mv2gp" podUID="3d36bfd1-a8fe-48fb-8e64-b83a960860b0" Sep 16 04:30:24.333982 containerd[1543]: time="2025-09-16T04:30:24.333841661Z" level=info msg="TaskExit event in podsandbox handler container_id:\"33649c78f31557c0b7e3bc44f917c020b0b93d6c4431f029796443fd9ba17cfc\" id:\"feb75c5a4b9a986fcf03576e102740072a600b78bef6c940da35fbe2e518e11a\" pid:4885 exit_status:1 exited_at:{seconds:1757997024 nanos:333281452}" Sep 16 04:30:25.316006 kubelet[2759]: E0916 04:30:25.314459 2759 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-mv2gp" podUID="3d36bfd1-a8fe-48fb-8e64-b83a960860b0" Sep 16 04:30:25.485591 systemd-networkd[1409]: lxc_health: Link UP Sep 16 04:30:25.502633 systemd-networkd[1409]: lxc_health: Gained carrier Sep 16 04:30:26.517133 containerd[1543]: time="2025-09-16T04:30:26.517062832Z" level=info msg="TaskExit event in podsandbox handler container_id:\"33649c78f31557c0b7e3bc44f917c020b0b93d6c4431f029796443fd9ba17cfc\" id:\"977442178bdc46a4fa03b3bab2e2ff5c7931afff2cf41319e34af151663866d6\" pid:5251 exited_at:{seconds:1757997026 nanos:516111137}" Sep 16 04:30:27.066989 systemd-networkd[1409]: lxc_health: Gained IPv6LL Sep 16 04:30:28.699718 containerd[1543]: time="2025-09-16T04:30:28.699567800Z" level=info msg="TaskExit event in podsandbox handler container_id:\"33649c78f31557c0b7e3bc44f917c020b0b93d6c4431f029796443fd9ba17cfc\" id:\"b9f5712c60e89337241b7cbcb70368f1186f2ae1d33d665a91d8c0a5ec7ac4d3\" pid:5283 exited_at:{seconds:1757997028 nanos:699286476}" Sep 16 04:30:30.835545 containerd[1543]: time="2025-09-16T04:30:30.835495504Z" level=info msg="TaskExit event in podsandbox handler container_id:\"33649c78f31557c0b7e3bc44f917c020b0b93d6c4431f029796443fd9ba17cfc\" id:\"be9c49b5cdfa5ae698575daeebaf4ff07e309fad20f520ed3360db4966769cd6\" pid:5311 exited_at:{seconds:1757997030 
nanos:834786813}" Sep 16 04:30:32.978262 containerd[1543]: time="2025-09-16T04:30:32.978039285Z" level=info msg="TaskExit event in podsandbox handler container_id:\"33649c78f31557c0b7e3bc44f917c020b0b93d6c4431f029796443fd9ba17cfc\" id:\"4c9b49dbc9ee012fc6ca300d24c374178ed63afac3c863b05cfc9edcd32ecec2\" pid:5333 exited_at:{seconds:1757997032 nanos:977352955}" Sep 16 04:30:33.194873 sshd[4675]: Connection closed by 139.178.89.65 port 34630 Sep 16 04:30:33.195405 sshd-session[4635]: pam_unix(sshd:session): session closed for user core Sep 16 04:30:33.201875 systemd[1]: sshd@22-138.199.234.3:22-139.178.89.65:34630.service: Deactivated successfully. Sep 16 04:30:33.204477 systemd[1]: session-23.scope: Deactivated successfully. Sep 16 04:30:33.207503 systemd-logind[1522]: Session 23 logged out. Waiting for processes to exit. Sep 16 04:30:33.208517 systemd-logind[1522]: Removed session 23. Sep 16 04:30:50.398997 systemd[1]: Started sshd@23-138.199.234.3:22-196.251.114.29:51824.service - OpenSSH per-connection server daemon (196.251.114.29:51824). Sep 16 04:30:50.447197 sshd[5354]: Connection closed by 196.251.114.29 port 51824 Sep 16 04:30:50.450199 systemd[1]: sshd@23-138.199.234.3:22-196.251.114.29:51824.service: Deactivated successfully. Sep 16 04:30:55.302563 containerd[1543]: time="2025-09-16T04:30:55.302493969Z" level=info msg="StopPodSandbox for \"4791e40dca24d0b745a47ee23bf4e9c91c20d768ee592161dcc135ab82e3d4c8\"" Sep 16 04:30:55.303621 containerd[1543]: time="2025-09-16T04:30:55.303301661Z" level=info msg="TearDown network for sandbox \"4791e40dca24d0b745a47ee23bf4e9c91c20d768ee592161dcc135ab82e3d4c8\" successfully" Sep 16 04:30:55.303621 containerd[1543]: time="2025-09-16T04:30:55.303339022Z" level=info msg="StopPodSandbox for \"4791e40dca24d0b745a47ee23bf4e9c91c20d768ee592161dcc135ab82e3d4c8\" returns successfully" Sep 16 04:30:55.304044 containerd[1543]: time="2025-09-16T04:30:55.304007672Z" level=info msg="RemovePodSandbox for \"4791e40dca24d0b745a47ee23bf4e9c91c20d768ee592161dcc135ab82e3d4c8\"" Sep 16 04:30:55.304115 containerd[1543]: time="2025-09-16T04:30:55.304058113Z" level=info msg="Forcibly stopping sandbox \"4791e40dca24d0b745a47ee23bf4e9c91c20d768ee592161dcc135ab82e3d4c8\"" Sep 16 04:30:55.304210 containerd[1543]: time="2025-09-16T04:30:55.304184075Z" level=info msg="TearDown network for sandbox \"4791e40dca24d0b745a47ee23bf4e9c91c20d768ee592161dcc135ab82e3d4c8\" successfully" Sep 16 04:30:55.306526 containerd[1543]: time="2025-09-16T04:30:55.306488830Z" level=info msg="Ensure that sandbox 4791e40dca24d0b745a47ee23bf4e9c91c20d768ee592161dcc135ab82e3d4c8 in task-service has been cleanup successfully" Sep 16 04:30:55.310345 containerd[1543]: time="2025-09-16T04:30:55.310301927Z" level=info msg="RemovePodSandbox \"4791e40dca24d0b745a47ee23bf4e9c91c20d768ee592161dcc135ab82e3d4c8\" returns successfully" Sep 16 04:30:55.310997 containerd[1543]: time="2025-09-16T04:30:55.310920336Z" level=info msg="StopPodSandbox for \"6542c9d2c0678b2c4a55eb0926bec8cede83e45cd96ef0d523b2921c2c3d9062\"" Sep 16 04:30:55.311257 containerd[1543]: time="2025-09-16T04:30:55.311220821Z" level=info msg="TearDown network for sandbox \"6542c9d2c0678b2c4a55eb0926bec8cede83e45cd96ef0d523b2921c2c3d9062\" successfully" Sep 16 04:30:55.311257 containerd[1543]: time="2025-09-16T04:30:55.311240061Z" level=info msg="StopPodSandbox for \"6542c9d2c0678b2c4a55eb0926bec8cede83e45cd96ef0d523b2921c2c3d9062\" returns successfully" Sep 16 04:30:55.311942 containerd[1543]: time="2025-09-16T04:30:55.311826270Z" level=info 
msg="RemovePodSandbox for \"6542c9d2c0678b2c4a55eb0926bec8cede83e45cd96ef0d523b2921c2c3d9062\"" Sep 16 04:30:55.312100 containerd[1543]: time="2025-09-16T04:30:55.311877791Z" level=info msg="Forcibly stopping sandbox \"6542c9d2c0678b2c4a55eb0926bec8cede83e45cd96ef0d523b2921c2c3d9062\"" Sep 16 04:30:55.312189 containerd[1543]: time="2025-09-16T04:30:55.312173235Z" level=info msg="TearDown network for sandbox \"6542c9d2c0678b2c4a55eb0926bec8cede83e45cd96ef0d523b2921c2c3d9062\" successfully" Sep 16 04:30:55.317199 containerd[1543]: time="2025-09-16T04:30:55.317149831Z" level=info msg="Ensure that sandbox 6542c9d2c0678b2c4a55eb0926bec8cede83e45cd96ef0d523b2921c2c3d9062 in task-service has been cleanup successfully" Sep 16 04:30:55.324876 containerd[1543]: time="2025-09-16T04:30:55.324842227Z" level=info msg="RemovePodSandbox \"6542c9d2c0678b2c4a55eb0926bec8cede83e45cd96ef0d523b2921c2c3d9062\" returns successfully" Sep 16 04:31:05.406719 kubelet[2759]: E0916 04:31:05.406665 2759 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:53976->10.0.0.2:2379: read: connection timed out" Sep 16 04:31:05.413429 containerd[1543]: time="2025-09-16T04:31:05.413300587Z" level=info msg="received exit event container_id:\"f74fa7b180df69b6b60f88fb2f39ff028f534ca5febeb2d7cf3301d330702b55\" id:\"f74fa7b180df69b6b60f88fb2f39ff028f534ca5febeb2d7cf3301d330702b55\" pid:2633 exit_status:1 exited_at:{seconds:1757997065 nanos:412850220}" Sep 16 04:31:05.413429 containerd[1543]: time="2025-09-16T04:31:05.413394828Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f74fa7b180df69b6b60f88fb2f39ff028f534ca5febeb2d7cf3301d330702b55\" id:\"f74fa7b180df69b6b60f88fb2f39ff028f534ca5febeb2d7cf3301d330702b55\" pid:2633 exit_status:1 exited_at:{seconds:1757997065 nanos:412850220}" Sep 16 04:31:05.413450 systemd[1]: cri-containerd-f74fa7b180df69b6b60f88fb2f39ff028f534ca5febeb2d7cf3301d330702b55.scope: Deactivated successfully. Sep 16 04:31:05.413901 systemd[1]: cri-containerd-f74fa7b180df69b6b60f88fb2f39ff028f534ca5febeb2d7cf3301d330702b55.scope: Consumed 4.863s CPU time, 22.4M memory peak. Sep 16 04:31:05.444172 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f74fa7b180df69b6b60f88fb2f39ff028f534ca5febeb2d7cf3301d330702b55-rootfs.mount: Deactivated successfully. Sep 16 04:31:05.722580 systemd[1]: cri-containerd-ddd38f9c371834bb52febe0b3f6968785b5bf98d043071fe31d913f337aab433.scope: Deactivated successfully. Sep 16 04:31:05.723367 systemd[1]: cri-containerd-ddd38f9c371834bb52febe0b3f6968785b5bf98d043071fe31d913f337aab433.scope: Consumed 4.519s CPU time, 55.4M memory peak. 
Sep 16 04:31:05.726958 containerd[1543]: time="2025-09-16T04:31:05.726914416Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ddd38f9c371834bb52febe0b3f6968785b5bf98d043071fe31d913f337aab433\" id:\"ddd38f9c371834bb52febe0b3f6968785b5bf98d043071fe31d913f337aab433\" pid:2602 exit_status:1 exited_at:{seconds:1757997065 nanos:726385408}" Sep 16 04:31:05.727197 containerd[1543]: time="2025-09-16T04:31:05.727010897Z" level=info msg="received exit event container_id:\"ddd38f9c371834bb52febe0b3f6968785b5bf98d043071fe31d913f337aab433\" id:\"ddd38f9c371834bb52febe0b3f6968785b5bf98d043071fe31d913f337aab433\" pid:2602 exit_status:1 exited_at:{seconds:1757997065 nanos:726385408}" Sep 16 04:31:05.750335 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ddd38f9c371834bb52febe0b3f6968785b5bf98d043071fe31d913f337aab433-rootfs.mount: Deactivated successfully. Sep 16 04:31:06.139961 kubelet[2759]: I0916 04:31:06.139899 2759 scope.go:117] "RemoveContainer" containerID="f74fa7b180df69b6b60f88fb2f39ff028f534ca5febeb2d7cf3301d330702b55" Sep 16 04:31:06.141719 containerd[1543]: time="2025-09-16T04:31:06.141680602Z" level=info msg="CreateContainer within sandbox \"b51acdd0dff02191d732dd9b6f680847f9ffa75898555e8085688b10b86676eb\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Sep 16 04:31:06.144804 kubelet[2759]: I0916 04:31:06.144561 2759 scope.go:117] "RemoveContainer" containerID="ddd38f9c371834bb52febe0b3f6968785b5bf98d043071fe31d913f337aab433" Sep 16 04:31:06.146709 containerd[1543]: time="2025-09-16T04:31:06.146651557Z" level=info msg="CreateContainer within sandbox \"02051610f4ac87a2e2efbd4b2636d0cbb0b8e16079321eb3ce0d0fff8eb3dade\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Sep 16 04:31:06.159786 containerd[1543]: time="2025-09-16T04:31:06.158156170Z" level=info msg="Container 51c1a09efa68eacd8e9e793814d8bd0e6e85d1e42754a68091dee3db3987f4b0: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:31:06.160671 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1139343180.mount: Deactivated successfully. 
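After the kube-scheduler and kube-controller-manager containers exit with status 1 above, the kubelet removes the old containers and creates replacements in the same static-pod sandboxes with Attempt:1, which surfaces as an incremented restart count in pod status. A client-go sketch that reads that restart count and last termination state follows; the static-pod name is inferred from the node name in the log and, like the kubeconfig path, is an assumption.

```go
// restarts.go: inspect the restart count and last termination state produced
// by the RemoveContainer/CreateContainer (Attempt:1) cycle above.
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// Static-pod name assumed from the node name in the log.
	pod, err := clientset.CoreV1().Pods("kube-system").Get(context.Background(),
		"kube-scheduler-ci-4459-0-0-n-0223e12d7a", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, cs := range pod.Status.ContainerStatuses {
		fmt.Printf("%s restarts=%d\n", cs.Name, cs.RestartCount)
		if t := cs.LastTerminationState.Terminated; t != nil {
			fmt.Printf("  last exit code %d at %s\n", t.ExitCode, t.FinishedAt)
		}
	}
}
```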
Sep 16 04:31:06.168403 containerd[1543]: time="2025-09-16T04:31:06.168347603Z" level=info msg="Container cbaa16118a77d4e7dca665e72bb7cc3cb294089ee32bff4deaaaab8aa1f93b6a: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:31:06.173900 containerd[1543]: time="2025-09-16T04:31:06.173845805Z" level=info msg="CreateContainer within sandbox \"b51acdd0dff02191d732dd9b6f680847f9ffa75898555e8085688b10b86676eb\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"51c1a09efa68eacd8e9e793814d8bd0e6e85d1e42754a68091dee3db3987f4b0\"" Sep 16 04:31:06.174729 containerd[1543]: time="2025-09-16T04:31:06.174701538Z" level=info msg="StartContainer for \"51c1a09efa68eacd8e9e793814d8bd0e6e85d1e42754a68091dee3db3987f4b0\"" Sep 16 04:31:06.178310 containerd[1543]: time="2025-09-16T04:31:06.178234391Z" level=info msg="connecting to shim 51c1a09efa68eacd8e9e793814d8bd0e6e85d1e42754a68091dee3db3987f4b0" address="unix:///run/containerd/s/83b12eda4eeb5c0dfc6a980fb8982090edb9b3d0e2c37b79f3eeebfce4ff008a" protocol=ttrpc version=3 Sep 16 04:31:06.178645 containerd[1543]: time="2025-09-16T04:31:06.178592636Z" level=info msg="CreateContainer within sandbox \"02051610f4ac87a2e2efbd4b2636d0cbb0b8e16079321eb3ce0d0fff8eb3dade\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"cbaa16118a77d4e7dca665e72bb7cc3cb294089ee32bff4deaaaab8aa1f93b6a\"" Sep 16 04:31:06.179971 containerd[1543]: time="2025-09-16T04:31:06.179927696Z" level=info msg="StartContainer for \"cbaa16118a77d4e7dca665e72bb7cc3cb294089ee32bff4deaaaab8aa1f93b6a\"" Sep 16 04:31:06.185711 containerd[1543]: time="2025-09-16T04:31:06.185602701Z" level=info msg="connecting to shim cbaa16118a77d4e7dca665e72bb7cc3cb294089ee32bff4deaaaab8aa1f93b6a" address="unix:///run/containerd/s/147718f68acdbea20fdc0cbb02a59993fc0afee35572d88f02dd2d8c4f17fe65" protocol=ttrpc version=3 Sep 16 04:31:06.202957 systemd[1]: Started cri-containerd-51c1a09efa68eacd8e9e793814d8bd0e6e85d1e42754a68091dee3db3987f4b0.scope - libcontainer container 51c1a09efa68eacd8e9e793814d8bd0e6e85d1e42754a68091dee3db3987f4b0. Sep 16 04:31:06.212076 systemd[1]: Started cri-containerd-cbaa16118a77d4e7dca665e72bb7cc3cb294089ee32bff4deaaaab8aa1f93b6a.scope - libcontainer container cbaa16118a77d4e7dca665e72bb7cc3cb294089ee32bff4deaaaab8aa1f93b6a. 
Sep 16 04:31:06.258025 containerd[1543]: time="2025-09-16T04:31:06.257949307Z" level=info msg="StartContainer for \"51c1a09efa68eacd8e9e793814d8bd0e6e85d1e42754a68091dee3db3987f4b0\" returns successfully" Sep 16 04:31:06.271956 containerd[1543]: time="2025-09-16T04:31:06.271809475Z" level=info msg="StartContainer for \"cbaa16118a77d4e7dca665e72bb7cc3cb294089ee32bff4deaaaab8aa1f93b6a\" returns successfully" Sep 16 04:31:08.605408 kubelet[2759]: E0916 04:31:08.605159 2759 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:53768->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4459-0-0-n-0223e12d7a.1865a902983d42a2 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4459-0-0-n-0223e12d7a,UID:f5ed49a9e0a52c7da4f704e90d0d6872,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Liveness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4459-0-0-n-0223e12d7a,},FirstTimestamp:2025-09-16 04:30:58.167227042 +0000 UTC m=+242.986345099,LastTimestamp:2025-09-16 04:30:58.167227042 +0000 UTC m=+242.986345099,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459-0-0-n-0223e12d7a,}"
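The final entry shows the kubelet failing to persist an Unhealthy (liveness probe) event for the API server because its etcd read timed out; the event is rejected and not retried. Once the API server recovers, such events can be read back with client-go as sketched below; the field-selector values are taken from the event in the log and the kubeconfig path is assumed.

```go
// events.go: list Warning events recorded for the kube-apiserver static pod,
// the kind of event the kubelet failed to deliver in the last entry above.
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	events, err := clientset.CoreV1().Events("kube-system").List(context.Background(), metav1.ListOptions{
		FieldSelector: "involvedObject.name=kube-apiserver-ci-4459-0-0-n-0223e12d7a,type=Warning",
	})
	if err != nil {
		log.Fatal(err)
	}
	for _, e := range events.Items {
		fmt.Printf("%s %s: %s\n", e.LastTimestamp, e.Reason, e.Message)
	}
}
```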