Sep 4 23:44:59.873930 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Sep 4 23:44:59.874027 kernel: Linux version 6.6.103-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT Thu Sep 4 22:21:25 -00 2025
Sep 4 23:44:59.874039 kernel: KASLR enabled
Sep 4 23:44:59.874045 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II
Sep 4 23:44:59.874051 kernel: efi: SMBIOS 3.0=0x139ed0000 MEMATTR=0x1390bb018 ACPI 2.0=0x136760018 RNG=0x13676e918 MEMRESERVE=0x136b41218
Sep 4 23:44:59.874056 kernel: random: crng init done
Sep 4 23:44:59.874063 kernel: secureboot: Secure boot disabled
Sep 4 23:44:59.874069 kernel: ACPI: Early table checksum verification disabled
Sep 4 23:44:59.874075 kernel: ACPI: RSDP 0x0000000136760018 000024 (v02 BOCHS )
Sep 4 23:44:59.874083 kernel: ACPI: XSDT 0x000000013676FE98 00006C (v01 BOCHS BXPC 00000001 01000013)
Sep 4 23:44:59.874089 kernel: ACPI: FACP 0x000000013676FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 23:44:59.874095 kernel: ACPI: DSDT 0x0000000136767518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 23:44:59.874101 kernel: ACPI: APIC 0x000000013676FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 23:44:59.874107 kernel: ACPI: PPTT 0x000000013676FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 23:44:59.874114 kernel: ACPI: GTDT 0x000000013676D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 23:44:59.874122 kernel: ACPI: MCFG 0x000000013676FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 23:44:59.874128 kernel: ACPI: SPCR 0x000000013676E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 23:44:59.874134 kernel: ACPI: DBG2 0x000000013676E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 23:44:59.874140 kernel: ACPI: IORT 0x000000013676E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 23:44:59.874147 kernel: ACPI: BGRT 0x000000013676E798 000038 (v01 INTEL EDK2 00000002 01000013)
Sep 4 23:44:59.874153 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600
Sep 4 23:44:59.874159 kernel: NUMA: Failed to initialise from firmware
Sep 4 23:44:59.874165 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff]
Sep 4 23:44:59.874171 kernel: NUMA: NODE_DATA [mem 0x13966f800-0x139674fff]
Sep 4 23:44:59.874177 kernel: Zone ranges:
Sep 4 23:44:59.874185 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Sep 4 23:44:59.874191 kernel: DMA32 empty
Sep 4 23:44:59.874197 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff]
Sep 4 23:44:59.874203 kernel: Movable zone start for each node
Sep 4 23:44:59.874210 kernel: Early memory node ranges
Sep 4 23:44:59.874216 kernel: node 0: [mem 0x0000000040000000-0x000000013666ffff]
Sep 4 23:44:59.874222 kernel: node 0: [mem 0x0000000136670000-0x000000013667ffff]
Sep 4 23:44:59.874228 kernel: node 0: [mem 0x0000000136680000-0x000000013676ffff]
Sep 4 23:44:59.874234 kernel: node 0: [mem 0x0000000136770000-0x0000000136b3ffff]
Sep 4 23:44:59.874240 kernel: node 0: [mem 0x0000000136b40000-0x0000000139e1ffff]
Sep 4 23:44:59.874246 kernel: node 0: [mem 0x0000000139e20000-0x0000000139eaffff]
Sep 4 23:44:59.874252 kernel: node 0: [mem 0x0000000139eb0000-0x0000000139ebffff]
Sep 4 23:44:59.874260 kernel: node 0: [mem 0x0000000139ec0000-0x0000000139fdffff]
Sep 4 23:44:59.874266 kernel: node 0: [mem 0x0000000139fe0000-0x0000000139ffffff]
Sep 4 23:44:59.874272 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff]
Sep 4 23:44:59.874281 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
Sep 4 23:44:59.874288 kernel: psci: probing for conduit method from ACPI.
Sep 4 23:44:59.874294 kernel: psci: PSCIv1.1 detected in firmware.
Sep 4 23:44:59.874302 kernel: psci: Using standard PSCI v0.2 function IDs
Sep 4 23:44:59.874309 kernel: psci: Trusted OS migration not required
Sep 4 23:44:59.874315 kernel: psci: SMC Calling Convention v1.1
Sep 4 23:44:59.874322 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Sep 4 23:44:59.874328 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
Sep 4 23:44:59.874335 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
Sep 4 23:44:59.874341 kernel: pcpu-alloc: [0] 0 [0] 1
Sep 4 23:44:59.874348 kernel: Detected PIPT I-cache on CPU0
Sep 4 23:44:59.874354 kernel: CPU features: detected: GIC system register CPU interface
Sep 4 23:44:59.874361 kernel: CPU features: detected: Hardware dirty bit management
Sep 4 23:44:59.874369 kernel: CPU features: detected: Spectre-v4
Sep 4 23:44:59.874375 kernel: CPU features: detected: Spectre-BHB
Sep 4 23:44:59.874382 kernel: CPU features: kernel page table isolation forced ON by KASLR
Sep 4 23:44:59.874388 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Sep 4 23:44:59.874395 kernel: CPU features: detected: ARM erratum 1418040
Sep 4 23:44:59.874401 kernel: CPU features: detected: SSBS not fully self-synchronizing
Sep 4 23:44:59.874407 kernel: alternatives: applying boot alternatives
Sep 4 23:44:59.874415 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=0304960b24e314f6095f7d8ad705a9bc0a9a4a34f7817da10ea634466a73d86e
Sep 4 23:44:59.874422 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 4 23:44:59.874428 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 4 23:44:59.874435 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 4 23:44:59.874443 kernel: Fallback order for Node 0: 0
Sep 4 23:44:59.874449 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1008000
Sep 4 23:44:59.874456 kernel: Policy zone: Normal
Sep 4 23:44:59.874462 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 4 23:44:59.874469 kernel: software IO TLB: area num 2.
Sep 4 23:44:59.874476 kernel: software IO TLB: mapped [mem 0x00000000fbfff000-0x00000000fffff000] (64MB)
Sep 4 23:44:59.874483 kernel: Memory: 3883768K/4096000K available (10368K kernel code, 2186K rwdata, 8104K rodata, 38400K init, 897K bss, 212232K reserved, 0K cma-reserved)
Sep 4 23:44:59.874489 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Sep 4 23:44:59.874496 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 4 23:44:59.874503 kernel: rcu: RCU event tracing is enabled.
Sep 4 23:44:59.874510 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Sep 4 23:44:59.874516 kernel: Trampoline variant of Tasks RCU enabled.
Sep 4 23:44:59.874524 kernel: Tracing variant of Tasks RCU enabled.
Sep 4 23:44:59.874531 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 4 23:44:59.874538 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Sep 4 23:44:59.874544 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Sep 4 23:44:59.874551 kernel: GICv3: 256 SPIs implemented
Sep 4 23:44:59.874557 kernel: GICv3: 0 Extended SPIs implemented
Sep 4 23:44:59.874564 kernel: Root IRQ handler: gic_handle_irq
Sep 4 23:44:59.874570 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Sep 4 23:44:59.874577 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Sep 4 23:44:59.874638 kernel: ITS [mem 0x08080000-0x0809ffff]
Sep 4 23:44:59.874645 kernel: ITS@0x0000000008080000: allocated 8192 Devices @1000c0000 (indirect, esz 8, psz 64K, shr 1)
Sep 4 23:44:59.874655 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @1000d0000 (flat, esz 8, psz 64K, shr 1)
Sep 4 23:44:59.874662 kernel: GICv3: using LPI property table @0x00000001000e0000
Sep 4 23:44:59.874669 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000001000f0000
Sep 4 23:44:59.874675 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 4 23:44:59.874682 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 4 23:44:59.874689 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Sep 4 23:44:59.874695 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Sep 4 23:44:59.874702 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Sep 4 23:44:59.874708 kernel: Console: colour dummy device 80x25
Sep 4 23:44:59.874715 kernel: ACPI: Core revision 20230628
Sep 4 23:44:59.874723 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Sep 4 23:44:59.874731 kernel: pid_max: default: 32768 minimum: 301
Sep 4 23:44:59.874738 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Sep 4 23:44:59.874745 kernel: landlock: Up and running.
Sep 4 23:44:59.874751 kernel: SELinux: Initializing.
Sep 4 23:44:59.874758 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 4 23:44:59.874765 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 4 23:44:59.874772 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 4 23:44:59.874779 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 4 23:44:59.874785 kernel: rcu: Hierarchical SRCU implementation.
Sep 4 23:44:59.874794 kernel: rcu: Max phase no-delay instances is 400.
Sep 4 23:44:59.874801 kernel: Platform MSI: ITS@0x8080000 domain created
Sep 4 23:44:59.874808 kernel: PCI/MSI: ITS@0x8080000 domain created
Sep 4 23:44:59.874814 kernel: Remapping and enabling EFI services.
Sep 4 23:44:59.874821 kernel: smp: Bringing up secondary CPUs ...
Sep 4 23:44:59.874828 kernel: Detected PIPT I-cache on CPU1
Sep 4 23:44:59.874835 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Sep 4 23:44:59.874841 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100100000
Sep 4 23:44:59.874848 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 4 23:44:59.874855 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Sep 4 23:44:59.874863 kernel: smp: Brought up 1 node, 2 CPUs
Sep 4 23:44:59.874876 kernel: SMP: Total of 2 processors activated.
Sep 4 23:44:59.874884 kernel: CPU features: detected: 32-bit EL0 Support
Sep 4 23:44:59.874891 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Sep 4 23:44:59.874899 kernel: CPU features: detected: Common not Private translations
Sep 4 23:44:59.874906 kernel: CPU features: detected: CRC32 instructions
Sep 4 23:44:59.874913 kernel: CPU features: detected: Enhanced Virtualization Traps
Sep 4 23:44:59.874920 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Sep 4 23:44:59.874929 kernel: CPU features: detected: LSE atomic instructions
Sep 4 23:44:59.874936 kernel: CPU features: detected: Privileged Access Never
Sep 4 23:44:59.874943 kernel: CPU features: detected: RAS Extension Support
Sep 4 23:44:59.874950 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Sep 4 23:44:59.874957 kernel: CPU: All CPU(s) started at EL1
Sep 4 23:44:59.874964 kernel: alternatives: applying system-wide alternatives
Sep 4 23:44:59.874971 kernel: devtmpfs: initialized
Sep 4 23:44:59.874978 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 4 23:44:59.874987 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Sep 4 23:44:59.874994 kernel: pinctrl core: initialized pinctrl subsystem
Sep 4 23:44:59.875001 kernel: SMBIOS 3.0.0 present.
Sep 4 23:44:59.875008 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017
Sep 4 23:44:59.875016 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 4 23:44:59.875023 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Sep 4 23:44:59.875030 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Sep 4 23:44:59.875038 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Sep 4 23:44:59.875045 kernel: audit: initializing netlink subsys (disabled)
Sep 4 23:44:59.875054 kernel: audit: type=2000 audit(0.012:1): state=initialized audit_enabled=0 res=1
Sep 4 23:44:59.875061 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 4 23:44:59.875068 kernel: cpuidle: using governor menu
Sep 4 23:44:59.875075 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Sep 4 23:44:59.875082 kernel: ASID allocator initialised with 32768 entries
Sep 4 23:44:59.875090 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 4 23:44:59.875097 kernel: Serial: AMBA PL011 UART driver
Sep 4 23:44:59.875104 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Sep 4 23:44:59.875111 kernel: Modules: 0 pages in range for non-PLT usage
Sep 4 23:44:59.875120 kernel: Modules: 509248 pages in range for PLT usage
Sep 4 23:44:59.875127 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 4 23:44:59.875134 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Sep 4 23:44:59.875141 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Sep 4 23:44:59.875148 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Sep 4 23:44:59.875155 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 4 23:44:59.875162 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Sep 4 23:44:59.875169 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Sep 4 23:44:59.875176 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Sep 4 23:44:59.875184 kernel: ACPI: Added _OSI(Module Device)
Sep 4 23:44:59.875192 kernel: ACPI: Added _OSI(Processor Device)
Sep 4 23:44:59.875199 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 4 23:44:59.875206 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 4 23:44:59.875213 kernel: ACPI: Interpreter enabled
Sep 4 23:44:59.875220 kernel: ACPI: Using GIC for interrupt routing
Sep 4 23:44:59.875227 kernel: ACPI: MCFG table detected, 1 entries
Sep 4 23:44:59.875234 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Sep 4 23:44:59.875242 kernel: printk: console [ttyAMA0] enabled
Sep 4 23:44:59.875448 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 4 23:44:59.877744 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 4 23:44:59.877871 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Sep 4 23:44:59.877939 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Sep 4 23:44:59.878003 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Sep 4 23:44:59.878066 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Sep 4 23:44:59.878075 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Sep 4 23:44:59.878087 kernel: PCI host bridge to bus 0000:00
Sep 4 23:44:59.878753 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Sep 4 23:44:59.878859 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Sep 4 23:44:59.878923 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Sep 4 23:44:59.878983 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 4 23:44:59.879073 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Sep 4 23:44:59.879158 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000
Sep 4 23:44:59.879237 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x11289000-0x11289fff]
Sep 4 23:44:59.879308 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000600000-0x8000603fff 64bit pref]
Sep 4 23:44:59.879391 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Sep 4 23:44:59.879462 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x11288000-0x11288fff]
Sep 4 23:44:59.879539 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Sep 4 23:44:59.880766 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x11287000-0x11287fff]
Sep 4 23:44:59.880892 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Sep 4 23:44:59.880963 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x11286000-0x11286fff]
Sep 4 23:44:59.881037 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Sep 4 23:44:59.881104 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x11285000-0x11285fff]
Sep 4 23:44:59.881176 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Sep 4 23:44:59.881243 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x11284000-0x11284fff]
Sep 4 23:44:59.881320 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Sep 4 23:44:59.881387 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x11283000-0x11283fff]
Sep 4 23:44:59.881462 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Sep 4 23:44:59.881527 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x11282000-0x11282fff]
Sep 4 23:44:59.881633 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Sep 4 23:44:59.881710 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x11281000-0x11281fff]
Sep 4 23:44:59.881811 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Sep 4 23:44:59.881880 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x11280000-0x11280fff]
Sep 4 23:44:59.881964 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002
Sep 4 23:44:59.882045 kernel: pci 0000:00:04.0: reg 0x10: [io 0x0000-0x0007]
Sep 4 23:44:59.882127 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Sep 4 23:44:59.882212 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x11000000-0x11000fff]
Sep 4 23:44:59.882285 kernel: pci 0000:01:00.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Sep 4 23:44:59.882353 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Sep 4 23:44:59.882429 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Sep 4 23:44:59.882496 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x10e00000-0x10e03fff 64bit]
Sep 4 23:44:59.882572 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Sep 4 23:44:59.883895 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x10c00000-0x10c00fff]
Sep 4 23:44:59.883978 kernel: pci 0000:03:00.0: reg 0x20: [mem 0x8000100000-0x8000103fff 64bit pref]
Sep 4 23:44:59.884064 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Sep 4 23:44:59.884133 kernel: pci 0000:04:00.0: reg 0x20: [mem 0x8000200000-0x8000203fff 64bit pref]
Sep 4 23:44:59.884209 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Sep 4 23:44:59.884276 kernel: pci 0000:05:00.0: reg 0x20: [mem 0x8000300000-0x8000303fff 64bit pref]
Sep 4 23:44:59.884351 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Sep 4 23:44:59.884420 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x10600000-0x10600fff]
Sep 4 23:44:59.884486 kernel: pci 0000:06:00.0: reg 0x20: [mem 0x8000400000-0x8000403fff 64bit pref]
Sep 4 23:44:59.884564 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Sep 4 23:44:59.887824 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x10400000-0x10400fff]
Sep 4 23:44:59.887919 kernel: pci 0000:07:00.0: reg 0x20: [mem 0x8000500000-0x8000503fff 64bit pref]
Sep 4 23:44:59.887988 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Sep 4 23:44:59.888060 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
Sep 4 23:44:59.888126 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000
Sep 4 23:44:59.888198 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000
Sep 4 23:44:59.888268 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
Sep 4 23:44:59.888333 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
Sep 4 23:44:59.888399 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000
Sep 4 23:44:59.888468 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Sep 4 23:44:59.888532 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000
Sep 4 23:44:59.888638 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000
Sep 4 23:44:59.888727 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Sep 4 23:44:59.888795 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000
Sep 4 23:44:59.888860 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
Sep 4 23:44:59.888931 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Sep 4 23:44:59.888998 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000
Sep 4 23:44:59.889063 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x000fffff] to [bus 05] add_size 200000 add_align 100000
Sep 4 23:44:59.889134 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Sep 4 23:44:59.889199 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000
Sep 4 23:44:59.889267 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000
Sep 4 23:44:59.889335 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Sep 4 23:44:59.889402 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000
Sep 4 23:44:59.889466 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000
Sep 4 23:44:59.889535 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Sep 4 23:44:59.889674 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000
Sep 4 23:44:59.889747 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000
Sep 4 23:44:59.889824 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Sep 4 23:44:59.889891 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000
Sep 4 23:44:59.889955 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000
Sep 4 23:44:59.890021 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x10000000-0x101fffff]
Sep 4 23:44:59.890086 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x8000000000-0x80001fffff 64bit pref]
Sep 4 23:44:59.890152 kernel: pci 0000:00:02.1: BAR 14: assigned [mem 0x10200000-0x103fffff]
Sep 4 23:44:59.890217 kernel: pci 0000:00:02.1: BAR 15: assigned [mem 0x8000200000-0x80003fffff 64bit pref]
Sep 4 23:44:59.890285 kernel: pci 0000:00:02.2: BAR 14: assigned [mem 0x10400000-0x105fffff]
Sep 4 23:44:59.890350 kernel: pci 0000:00:02.2: BAR 15: assigned [mem 0x8000400000-0x80005fffff 64bit pref]
Sep 4 23:44:59.890417 kernel: pci 0000:00:02.3: BAR 14: assigned [mem 0x10600000-0x107fffff]
Sep 4 23:44:59.890483 kernel: pci 0000:00:02.3: BAR 15: assigned [mem 0x8000600000-0x80007fffff 64bit pref]
Sep 4 23:44:59.890558 kernel: pci 0000:00:02.4: BAR 14: assigned [mem 0x10800000-0x109fffff]
Sep 4 23:44:59.891101 kernel: pci 0000:00:02.4: BAR 15: assigned [mem 0x8000800000-0x80009fffff 64bit pref]
Sep 4 23:44:59.891183 kernel: pci 0000:00:02.5: BAR 14: assigned [mem 0x10a00000-0x10bfffff]
Sep 4 23:44:59.891255 kernel: pci 0000:00:02.5: BAR 15: assigned [mem 0x8000a00000-0x8000bfffff 64bit pref]
Sep 4 23:44:59.891322 kernel: pci 0000:00:02.6: BAR 14: assigned [mem 0x10c00000-0x10dfffff]
Sep 4 23:44:59.891387 kernel: pci 0000:00:02.6: BAR 15: assigned [mem 0x8000c00000-0x8000dfffff 64bit pref]
Sep 4 23:44:59.891453 kernel: pci 0000:00:02.7: BAR 14: assigned [mem 0x10e00000-0x10ffffff]
Sep 4 23:44:59.891518 kernel: pci 0000:00:02.7: BAR 15: assigned [mem 0x8000e00000-0x8000ffffff 64bit pref]
Sep 4 23:44:59.892646 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x11000000-0x111fffff]
Sep 4 23:44:59.892751 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x8001000000-0x80011fffff 64bit pref]
Sep 4 23:44:59.892822 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8001200000-0x8001203fff 64bit pref]
Sep 4 23:44:59.893029 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x11200000-0x11200fff]
Sep 4 23:44:59.893121 kernel: pci 0000:00:02.0: BAR 0: assigned [mem 0x11201000-0x11201fff]
Sep 4 23:44:59.893187 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Sep 4 23:44:59.893255 kernel: pci 0000:00:02.1: BAR 0: assigned [mem 0x11202000-0x11202fff]
Sep 4 23:44:59.893320 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Sep 4 23:44:59.893411 kernel: pci 0000:00:02.2: BAR 0: assigned [mem 0x11203000-0x11203fff]
Sep 4 23:44:59.893489 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Sep 4 23:44:59.893558 kernel: pci 0000:00:02.3: BAR 0: assigned [mem 0x11204000-0x11204fff]
Sep 4 23:44:59.894441 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Sep 4 23:44:59.894522 kernel: pci 0000:00:02.4: BAR 0: assigned [mem 0x11205000-0x11205fff]
Sep 4 23:44:59.894603 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Sep 4 23:44:59.894697 kernel: pci 0000:00:02.5: BAR 0: assigned [mem 0x11206000-0x11206fff]
Sep 4 23:44:59.894764 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Sep 4 23:44:59.894830 kernel: pci 0000:00:02.6: BAR 0: assigned [mem 0x11207000-0x11207fff]
Sep 4 23:44:59.894903 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Sep 4 23:44:59.895386 kernel: pci 0000:00:02.7: BAR 0: assigned [mem 0x11208000-0x11208fff]
Sep 4 23:44:59.895454 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Sep 4 23:44:59.895521 kernel: pci 0000:00:03.0: BAR 0: assigned [mem 0x11209000-0x11209fff]
Sep 4 23:44:59.895868 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x9000-0x9fff]
Sep 4 23:44:59.895977 kernel: pci 0000:00:04.0: BAR 0: assigned [io 0xa000-0xa007]
Sep 4 23:44:59.896054 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x10000000-0x1007ffff pref]
Sep 4 23:44:59.896123 kernel: pci 0000:01:00.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Sep 4 23:44:59.896198 kernel: pci 0000:01:00.0: BAR 1: assigned [mem 0x10080000-0x10080fff]
Sep 4 23:44:59.896265 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Sep 4 23:44:59.896329 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Sep 4 23:44:59.896391 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff]
Sep 4 23:44:59.896457 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]
Sep 4 23:44:59.896528 kernel: pci 0000:02:00.0: BAR 0: assigned [mem 0x10200000-0x10203fff 64bit]
Sep 4 23:44:59.896645 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Sep 4 23:44:59.896728 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Sep 4 23:44:59.896794 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff]
Sep 4 23:44:59.896860 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]
Sep 4 23:44:59.896931 kernel: pci 0000:03:00.0: BAR 4: assigned [mem 0x8000400000-0x8000403fff 64bit pref]
Sep 4 23:44:59.896998 kernel: pci 0000:03:00.0: BAR 1: assigned [mem 0x10400000-0x10400fff]
Sep 4 23:44:59.897068 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Sep 4 23:44:59.897133 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Sep 4 23:44:59.897196 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff]
Sep 4 23:44:59.897260 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]
Sep 4 23:44:59.897333 kernel: pci 0000:04:00.0: BAR 4: assigned [mem 0x8000600000-0x8000603fff 64bit pref]
Sep 4 23:44:59.897400 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Sep 4 23:44:59.897464 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Sep 4 23:44:59.897528 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff]
Sep 4 23:44:59.897682 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]
Sep 4 23:44:59.897772 kernel: pci 0000:05:00.0: BAR 4: assigned [mem 0x8000800000-0x8000803fff 64bit pref]
Sep 4 23:44:59.897840 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Sep 4 23:44:59.897904 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Sep 4 23:44:59.897966 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]
Sep 4 23:44:59.898029 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]
Sep 4 23:44:59.898099 kernel: pci 0000:06:00.0: BAR 4: assigned [mem 0x8000a00000-0x8000a03fff 64bit pref]
Sep 4 23:44:59.898165 kernel: pci 0000:06:00.0: BAR 1: assigned [mem 0x10a00000-0x10a00fff]
Sep 4 23:44:59.898238 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Sep 4 23:44:59.898302 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Sep 4 23:44:59.898364 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff]
Sep 4 23:44:59.898428 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]
Sep 4 23:44:59.898498 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x10c00000-0x10c7ffff pref]
Sep 4 23:44:59.898564 kernel: pci 0000:07:00.0: BAR 4: assigned [mem 0x8000c00000-0x8000c03fff 64bit pref]
Sep 4 23:44:59.900800 kernel: pci 0000:07:00.0: BAR 1: assigned [mem 0x10c80000-0x10c80fff]
Sep 4 23:44:59.900892 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Sep 4 23:44:59.900967 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Sep 4 23:44:59.901033 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff]
Sep 4 23:44:59.901097 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]
Sep 4 23:44:59.901165 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Sep 4 23:44:59.901232 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Sep 4 23:44:59.901298 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff]
Sep 4 23:44:59.901366 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]
Sep 4 23:44:59.901439 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Sep 4 23:44:59.901510 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff]
Sep 4 23:44:59.901574 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff]
Sep 4 23:44:59.903885 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref]
Sep 4 23:44:59.903961 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Sep 4 23:44:59.904022 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Sep 4 23:44:59.904080 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Sep 4 23:44:59.904154 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Sep 4 23:44:59.904236 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff]
Sep 4 23:44:59.904296 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref]
Sep 4 23:44:59.904365 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff]
Sep 4 23:44:59.904425 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff]
Sep 4 23:44:59.904486 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref]
Sep 4 23:44:59.904553 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff]
Sep 4 23:44:59.904661 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff]
Sep 4 23:44:59.904733 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref]
Sep 4 23:44:59.904815 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff]
Sep 4 23:44:59.904878 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff]
Sep 4 23:44:59.904939 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref]
Sep 4 23:44:59.905008 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff]
Sep 4 23:44:59.905069 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff]
Sep 4 23:44:59.905131 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref]
Sep 4 23:44:59.905202 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff]
Sep 4 23:44:59.905263 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff]
Sep 4 23:44:59.905325 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref]
Sep 4 23:44:59.905394 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff]
Sep 4 23:44:59.906300 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff]
Sep 4 23:44:59.906522 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref]
Sep 4 23:44:59.906721 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff]
Sep 4 23:44:59.906796 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff]
Sep 4 23:44:59.906859 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref]
Sep 4 23:44:59.906933 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff]
Sep 4 23:44:59.907003 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff]
Sep 4 23:44:59.907794 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref]
Sep 4 23:44:59.907817 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Sep 4 23:44:59.907825 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Sep 4 23:44:59.907832 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Sep 4 23:44:59.907840 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Sep 4 23:44:59.907848 kernel: iommu: Default domain type: Translated
Sep 4 23:44:59.907856 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Sep 4 23:44:59.907863 kernel: efivars: Registered efivars operations
Sep 4 23:44:59.907878 kernel: vgaarb: loaded
Sep 4 23:44:59.907885 kernel: clocksource: Switched to clocksource arch_sys_counter
Sep 4 23:44:59.907893 kernel: VFS: Disk quotas dquot_6.6.0
Sep 4 23:44:59.907901 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 4 23:44:59.907909 kernel: pnp: PnP ACPI init
Sep 4 23:44:59.907996 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Sep 4 23:44:59.908008 kernel: pnp: PnP ACPI: found 1 devices
Sep 4 23:44:59.908015 kernel: NET: Registered PF_INET protocol family
Sep 4 23:44:59.908025 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 4 23:44:59.908033 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 4 23:44:59.908041 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 4 23:44:59.908049 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 4 23:44:59.908057 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 4 23:44:59.908064 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 4 23:44:59.908072 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 4 23:44:59.908080 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 4 23:44:59.908087 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 4 23:44:59.908168 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002)
Sep 4 23:44:59.908180 kernel: PCI: CLS 0 bytes, default 64
Sep 4 23:44:59.908187 kernel: kvm [1]: HYP mode not available
Sep 4 23:44:59.908195 kernel: Initialise system trusted keyrings
Sep 4 23:44:59.908203 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 4 23:44:59.908210 kernel: Key type asymmetric registered
Sep 4 23:44:59.908218 kernel: Asymmetric key parser 'x509' registered
Sep 4 23:44:59.908225 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Sep 4 23:44:59.908233 kernel: io scheduler mq-deadline registered
Sep 4 23:44:59.908243 kernel: io scheduler kyber registered
Sep 4 23:44:59.908251 kernel: io scheduler bfq registered
Sep 4 23:44:59.908260 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Sep 4 23:44:59.908329 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50
Sep 4 23:44:59.908394 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50
Sep 4 23:44:59.908458 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Sep 4 23:44:59.908525 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51
Sep 4 23:44:59.909780 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51
Sep 4 23:44:59.909882 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Sep 4 23:44:59.909954 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 52
Sep 4 23:44:59.910021 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52
Sep 4 23:44:59.910087 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Sep 4 23:44:59.910156 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53
Sep 4 23:44:59.910229 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53
Sep 4 23:44:59.910296
kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 4 23:44:59.910365 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54 Sep 4 23:44:59.910430 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54 Sep 4 23:44:59.910494 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 4 23:44:59.910564 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55 Sep 4 23:44:59.910919 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55 Sep 4 23:44:59.910995 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 4 23:44:59.911065 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56 Sep 4 23:44:59.911131 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56 Sep 4 23:44:59.911196 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 4 23:44:59.911910 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57 Sep 4 23:44:59.912017 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57 Sep 4 23:44:59.912083 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 4 23:44:59.912094 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38 Sep 4 23:44:59.912159 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58 Sep 4 23:44:59.912225 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58 Sep 4 23:44:59.912290 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 4 23:44:59.912303 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Sep 4 23:44:59.912311 kernel: ACPI: 
button: Power Button [PWRB] Sep 4 23:44:59.912318 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Sep 4 23:44:59.912392 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002) Sep 4 23:44:59.912468 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002) Sep 4 23:44:59.912479 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 4 23:44:59.912486 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Sep 4 23:44:59.913721 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001) Sep 4 23:44:59.913748 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A Sep 4 23:44:59.913760 kernel: thunder_xcv, ver 1.0 Sep 4 23:44:59.913768 kernel: thunder_bgx, ver 1.0 Sep 4 23:44:59.913776 kernel: nicpf, ver 1.0 Sep 4 23:44:59.913784 kernel: nicvf, ver 1.0 Sep 4 23:44:59.913891 kernel: rtc-efi rtc-efi.0: registered as rtc0 Sep 4 23:44:59.913957 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-04T23:44:59 UTC (1757029499) Sep 4 23:44:59.913967 kernel: hid: raw HID events driver (C) Jiri Kosina Sep 4 23:44:59.913976 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Sep 4 23:44:59.913986 kernel: watchdog: Delayed init of the lockup detector failed: -19 Sep 4 23:44:59.913993 kernel: watchdog: Hard watchdog permanently disabled Sep 4 23:44:59.914001 kernel: NET: Registered PF_INET6 protocol family Sep 4 23:44:59.914031 kernel: Segment Routing with IPv6 Sep 4 23:44:59.914039 kernel: In-situ OAM (IOAM) with IPv6 Sep 4 23:44:59.914046 kernel: NET: Registered PF_PACKET protocol family Sep 4 23:44:59.914053 kernel: Key type dns_resolver registered Sep 4 23:44:59.914061 kernel: registered taskstats version 1 Sep 4 23:44:59.914069 kernel: Loading compiled-in X.509 certificates Sep 4 23:44:59.914080 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.103-flatcar: 83306acb9da7bc81cc6aa49a1c622f78672939c0' Sep 4 23:44:59.914087 kernel: Key type .fscrypt registered Sep 4 
23:44:59.914095 kernel: Key type fscrypt-provisioning registered Sep 4 23:44:59.914102 kernel: ima: No TPM chip found, activating TPM-bypass! Sep 4 23:44:59.914110 kernel: ima: Allocated hash algorithm: sha1 Sep 4 23:44:59.914117 kernel: ima: No architecture policies found Sep 4 23:44:59.914125 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Sep 4 23:44:59.914132 kernel: clk: Disabling unused clocks Sep 4 23:44:59.914140 kernel: Freeing unused kernel memory: 38400K Sep 4 23:44:59.914148 kernel: Run /init as init process Sep 4 23:44:59.914156 kernel: with arguments: Sep 4 23:44:59.914164 kernel: /init Sep 4 23:44:59.914173 kernel: with environment: Sep 4 23:44:59.914180 kernel: HOME=/ Sep 4 23:44:59.914187 kernel: TERM=linux Sep 4 23:44:59.914195 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 4 23:44:59.914203 systemd[1]: Successfully made /usr/ read-only. Sep 4 23:44:59.914214 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 4 23:44:59.914225 systemd[1]: Detected virtualization kvm. Sep 4 23:44:59.914233 systemd[1]: Detected architecture arm64. Sep 4 23:44:59.914241 systemd[1]: Running in initrd. Sep 4 23:44:59.914248 systemd[1]: No hostname configured, using default hostname. Sep 4 23:44:59.914256 systemd[1]: Hostname set to . Sep 4 23:44:59.914264 systemd[1]: Initializing machine ID from VM UUID. Sep 4 23:44:59.914272 systemd[1]: Queued start job for default target initrd.target. Sep 4 23:44:59.914282 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 4 23:44:59.914291 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
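The TCP/UDP hash-table lines earlier in this log report each allocation as a number of entries plus an "order" (a power-of-two count of pages). The relationship is simple arithmetic; a minimal sketch, assuming 4 KiB pages as on this arm64 guest, with the byte counts taken from the log itself:

```python
# Reproduce the "order" reported for the hash-table allocations above,
# assuming 4 KiB pages: order = ceil(log2(bytes / PAGE_SIZE)).
import math

PAGE_SIZE = 4096  # assumed 4 KiB pages on this arm64 guest

def alloc_order(nbytes: int) -> int:
    """Smallest n such that 2**n pages hold nbytes."""
    pages = math.ceil(nbytes / PAGE_SIZE)
    return max(0, math.ceil(math.log2(pages)))

print(alloc_order(262144))   # TCP established: 262144 bytes -> order 6
print(alloc_order(1048576))  # TCP bind:       1048576 bytes -> order 8
print(alloc_order(65536))    # UDP:              65536 bytes -> order 4
```

Each result matches the order the kernel printed for that table.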
Sep 4 23:44:59.914300 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 4 23:44:59.914308 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 4 23:44:59.914316 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 4 23:44:59.914325 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 4 23:44:59.914334 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 4 23:44:59.914344 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 4 23:44:59.914352 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 4 23:44:59.914360 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 4 23:44:59.914368 systemd[1]: Reached target paths.target - Path Units. Sep 4 23:44:59.914376 systemd[1]: Reached target slices.target - Slice Units. Sep 4 23:44:59.914384 systemd[1]: Reached target swap.target - Swaps. Sep 4 23:44:59.914392 systemd[1]: Reached target timers.target - Timer Units. Sep 4 23:44:59.914400 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 4 23:44:59.914410 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 4 23:44:59.914418 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 4 23:44:59.914426 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Sep 4 23:44:59.914434 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 4 23:44:59.914442 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 4 23:44:59.914450 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Sep 4 23:44:59.914458 systemd[1]: Reached target sockets.target - Socket Units. Sep 4 23:44:59.914466 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 4 23:44:59.914474 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 4 23:44:59.914484 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 4 23:44:59.914492 systemd[1]: Starting systemd-fsck-usr.service... Sep 4 23:44:59.914500 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 4 23:44:59.914509 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 4 23:44:59.914520 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 23:44:59.914529 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 4 23:44:59.914539 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 4 23:44:59.914594 systemd-journald[236]: Collecting audit messages is disabled. Sep 4 23:44:59.914678 systemd[1]: Finished systemd-fsck-usr.service. Sep 4 23:44:59.914692 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 4 23:44:59.914700 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 4 23:44:59.914708 kernel: Bridge firewalling registered Sep 4 23:44:59.914716 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 4 23:44:59.914725 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 4 23:44:59.914733 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 23:44:59.914741 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 4 23:44:59.914750 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. 
Sep 4 23:44:59.914763 systemd-journald[236]: Journal started Sep 4 23:44:59.914783 systemd-journald[236]: Runtime Journal (/run/log/journal/4bd5a2404fdb46debdd68cb686a5a7f0) is 8M, max 76.6M, 68.6M free. Sep 4 23:44:59.877017 systemd-modules-load[237]: Inserted module 'overlay' Sep 4 23:44:59.893543 systemd-modules-load[237]: Inserted module 'br_netfilter' Sep 4 23:44:59.921588 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 4 23:44:59.921646 systemd[1]: Started systemd-journald.service - Journal Service. Sep 4 23:44:59.922415 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 4 23:44:59.928882 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 4 23:44:59.935845 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 4 23:44:59.944044 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 4 23:44:59.951024 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 4 23:44:59.953631 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 4 23:44:59.956878 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 4 23:44:59.972637 dracut-cmdline[278]: dracut-dracut-053 Sep 4 23:44:59.975950 dracut-cmdline[278]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=0304960b24e314f6095f7d8ad705a9bc0a9a4a34f7817da10ea634466a73d86e Sep 4 23:44:59.980546 systemd-resolved[273]: Positive Trust Anchors: Sep 4 23:44:59.980566 systemd-resolved[273]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 4 23:44:59.980744 systemd-resolved[273]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 4 23:44:59.986796 systemd-resolved[273]: Defaulting to hostname 'linux'. Sep 4 23:44:59.987936 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 4 23:44:59.991448 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 4 23:45:00.054634 kernel: SCSI subsystem initialized Sep 4 23:45:00.058659 kernel: Loading iSCSI transport class v2.0-870. Sep 4 23:45:00.066657 kernel: iscsi: registered transport (tcp) Sep 4 23:45:00.081067 kernel: iscsi: registered transport (qla4xxx) Sep 4 23:45:00.081155 kernel: QLogic iSCSI HBA Driver Sep 4 23:45:00.122875 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 4 23:45:00.129852 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 4 23:45:00.149867 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
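The negative trust anchors systemd-resolved lists above are largely the reverse-lookup zones for private address space (the RFC 6303 locally-served zones). Python's stdlib can show how a private address maps into one of those zones; a small illustration (the address `10.0.0.3` is the one eth1 later acquires via DHCP in this log):

```python
# Show how a private address falls under one of the reverse zones that
# systemd-resolved lists as negative trust anchors (e.g. 10.in-addr.arpa).
import ipaddress

addr = ipaddress.ip_address("10.0.0.3")
print(addr.reverse_pointer)  # 3.0.0.10.in-addr.arpa
print(addr.is_private)       # True

# The zone "10.in-addr.arpa" covers every reverse name in 10.0.0.0/8,
# so queries for it are answered locally rather than DNSSEC-validated.
assert addr.reverse_pointer.endswith("10.in-addr.arpa")
```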
Sep 4 23:45:00.149943 kernel: device-mapper: uevent: version 1.0.3 Sep 4 23:45:00.149955 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Sep 4 23:45:00.197650 kernel: raid6: neonx8 gen() 15517 MB/s Sep 4 23:45:00.214657 kernel: raid6: neonx4 gen() 15597 MB/s Sep 4 23:45:00.231655 kernel: raid6: neonx2 gen() 13129 MB/s Sep 4 23:45:00.248651 kernel: raid6: neonx1 gen() 10464 MB/s Sep 4 23:45:00.265617 kernel: raid6: int64x8 gen() 6763 MB/s Sep 4 23:45:00.282654 kernel: raid6: int64x4 gen() 7313 MB/s Sep 4 23:45:00.299640 kernel: raid6: int64x2 gen() 6079 MB/s Sep 4 23:45:00.316650 kernel: raid6: int64x1 gen() 5039 MB/s Sep 4 23:45:00.316698 kernel: raid6: using algorithm neonx4 gen() 15597 MB/s Sep 4 23:45:00.333641 kernel: raid6: .... xor() 12443 MB/s, rmw enabled Sep 4 23:45:00.333686 kernel: raid6: using neon recovery algorithm Sep 4 23:45:00.338730 kernel: xor: measuring software checksum speed Sep 4 23:45:00.338773 kernel: 8regs : 21579 MB/sec Sep 4 23:45:00.338793 kernel: 32regs : 21704 MB/sec Sep 4 23:45:00.339746 kernel: arm64_neon : 27936 MB/sec Sep 4 23:45:00.339807 kernel: xor: using function: arm64_neon (27936 MB/sec) Sep 4 23:45:00.389654 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 4 23:45:00.403450 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 4 23:45:00.410894 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 4 23:45:00.426307 systemd-udevd[459]: Using default interface naming scheme 'v255'. Sep 4 23:45:00.430344 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 4 23:45:00.440826 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 4 23:45:00.453297 dracut-pre-trigger[467]: rd.md=0: removing MD RAID activation Sep 4 23:45:00.484293 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
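The raid6 lines above show the kernel benchmarking every available gen() implementation and keeping the fastest (neonx4 at 15597 MB/s here). The selection step is just an argmax over the measured throughputs; a sketch using the numbers from this log:

```python
# Pick the raid6 gen() algorithm the way the boot log does: benchmark
# every candidate, keep the fastest. Throughputs (MB/s) copied from the
# log lines above.
bench = {
    "neonx8": 15517, "neonx4": 15597, "neonx2": 13129, "neonx1": 10464,
    "int64x8": 6763, "int64x4": 7313, "int64x2": 6079, "int64x1": 5039,
}
best = max(bench, key=bench.get)
print(f"raid6: using algorithm {best} gen() {bench[best]} MB/s")
# -> raid6: using algorithm neonx4 gen() 15597 MB/s
```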
Sep 4 23:45:00.495335 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 4 23:45:00.548685 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 4 23:45:00.557430 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 4 23:45:00.573316 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 4 23:45:00.574482 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 4 23:45:00.575277 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 4 23:45:00.577886 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 4 23:45:00.588943 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 4 23:45:00.599437 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 4 23:45:00.660059 kernel: scsi host0: Virtio SCSI HBA Sep 4 23:45:00.664698 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5 Sep 4 23:45:00.664730 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Sep 4 23:45:00.681388 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 4 23:45:00.681530 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 4 23:45:00.684315 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 4 23:45:00.689652 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 4 23:45:00.689881 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 23:45:00.691214 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 23:45:00.699844 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Sep 4 23:45:00.718676 kernel: sr 0:0:0:0: Power-on or device reset occurred Sep 4 23:45:00.721876 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 23:45:00.725195 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray Sep 4 23:45:00.725396 kernel: ACPI: bus type USB registered Sep 4 23:45:00.725409 kernel: usbcore: registered new interface driver usbfs Sep 4 23:45:00.725425 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Sep 4 23:45:00.726607 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0 Sep 4 23:45:00.727854 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 4 23:45:00.729660 kernel: usbcore: registered new interface driver hub Sep 4 23:45:00.730608 kernel: usbcore: registered new device driver usb Sep 4 23:45:00.745115 kernel: sd 0:0:0:1: Power-on or device reset occurred Sep 4 23:45:00.745331 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB) Sep 4 23:45:00.746713 kernel: sd 0:0:0:1: [sda] Write Protect is off Sep 4 23:45:00.748474 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08 Sep 4 23:45:00.748688 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Sep 4 23:45:00.756833 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 4 23:45:00.756897 kernel: GPT:17805311 != 80003071 Sep 4 23:45:00.756908 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 4 23:45:00.756917 kernel: GPT:17805311 != 80003071 Sep 4 23:45:00.756926 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 4 23:45:00.757595 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 4 23:45:00.758594 kernel: sd 0:0:0:1: [sda] Attached SCSI disk Sep 4 23:45:00.761041 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
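The GPT warnings above ("GPT:17805311 != 80003071") are the classic signature of a disk image written small and then grown: GPT requires the alternate (backup) header at the disk's last LBA, but here it still sits where the original, smaller image ended. The check is plain arithmetic; a sketch using the log's own numbers:

```python
# GPT expects the alternate (backup) header at the last LBA of the disk.
# This disk has 80003072 512-byte sectors ("[sda] 80003072 512-byte
# logical blocks"), but the backup header was found at LBA 17805311 --
# i.e. the end of a roughly 8.5 GB image later grown to ~41 GB.
total_sectors = 80003072     # from the sd 0:0:0:1 capacity line
backup_lba_found = 17805311  # from the "GPT:17805311 != 80003071" warning

expected_backup_lba = total_sectors - 1
print(expected_backup_lba)                      # 80003071
print(backup_lba_found == expected_backup_lba)  # False -> kernel warns
# Repair means rewriting the backup header at the true end of the disk;
# the log itself suggests GNU Parted for this.
```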
Sep 4 23:45:00.776100 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Sep 4 23:45:00.776303 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Sep 4 23:45:00.777604 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Sep 4 23:45:00.779767 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Sep 4 23:45:00.779949 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Sep 4 23:45:00.780697 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Sep 4 23:45:00.782851 kernel: hub 1-0:1.0: USB hub found Sep 4 23:45:00.783074 kernel: hub 1-0:1.0: 4 ports detected Sep 4 23:45:00.784833 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Sep 4 23:45:00.785045 kernel: hub 2-0:1.0: USB hub found Sep 4 23:45:00.785930 kernel: hub 2-0:1.0: 4 ports detected Sep 4 23:45:00.818605 kernel: BTRFS: device fsid 74a5374f-334b-4c07-8952-82f9f0ad22ae devid 1 transid 36 /dev/sda3 scanned by (udev-worker) (504) Sep 4 23:45:00.820785 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by (udev-worker) (528) Sep 4 23:45:00.825859 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Sep 4 23:45:00.838030 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Sep 4 23:45:00.862433 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Sep 4 23:45:00.863248 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Sep 4 23:45:00.873520 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Sep 4 23:45:00.888902 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 4 23:45:00.897616 disk-uuid[578]: Primary Header is updated. Sep 4 23:45:00.897616 disk-uuid[578]: Secondary Entries is updated. 
Sep 4 23:45:00.897616 disk-uuid[578]: Secondary Header is updated. Sep 4 23:45:00.904682 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 4 23:45:00.908600 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 4 23:45:01.024938 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Sep 4 23:45:01.159641 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1 Sep 4 23:45:01.159728 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Sep 4 23:45:01.160076 kernel: usbcore: registered new interface driver usbhid Sep 4 23:45:01.160102 kernel: usbhid: USB HID core driver Sep 4 23:45:01.265766 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd Sep 4 23:45:01.396636 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2 Sep 4 23:45:01.450711 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0 Sep 4 23:45:01.915657 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 4 23:45:01.917681 disk-uuid[579]: The operation has completed successfully. Sep 4 23:45:01.992006 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 4 23:45:01.992884 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 4 23:45:02.015931 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 4 23:45:02.022357 sh[593]: Success Sep 4 23:45:02.034794 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Sep 4 23:45:02.105205 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 4 23:45:02.107980 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 4 23:45:02.116809 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... 
Sep 4 23:45:02.132667 kernel: BTRFS info (device dm-0): first mount of filesystem 74a5374f-334b-4c07-8952-82f9f0ad22ae Sep 4 23:45:02.132757 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Sep 4 23:45:02.132775 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Sep 4 23:45:02.133784 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 4 23:45:02.133828 kernel: BTRFS info (device dm-0): using free space tree Sep 4 23:45:02.142635 kernel: BTRFS info (device dm-0): enabling ssd optimizations Sep 4 23:45:02.146039 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 4 23:45:02.146720 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 4 23:45:02.161983 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 4 23:45:02.168887 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 4 23:45:02.186612 kernel: BTRFS info (device sda6): first mount of filesystem 6280ecaa-ba8f-4e5e-8483-db3a07084cf9 Sep 4 23:45:02.186726 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Sep 4 23:45:02.186748 kernel: BTRFS info (device sda6): using free space tree Sep 4 23:45:02.192647 kernel: BTRFS info (device sda6): enabling ssd optimizations Sep 4 23:45:02.192722 kernel: BTRFS info (device sda6): auto enabling async discard Sep 4 23:45:02.199667 kernel: BTRFS info (device sda6): last unmount of filesystem 6280ecaa-ba8f-4e5e-8483-db3a07084cf9 Sep 4 23:45:02.202872 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 4 23:45:02.211329 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Sep 4 23:45:02.310863 ignition[665]: Ignition 2.20.0 Sep 4 23:45:02.310876 ignition[665]: Stage: fetch-offline Sep 4 23:45:02.310932 ignition[665]: no configs at "/usr/lib/ignition/base.d" Sep 4 23:45:02.310944 ignition[665]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Sep 4 23:45:02.311133 ignition[665]: parsed url from cmdline: "" Sep 4 23:45:02.313649 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 4 23:45:02.311136 ignition[665]: no config URL provided Sep 4 23:45:02.311141 ignition[665]: reading system config file "/usr/lib/ignition/user.ign" Sep 4 23:45:02.311148 ignition[665]: no config at "/usr/lib/ignition/user.ign" Sep 4 23:45:02.311154 ignition[665]: failed to fetch config: resource requires networking Sep 4 23:45:02.311428 ignition[665]: Ignition finished successfully Sep 4 23:45:02.337064 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 4 23:45:02.345994 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 4 23:45:02.371529 systemd-networkd[778]: lo: Link UP Sep 4 23:45:02.372153 systemd-networkd[778]: lo: Gained carrier Sep 4 23:45:02.374114 systemd-networkd[778]: Enumeration completed Sep 4 23:45:02.374637 systemd-networkd[778]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 23:45:02.374653 systemd-networkd[778]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 4 23:45:02.375362 systemd-networkd[778]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 23:45:02.375366 systemd-networkd[778]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 4 23:45:02.375795 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 4 23:45:02.376664 systemd[1]: Reached target network.target - Network. 
Sep 4 23:45:02.377492 systemd-networkd[778]: eth0: Link UP
Sep 4 23:45:02.377496 systemd-networkd[778]: eth0: Gained carrier
Sep 4 23:45:02.377506 systemd-networkd[778]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 4 23:45:02.378539 systemd-networkd[778]: eth1: Link UP
Sep 4 23:45:02.378542 systemd-networkd[778]: eth1: Gained carrier
Sep 4 23:45:02.378552 systemd-networkd[778]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 4 23:45:02.389570 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Sep 4 23:45:02.401350 ignition[781]: Ignition 2.20.0
Sep 4 23:45:02.401361 ignition[781]: Stage: fetch
Sep 4 23:45:02.401536 ignition[781]: no configs at "/usr/lib/ignition/base.d"
Sep 4 23:45:02.401546 ignition[781]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Sep 4 23:45:02.401681 ignition[781]: parsed url from cmdline: ""
Sep 4 23:45:02.401684 ignition[781]: no config URL provided
Sep 4 23:45:02.401689 ignition[781]: reading system config file "/usr/lib/ignition/user.ign"
Sep 4 23:45:02.401697 ignition[781]: no config at "/usr/lib/ignition/user.ign"
Sep 4 23:45:02.401784 ignition[781]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1
Sep 4 23:45:02.402671 ignition[781]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable
Sep 4 23:45:02.412724 systemd-networkd[778]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1
Sep 4 23:45:02.440712 systemd-networkd[778]: eth0: DHCPv4 address 88.198.151.158/32, gateway 172.31.1.1 acquired from 172.31.1.1
Sep 4 23:45:02.603818 ignition[781]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2
Sep 4 23:45:02.611177 ignition[781]: GET result: OK
Sep 4 23:45:02.611257 ignition[781]: parsing config with SHA512: 821135efed81b31a2f51ca204b6590c3242c40c629ba62b6553a8fef36f019d1360f2be838830972a3b10722ee5409f0cbfd91e19188949d4cd2bc410e184bdc
Sep 4 23:45:02.616461 unknown[781]: fetched base config from "system"
Sep 4 23:45:02.616471 unknown[781]: fetched base config from "system"
Sep 4 23:45:02.616879 ignition[781]: fetch: fetch complete
Sep 4 23:45:02.616476 unknown[781]: fetched user config from "hetzner"
Sep 4 23:45:02.616885 ignition[781]: fetch: fetch passed
Sep 4 23:45:02.619384 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Sep 4 23:45:02.616933 ignition[781]: Ignition finished successfully
Sep 4 23:45:02.624856 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 4 23:45:02.639602 ignition[788]: Ignition 2.20.0
Sep 4 23:45:02.639637 ignition[788]: Stage: kargs
Sep 4 23:45:02.639826 ignition[788]: no configs at "/usr/lib/ignition/base.d"
Sep 4 23:45:02.639837 ignition[788]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Sep 4 23:45:02.640864 ignition[788]: kargs: kargs passed
Sep 4 23:45:02.640924 ignition[788]: Ignition finished successfully
Sep 4 23:45:02.645106 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 4 23:45:02.652080 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 4 23:45:02.664114 ignition[796]: Ignition 2.20.0
Sep 4 23:45:02.664124 ignition[796]: Stage: disks
Sep 4 23:45:02.664308 ignition[796]: no configs at "/usr/lib/ignition/base.d"
Sep 4 23:45:02.664318 ignition[796]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Sep 4 23:45:02.665294 ignition[796]: disks: disks passed
Sep 4 23:45:02.665345 ignition[796]: Ignition finished successfully
Sep 4 23:45:02.667169 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 4 23:45:02.668786 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 4 23:45:02.670010 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 4 23:45:02.671303 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 4 23:45:02.672292 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 4 23:45:02.673297 systemd[1]: Reached target basic.target - Basic System.
Sep 4 23:45:02.680771 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 4 23:45:02.701263 systemd-fsck[804]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Sep 4 23:45:02.705485 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 4 23:45:02.715886 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 4 23:45:02.767637 kernel: EXT4-fs (sda9): mounted filesystem 22b06923-f972-4753-b92e-d6b25ef15ca3 r/w with ordered data mode. Quota mode: none.
Sep 4 23:45:02.767765 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 4 23:45:02.768914 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 4 23:45:02.786795 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 4 23:45:02.792141 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 4 23:45:02.794793 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Sep 4 23:45:02.795342 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 4 23:45:02.795372 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 4 23:45:02.805734 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (812)
Sep 4 23:45:02.810680 kernel: BTRFS info (device sda6): first mount of filesystem 6280ecaa-ba8f-4e5e-8483-db3a07084cf9
Sep 4 23:45:02.810772 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Sep 4 23:45:02.810792 kernel: BTRFS info (device sda6): using free space tree
Sep 4 23:45:02.812630 kernel: BTRFS info (device sda6): enabling ssd optimizations
Sep 4 23:45:02.812694 kernel: BTRFS info (device sda6): auto enabling async discard
Sep 4 23:45:02.816137 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 4 23:45:02.817754 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 4 23:45:02.829912 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 4 23:45:02.866809 coreos-metadata[814]: Sep 04 23:45:02.866 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1
Sep 4 23:45:02.868659 coreos-metadata[814]: Sep 04 23:45:02.868 INFO Fetch successful
Sep 4 23:45:02.871597 coreos-metadata[814]: Sep 04 23:45:02.870 INFO wrote hostname ci-4230-2-2-n-5840999b78 to /sysroot/etc/hostname
Sep 4 23:45:02.878885 initrd-setup-root[840]: cut: /sysroot/etc/passwd: No such file or directory
Sep 4 23:45:02.876158 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Sep 4 23:45:02.884752 initrd-setup-root[848]: cut: /sysroot/etc/group: No such file or directory
Sep 4 23:45:02.890538 initrd-setup-root[855]: cut: /sysroot/etc/shadow: No such file or directory
Sep 4 23:45:02.894535 initrd-setup-root[862]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 4 23:45:02.999319 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 4 23:45:03.009448 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 4 23:45:03.013499 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 4 23:45:03.023686 kernel: BTRFS info (device sda6): last unmount of filesystem 6280ecaa-ba8f-4e5e-8483-db3a07084cf9
Sep 4 23:45:03.040314 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 4 23:45:03.055665 ignition[931]: INFO : Ignition 2.20.0
Sep 4 23:45:03.055665 ignition[931]: INFO : Stage: mount
Sep 4 23:45:03.055665 ignition[931]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 4 23:45:03.055665 ignition[931]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Sep 4 23:45:03.060949 ignition[931]: INFO : mount: mount passed
Sep 4 23:45:03.060949 ignition[931]: INFO : Ignition finished successfully
Sep 4 23:45:03.060195 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 4 23:45:03.066732 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 4 23:45:03.131882 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 4 23:45:03.141022 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 4 23:45:03.153192 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (942)
Sep 4 23:45:03.153270 kernel: BTRFS info (device sda6): first mount of filesystem 6280ecaa-ba8f-4e5e-8483-db3a07084cf9
Sep 4 23:45:03.153298 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Sep 4 23:45:03.153964 kernel: BTRFS info (device sda6): using free space tree
Sep 4 23:45:03.157623 kernel: BTRFS info (device sda6): enabling ssd optimizations
Sep 4 23:45:03.157690 kernel: BTRFS info (device sda6): auto enabling async discard
Sep 4 23:45:03.161099 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 4 23:45:03.200434 ignition[960]: INFO : Ignition 2.20.0
Sep 4 23:45:03.200434 ignition[960]: INFO : Stage: files
Sep 4 23:45:03.202506 ignition[960]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 4 23:45:03.202506 ignition[960]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Sep 4 23:45:03.202506 ignition[960]: DEBUG : files: compiled without relabeling support, skipping
Sep 4 23:45:03.206297 ignition[960]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 4 23:45:03.206297 ignition[960]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 4 23:45:03.207889 ignition[960]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 4 23:45:03.207889 ignition[960]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 4 23:45:03.210109 ignition[960]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 4 23:45:03.209855 unknown[960]: wrote ssh authorized keys file for user: core
Sep 4 23:45:03.215968 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Sep 4 23:45:03.215968 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
Sep 4 23:45:03.362532 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 4 23:45:03.701796 systemd-networkd[778]: eth1: Gained IPv6LL
Sep 4 23:45:03.846259 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Sep 4 23:45:03.846259 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 4 23:45:03.848644 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Sep 4 23:45:04.060200 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Sep 4 23:45:04.143757 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 4 23:45:04.143757 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Sep 4 23:45:04.143757 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Sep 4 23:45:04.143757 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 4 23:45:04.143757 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 4 23:45:04.143757 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 4 23:45:04.143757 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 4 23:45:04.143757 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 4 23:45:04.143757 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 4 23:45:04.153119 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 4 23:45:04.153119 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 4 23:45:04.153119 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Sep 4 23:45:04.153119 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Sep 4 23:45:04.153119 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Sep 4 23:45:04.153119 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1
Sep 4 23:45:04.278302 systemd-networkd[778]: eth0: Gained IPv6LL
Sep 4 23:45:04.401457 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Sep 4 23:45:04.619465 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Sep 4 23:45:04.619465 ignition[960]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Sep 4 23:45:04.623700 ignition[960]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 4 23:45:04.623700 ignition[960]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 4 23:45:04.623700 ignition[960]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Sep 4 23:45:04.623700 ignition[960]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Sep 4 23:45:04.623700 ignition[960]: INFO : files: op(e): op(f): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Sep 4 23:45:04.623700 ignition[960]: INFO : files: op(e): op(f): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Sep 4 23:45:04.623700 ignition[960]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Sep 4 23:45:04.623700 ignition[960]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Sep 4 23:45:04.623700 ignition[960]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Sep 4 23:45:04.623700 ignition[960]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 4 23:45:04.623700 ignition[960]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 4 23:45:04.623700 ignition[960]: INFO : files: files passed
Sep 4 23:45:04.623700 ignition[960]: INFO : Ignition finished successfully
Sep 4 23:45:04.623491 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 4 23:45:04.633880 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 4 23:45:04.639089 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 4 23:45:04.642539 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 4 23:45:04.643156 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 4 23:45:04.659778 initrd-setup-root-after-ignition[987]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 4 23:45:04.661288 initrd-setup-root-after-ignition[987]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 4 23:45:04.662837 initrd-setup-root-after-ignition[991]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 4 23:45:04.665784 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 4 23:45:04.666746 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 4 23:45:04.673934 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 4 23:45:04.712349 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 4 23:45:04.712503 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 4 23:45:04.714279 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 4 23:45:04.715027 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 4 23:45:04.716408 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 4 23:45:04.729052 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 4 23:45:04.746998 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 4 23:45:04.752864 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 4 23:45:04.765575 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 4 23:45:04.766952 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 4 23:45:04.768222 systemd[1]: Stopped target timers.target - Timer Units.
Sep 4 23:45:04.768769 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 4 23:45:04.768899 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 4 23:45:04.770270 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 4 23:45:04.771531 systemd[1]: Stopped target basic.target - Basic System.
Sep 4 23:45:04.772593 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 4 23:45:04.773721 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 4 23:45:04.774930 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 4 23:45:04.776128 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 4 23:45:04.777192 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 4 23:45:04.778239 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 4 23:45:04.779247 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 4 23:45:04.780137 systemd[1]: Stopped target swap.target - Swaps.
Sep 4 23:45:04.780983 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 4 23:45:04.781111 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 4 23:45:04.783022 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 4 23:45:04.783647 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 4 23:45:04.784665 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 4 23:45:04.786651 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 4 23:45:04.787272 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 4 23:45:04.787403 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 4 23:45:04.788914 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 4 23:45:04.789028 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 4 23:45:04.790207 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 4 23:45:04.790299 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 4 23:45:04.791215 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Sep 4 23:45:04.791305 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Sep 4 23:45:04.803971 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 4 23:45:04.805114 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 4 23:45:04.805374 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 4 23:45:04.812023 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 4 23:45:04.812527 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 4 23:45:04.812710 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 4 23:45:04.813367 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 4 23:45:04.813457 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 4 23:45:04.824464 ignition[1011]: INFO : Ignition 2.20.0
Sep 4 23:45:04.824464 ignition[1011]: INFO : Stage: umount
Sep 4 23:45:04.824464 ignition[1011]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 4 23:45:04.824464 ignition[1011]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Sep 4 23:45:04.824328 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 4 23:45:04.831318 ignition[1011]: INFO : umount: umount passed
Sep 4 23:45:04.831318 ignition[1011]: INFO : Ignition finished successfully
Sep 4 23:45:04.824629 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 4 23:45:04.830036 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 4 23:45:04.830152 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 4 23:45:04.833711 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 4 23:45:04.833838 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 4 23:45:04.835225 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 4 23:45:04.835280 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 4 23:45:04.837036 systemd[1]: ignition-fetch.service: Deactivated successfully.
Sep 4 23:45:04.837659 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Sep 4 23:45:04.838577 systemd[1]: Stopped target network.target - Network.
Sep 4 23:45:04.839496 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 4 23:45:04.839559 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 4 23:45:04.840499 systemd[1]: Stopped target paths.target - Path Units.
Sep 4 23:45:04.841763 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 4 23:45:04.845825 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 4 23:45:04.847314 systemd[1]: Stopped target slices.target - Slice Units.
Sep 4 23:45:04.847886 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 4 23:45:04.848924 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 4 23:45:04.848977 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 4 23:45:04.849804 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 4 23:45:04.849839 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 4 23:45:04.850721 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 4 23:45:04.850777 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 4 23:45:04.851661 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 4 23:45:04.851710 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 4 23:45:04.852822 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 4 23:45:04.853552 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 4 23:45:04.857526 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 4 23:45:04.859064 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 4 23:45:04.859187 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 4 23:45:04.860328 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 4 23:45:04.860431 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 4 23:45:04.863098 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 4 23:45:04.863282 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 4 23:45:04.866995 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Sep 4 23:45:04.867710 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 4 23:45:04.867818 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 4 23:45:04.871676 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Sep 4 23:45:04.871984 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 4 23:45:04.872737 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 4 23:45:04.878503 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Sep 4 23:45:04.879667 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 4 23:45:04.879768 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 4 23:45:04.889185 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 4 23:45:04.893947 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 4 23:45:04.894052 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 4 23:45:04.895398 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 4 23:45:04.895446 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 4 23:45:04.897017 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 4 23:45:04.897070 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 4 23:45:04.898124 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 4 23:45:04.901922 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 4 23:45:04.911280 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 4 23:45:04.911450 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 4 23:45:04.916448 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 4 23:45:04.916703 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 4 23:45:04.918749 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 4 23:45:04.918805 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 4 23:45:04.921045 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 4 23:45:04.921093 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 4 23:45:04.922087 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 4 23:45:04.922144 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 4 23:45:04.923710 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 4 23:45:04.923767 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 4 23:45:04.925004 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 4 23:45:04.925044 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 23:45:04.931848 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 4 23:45:04.932364 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 4 23:45:04.932436 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 4 23:45:04.934804 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 4 23:45:04.934864 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 23:45:04.943777 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 4 23:45:04.943872 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 4 23:45:04.948221 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 4 23:45:04.952871 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 4 23:45:04.962251 systemd[1]: Switching root.
Sep 4 23:45:05.006456 systemd-journald[236]: Journal stopped
Sep 4 23:45:06.020465 systemd-journald[236]: Received SIGTERM from PID 1 (systemd).
Sep 4 23:45:06.020531 kernel: SELinux: policy capability network_peer_controls=1
Sep 4 23:45:06.020545 kernel: SELinux: policy capability open_perms=1
Sep 4 23:45:06.020554 kernel: SELinux: policy capability extended_socket_class=1
Sep 4 23:45:06.020564 kernel: SELinux: policy capability always_check_network=0
Sep 4 23:45:06.020573 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 4 23:45:06.027900 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 4 23:45:06.027939 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 4 23:45:06.027952 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 4 23:45:06.027968 systemd[1]: Successfully loaded SELinux policy in 34.292ms.
Sep 4 23:45:06.027991 kernel: audit: type=1403 audit(1757029505.153:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 4 23:45:06.028005 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 11.368ms.
Sep 4 23:45:06.028017 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 4 23:45:06.028027 systemd[1]: Detected virtualization kvm.
Sep 4 23:45:06.028038 systemd[1]: Detected architecture arm64.
Sep 4 23:45:06.028048 systemd[1]: Detected first boot.
Sep 4 23:45:06.030370 systemd[1]: Hostname set to .
Sep 4 23:45:06.030410 systemd[1]: Initializing machine ID from VM UUID.
Sep 4 23:45:06.030429 zram_generator::config[1058]: No configuration found.
Sep 4 23:45:06.030442 kernel: NET: Registered PF_VSOCK protocol family
Sep 4 23:45:06.030453 systemd[1]: Populated /etc with preset unit settings.
Sep 4 23:45:06.030466 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Sep 4 23:45:06.030476 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 4 23:45:06.030486 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 4 23:45:06.030496 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 4 23:45:06.030506 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 4 23:45:06.030519 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 4 23:45:06.030529 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 4 23:45:06.030539 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 4 23:45:06.030555 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 4 23:45:06.030569 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 4 23:45:06.030595 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 4 23:45:06.030650 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 4 23:45:06.030664 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 4 23:45:06.030675 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 4 23:45:06.030689 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 4 23:45:06.030699 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 4 23:45:06.030711 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 4 23:45:06.030721 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 4 23:45:06.030731 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Sep 4 23:45:06.030742 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 4 23:45:06.030754 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 4 23:45:06.030764 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 4 23:45:06.030774 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 4 23:45:06.030785 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 4 23:45:06.030794 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 4 23:45:06.030805 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 4 23:45:06.030815 systemd[1]: Reached target slices.target - Slice Units.
Sep 4 23:45:06.030825 systemd[1]: Reached target swap.target - Swaps.
Sep 4 23:45:06.030835 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 4 23:45:06.030847 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 4 23:45:06.030859 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Sep 4 23:45:06.030870 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 4 23:45:06.030880 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 4 23:45:06.030891 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 4 23:45:06.030901 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 4 23:45:06.030911 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 4 23:45:06.030929 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 4 23:45:06.030944 systemd[1]: Mounting media.mount - External Media Directory...
Sep 4 23:45:06.030956 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 4 23:45:06.030966 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 4 23:45:06.030977 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 4 23:45:06.030987 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 4 23:45:06.030999 systemd[1]: Reached target machines.target - Containers.
Sep 4 23:45:06.031011 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 4 23:45:06.031022 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 4 23:45:06.031032 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 4 23:45:06.031042 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 4 23:45:06.031053 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 4 23:45:06.031063 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 4 23:45:06.031078 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 4 23:45:06.031088 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 4 23:45:06.031100 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 4 23:45:06.031113 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 4 23:45:06.031123 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 4 23:45:06.031133 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 4 23:45:06.031145 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 4 23:45:06.031155 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 4 23:45:06.031166 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 4 23:45:06.031181 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 4 23:45:06.031191 kernel: fuse: init (API version 7.39)
Sep 4 23:45:06.031202 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 4 23:45:06.031213 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 4 23:45:06.031224 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 4 23:45:06.031234 kernel: ACPI: bus type drm_connector registered
Sep 4 23:45:06.031244 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Sep 4 23:45:06.031255 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 4 23:45:06.031265 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 4 23:45:06.031275 systemd[1]: Stopped verity-setup.service.
Sep 4 23:45:06.031285 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 4 23:45:06.031295 kernel: loop: module loaded
Sep 4 23:45:06.031304 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 4 23:45:06.031315 systemd[1]: Mounted media.mount - External Media Directory.
Sep 4 23:45:06.031326 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 4 23:45:06.031336 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 4 23:45:06.031348 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 4 23:45:06.031359 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 4 23:45:06.031370 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 4 23:45:06.031381 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 4 23:45:06.031391 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 4 23:45:06.031403 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 4 23:45:06.031413 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 4 23:45:06.031423 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 4 23:45:06.031433 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 4 23:45:06.031443 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 4 23:45:06.031453 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 4 23:45:06.031463 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 4 23:45:06.031474 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 4 23:45:06.031484 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 4 23:45:06.031495 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 4 23:45:06.031508 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 4 23:45:06.031518 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 4 23:45:06.031528 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 4 23:45:06.031539 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 4 23:45:06.031550 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 4 23:45:06.031560 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Sep 4 23:45:06.031570 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 4 23:45:06.035745 systemd-journald[1126]: Collecting audit messages is disabled.
Sep 4 23:45:06.035788 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 4 23:45:06.035808 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 4 23:45:06.035820 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 4 23:45:06.035831 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 4 23:45:06.035841 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 4 23:45:06.035852 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 4 23:45:06.035866 systemd-journald[1126]: Journal started
Sep 4 23:45:06.035894 systemd-journald[1126]: Runtime Journal (/run/log/journal/4bd5a2404fdb46debdd68cb686a5a7f0) is 8M, max 76.6M, 68.6M free.
Sep 4 23:45:05.700775 systemd[1]: Queued start job for default target multi-user.target.
Sep 4 23:45:05.713093 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Sep 4 23:45:05.713633 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 4 23:45:06.046007 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 4 23:45:06.049865 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 4 23:45:06.056213 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 4 23:45:06.058642 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 4 23:45:06.060731 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 4 23:45:06.063106 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Sep 4 23:45:06.064201 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 4 23:45:06.067887 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 4 23:45:06.072918 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 4 23:45:06.077270 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 4 23:45:06.110864 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 4 23:45:06.119022 kernel: loop0: detected capacity change from 0 to 113512
Sep 4 23:45:06.116143 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 4 23:45:06.119290 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 4 23:45:06.131121 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 4 23:45:06.137714 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Sep 4 23:45:06.144981 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 4 23:45:06.151721 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Sep 4 23:45:06.154659 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 4 23:45:06.159681 systemd-journald[1126]: Time spent on flushing to /var/log/journal/4bd5a2404fdb46debdd68cb686a5a7f0 is 55.986ms for 1145 entries.
Sep 4 23:45:06.159681 systemd-journald[1126]: System Journal (/var/log/journal/4bd5a2404fdb46debdd68cb686a5a7f0) is 8M, max 584.8M, 576.8M free.
Sep 4 23:45:06.233553 systemd-journald[1126]: Received client request to flush runtime journal.
Sep 4 23:45:06.234089 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 4 23:45:06.234126 kernel: loop1: detected capacity change from 0 to 207008
Sep 4 23:45:06.202843 udevadm[1188]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Sep 4 23:45:06.237773 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 4 23:45:06.249747 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 4 23:45:06.261917 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 4 23:45:06.264126 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Sep 4 23:45:06.275862 kernel: loop2: detected capacity change from 0 to 123192
Sep 4 23:45:06.300920 systemd-tmpfiles[1198]: ACLs are not supported, ignoring.
Sep 4 23:45:06.300939 systemd-tmpfiles[1198]: ACLs are not supported, ignoring.
Sep 4 23:45:06.306509 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 4 23:45:06.308704 kernel: loop3: detected capacity change from 0 to 8
Sep 4 23:45:06.332081 kernel: loop4: detected capacity change from 0 to 113512
Sep 4 23:45:06.361951 kernel: loop5: detected capacity change from 0 to 207008
Sep 4 23:45:06.384628 kernel: loop6: detected capacity change from 0 to 123192
Sep 4 23:45:06.406623 kernel: loop7: detected capacity change from 0 to 8
Sep 4 23:45:06.408854 (sd-merge)[1203]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'.
Sep 4 23:45:06.409499 (sd-merge)[1203]: Merged extensions into '/usr'.
Sep 4 23:45:06.416519 systemd[1]: Reload requested from client PID 1159 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 4 23:45:06.417200 systemd[1]: Reloading...
Sep 4 23:45:06.507664 zram_generator::config[1228]: No configuration found.
Sep 4 23:45:06.594395 ldconfig[1155]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 4 23:45:06.675413 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 4 23:45:06.737035 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 4 23:45:06.737684 systemd[1]: Reloading finished in 319 ms.
Sep 4 23:45:06.754931 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 4 23:45:06.757398 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 4 23:45:06.769975 systemd[1]: Starting ensure-sysext.service...
Sep 4 23:45:06.778978 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 4 23:45:06.794266 systemd[1]: Reload requested from client PID 1268 ('systemctl') (unit ensure-sysext.service)...
Sep 4 23:45:06.794312 systemd[1]: Reloading...
Sep 4 23:45:06.812676 systemd-tmpfiles[1269]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 4 23:45:06.812904 systemd-tmpfiles[1269]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 4 23:45:06.813538 systemd-tmpfiles[1269]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 4 23:45:06.816959 systemd-tmpfiles[1269]: ACLs are not supported, ignoring.
Sep 4 23:45:06.817027 systemd-tmpfiles[1269]: ACLs are not supported, ignoring.
Sep 4 23:45:06.826344 systemd-tmpfiles[1269]: Detected autofs mount point /boot during canonicalization of boot.
Sep 4 23:45:06.826360 systemd-tmpfiles[1269]: Skipping /boot
Sep 4 23:45:06.844754 systemd-tmpfiles[1269]: Detected autofs mount point /boot during canonicalization of boot.
Sep 4 23:45:06.844775 systemd-tmpfiles[1269]: Skipping /boot
Sep 4 23:45:06.885634 zram_generator::config[1298]: No configuration found.
Sep 4 23:45:06.988080 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 4 23:45:07.049916 systemd[1]: Reloading finished in 255 ms.
Sep 4 23:45:07.066617 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 4 23:45:07.078665 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 4 23:45:07.091074 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 4 23:45:07.095919 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 4 23:45:07.099831 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 4 23:45:07.104926 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 4 23:45:07.119927 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 4 23:45:07.126661 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 4 23:45:07.129523 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 4 23:45:07.141932 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 4 23:45:07.149970 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 4 23:45:07.154769 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 4 23:45:07.155471 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 4 23:45:07.155614 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 4 23:45:07.158872 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 4 23:45:07.160426 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 4 23:45:07.161524 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 4 23:45:07.182079 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 4 23:45:07.187035 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 4 23:45:07.188335 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 4 23:45:07.188694 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 4 23:45:07.197642 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 4 23:45:07.199936 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 4 23:45:07.200151 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 4 23:45:07.210323 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 4 23:45:07.214525 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 4 23:45:07.216841 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 4 23:45:07.218780 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 4 23:45:07.218923 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 4 23:45:07.222203 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 4 23:45:07.222822 systemd-udevd[1343]: Using default interface naming scheme 'v255'.
Sep 4 23:45:07.234256 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 4 23:45:07.244939 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 4 23:45:07.245572 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 4 23:45:07.245734 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 4 23:45:07.246577 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 4 23:45:07.251449 systemd[1]: Finished ensure-sysext.service.
Sep 4 23:45:07.260907 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Sep 4 23:45:07.263794 augenrules[1379]: No rules
Sep 4 23:45:07.266937 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 4 23:45:07.267148 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 4 23:45:07.272054 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 4 23:45:07.272267 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 4 23:45:07.273383 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 4 23:45:07.273700 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 4 23:45:07.279011 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 4 23:45:07.281266 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 4 23:45:07.286008 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 4 23:45:07.286236 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 4 23:45:07.289413 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 4 23:45:07.289692 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 4 23:45:07.290647 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 4 23:45:07.290711 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 4 23:45:07.297576 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 4 23:45:07.307941 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 4 23:45:07.320120 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 4 23:45:07.436263 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Sep 4 23:45:07.462305 systemd-resolved[1341]: Positive Trust Anchors:
Sep 4 23:45:07.462332 systemd-resolved[1341]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 4 23:45:07.462364 systemd-resolved[1341]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 4 23:45:07.468357 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Sep 4 23:45:07.469306 systemd-resolved[1341]: Using system hostname 'ci-4230-2-2-n-5840999b78'.
Sep 4 23:45:07.470863 systemd[1]: Reached target time-set.target - System Time Set.
Sep 4 23:45:07.474188 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 4 23:45:07.475383 systemd-networkd[1394]: lo: Link UP
Sep 4 23:45:07.475758 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 4 23:45:07.476574 systemd-networkd[1394]: lo: Gained carrier
Sep 4 23:45:07.478374 systemd-timesyncd[1380]: No network connectivity, watching for changes.
Sep 4 23:45:07.479433 systemd-networkd[1394]: Enumeration completed
Sep 4 23:45:07.479553 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 4 23:45:07.480239 systemd[1]: Reached target network.target - Network.
Sep 4 23:45:07.485863 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Sep 4 23:45:07.513118 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 4 23:45:07.534259 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Sep 4 23:45:07.574564 systemd-networkd[1394]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 4 23:45:07.574935 systemd-networkd[1394]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 4 23:45:07.575575 systemd-networkd[1394]: eth0: Link UP
Sep 4 23:45:07.575665 systemd-networkd[1394]: eth0: Gained carrier
Sep 4 23:45:07.575687 systemd-networkd[1394]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 4 23:45:07.583950 systemd-networkd[1394]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 4 23:45:07.584092 systemd-networkd[1394]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 4 23:45:07.585363 systemd-networkd[1394]: eth1: Link UP
Sep 4 23:45:07.585485 systemd-networkd[1394]: eth1: Gained carrier
Sep 4 23:45:07.585509 systemd-networkd[1394]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 4 23:45:07.597615 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1409)
Sep 4 23:45:07.597722 kernel: mousedev: PS/2 mouse device common for all mice
Sep 4 23:45:07.609700 systemd-networkd[1394]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1
Sep 4 23:45:07.610346 systemd-timesyncd[1380]: Network configuration changed, trying to establish connection.
Sep 4 23:45:07.620738 systemd-networkd[1394]: eth0: DHCPv4 address 88.198.151.158/32, gateway 172.31.1.1 acquired from 172.31.1.1
Sep 4 23:45:07.621084 systemd-timesyncd[1380]: Network configuration changed, trying to establish connection.
Sep 4 23:45:07.621700 systemd-timesyncd[1380]: Network configuration changed, trying to establish connection.
Sep 4 23:45:07.688117 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Sep 4 23:45:07.695813 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 4 23:45:07.705199 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0
Sep 4 23:45:07.705297 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Sep 4 23:45:07.705312 kernel: [drm] features: -context_init
Sep 4 23:45:07.710198 kernel: [drm] number of scanouts: 1
Sep 4 23:45:07.710320 kernel: [drm] number of cap sets: 0
Sep 4 23:45:07.709530 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped.
Sep 4 23:45:07.711070 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 4 23:45:07.713810 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 4 23:45:07.714652 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0
Sep 4 23:45:07.717359 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 4 23:45:07.721727 kernel: Console: switching to colour frame buffer device 160x50
Sep 4 23:45:07.740649 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Sep 4 23:45:07.753072 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 4 23:45:07.753792 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 4 23:45:07.753835 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 4 23:45:07.753857 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 4 23:45:07.754682 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 4 23:45:07.768000 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 4 23:45:07.768304 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 4 23:45:07.769767 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 4 23:45:07.769990 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 4 23:45:07.772824 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 4 23:45:07.773319 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 4 23:45:07.788917 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 4 23:45:07.788958 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 4 23:45:07.793957 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 23:45:07.866112 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 23:45:07.925132 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Sep 4 23:45:07.932923 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Sep 4 23:45:07.946375 lvm[1462]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 4 23:45:07.976333 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Sep 4 23:45:07.978745 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 4 23:45:07.979334 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 4 23:45:07.980073 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 4 23:45:07.980859 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 4 23:45:07.981703 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 4 23:45:07.982313 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 4 23:45:07.983011 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 4 23:45:07.983624 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 4 23:45:07.983658 systemd[1]: Reached target paths.target - Path Units.
Sep 4 23:45:07.984090 systemd[1]: Reached target timers.target - Timer Units.
Sep 4 23:45:07.986707 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 4 23:45:07.991041 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 4 23:45:07.995945 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Sep 4 23:45:07.996889 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Sep 4 23:45:07.997511 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Sep 4 23:45:08.001388 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 4 23:45:08.002972 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Sep 4 23:45:08.009008 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Sep 4 23:45:08.010992 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 4 23:45:08.012003 systemd[1]: Reached target sockets.target - Socket Units.
Sep 4 23:45:08.012574 systemd[1]: Reached target basic.target - Basic System.
Sep 4 23:45:08.013228 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 4 23:45:08.013271 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 4 23:45:08.021848 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 4 23:45:08.026037 lvm[1466]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 4 23:45:08.035906 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Sep 4 23:45:08.040691 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 4 23:45:08.048843 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 4 23:45:08.052542 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 4 23:45:08.053180 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 4 23:45:08.055769 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 4 23:45:08.058503 jq[1470]: false
Sep 4 23:45:08.067761 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep 4 23:45:08.074483 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent.
Sep 4 23:45:08.086900 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep 4 23:45:08.092213 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep 4 23:45:08.103160 extend-filesystems[1473]: Found loop4 Sep 4 23:45:08.103160 extend-filesystems[1473]: Found loop5 Sep 4 23:45:08.103160 extend-filesystems[1473]: Found loop6 Sep 4 23:45:08.103160 extend-filesystems[1473]: Found loop7 Sep 4 23:45:08.103160 extend-filesystems[1473]: Found sda Sep 4 23:45:08.103160 extend-filesystems[1473]: Found sda1 Sep 4 23:45:08.103160 extend-filesystems[1473]: Found sda2 Sep 4 23:45:08.103160 extend-filesystems[1473]: Found sda3 Sep 4 23:45:08.103160 extend-filesystems[1473]: Found usr Sep 4 23:45:08.103160 extend-filesystems[1473]: Found sda4 Sep 4 23:45:08.103160 extend-filesystems[1473]: Found sda6 Sep 4 23:45:08.103160 extend-filesystems[1473]: Found sda7 Sep 4 23:45:08.103160 extend-filesystems[1473]: Found sda9 Sep 4 23:45:08.103160 extend-filesystems[1473]: Checking size of /dev/sda9 Sep 4 23:45:08.175818 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Sep 4 23:45:08.111872 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 4 23:45:08.127926 dbus-daemon[1469]: [system] SELinux support is enabled Sep 4 23:45:08.184918 extend-filesystems[1473]: Resized partition /dev/sda9 Sep 4 23:45:08.190144 coreos-metadata[1468]: Sep 04 23:45:08.110 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Sep 4 23:45:08.190144 coreos-metadata[1468]: Sep 04 23:45:08.126 INFO Fetch successful Sep 4 23:45:08.190144 coreos-metadata[1468]: Sep 04 23:45:08.126 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Sep 4 23:45:08.190144 coreos-metadata[1468]: Sep 04 23:45:08.145 INFO Fetch successful Sep 4 23:45:08.116218 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
Sep 4 23:45:08.190382 extend-filesystems[1494]: resize2fs 1.47.1 (20-May-2024) Sep 4 23:45:08.211678 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1416) Sep 4 23:45:08.116882 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 4 23:45:08.121848 systemd[1]: Starting update-engine.service - Update Engine... Sep 4 23:45:08.211992 jq[1493]: true Sep 4 23:45:08.126373 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 4 23:45:08.129923 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 4 23:45:08.134892 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Sep 4 23:45:08.139177 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 4 23:45:08.139406 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 4 23:45:08.139789 systemd[1]: motdgen.service: Deactivated successfully. Sep 4 23:45:08.140003 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 4 23:45:08.166107 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 4 23:45:08.223453 tar[1499]: linux-arm64/LICENSE Sep 4 23:45:08.223453 tar[1499]: linux-arm64/helm Sep 4 23:45:08.166367 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 4 23:45:08.208935 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 4 23:45:08.208964 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
Sep 4 23:45:08.213438 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 4 23:45:08.213458 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 4 23:45:08.235382 (ntainerd)[1509]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 4 23:45:08.240221 jq[1510]: true Sep 4 23:45:08.300796 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Sep 4 23:45:08.310379 update_engine[1490]: I20250904 23:45:08.308219 1490 main.cc:92] Flatcar Update Engine starting Sep 4 23:45:08.315532 extend-filesystems[1494]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Sep 4 23:45:08.315532 extend-filesystems[1494]: old_desc_blocks = 1, new_desc_blocks = 5 Sep 4 23:45:08.315532 extend-filesystems[1494]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. Sep 4 23:45:08.334932 extend-filesystems[1473]: Resized filesystem in /dev/sda9 Sep 4 23:45:08.334932 extend-filesystems[1473]: Found sr0 Sep 4 23:45:08.342221 update_engine[1490]: I20250904 23:45:08.327909 1490 update_check_scheduler.cc:74] Next update check in 10m22s Sep 4 23:45:08.318761 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 4 23:45:08.318996 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 4 23:45:08.325389 systemd[1]: Started update-engine.service - Update Engine. Sep 4 23:45:08.341234 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 4 23:45:08.364666 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Sep 4 23:45:08.366126 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 4 23:45:08.369488 systemd-logind[1483]: New seat seat0. 
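Annotation: the resize2fs messages above report the root filesystem growing from 1617920 to 9393147 blocks, with the "(4k)" suffix indicating a 4 KiB block size. A quick sketch converting those block counts into sizes (the constants are taken directly from the log lines):

```python
# Convert the ext4 block counts reported by resize2fs into GiB.
# Block size is 4 KiB, per the "(4k) blocks" suffix in the log.
BLOCK_SIZE = 4096

def blocks_to_gib(blocks: int) -> float:
    """Size in GiB for a given number of 4 KiB blocks."""
    return blocks * BLOCK_SIZE / 2**30

old_size = blocks_to_gib(1_617_920)   # before the online resize (~6.17 GiB)
new_size = blocks_to_gib(9_393_147)   # after growing into /dev/sda9 (~35.83 GiB)

print(f"before: {old_size:.2f} GiB, after: {new_size:.2f} GiB")
```

This matches the pattern of a fresh Flatcar install expanding its small initial root filesystem to fill the partition on first boot.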
Sep 4 23:45:08.379070 systemd-logind[1483]: Watching system buttons on /dev/input/event0 (Power Button) Sep 4 23:45:08.379094 systemd-logind[1483]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard) Sep 4 23:45:08.379923 systemd[1]: Started systemd-logind.service - User Login Management. Sep 4 23:45:08.389543 bash[1544]: Updated "/home/core/.ssh/authorized_keys" Sep 4 23:45:08.390055 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 4 23:45:08.420185 systemd[1]: Starting sshkeys.service... Sep 4 23:45:08.442038 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Sep 4 23:45:08.446009 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Sep 4 23:45:08.498833 coreos-metadata[1548]: Sep 04 23:45:08.498 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Sep 4 23:45:08.502824 coreos-metadata[1548]: Sep 04 23:45:08.502 INFO Fetch successful Sep 4 23:45:08.507548 unknown[1548]: wrote ssh authorized keys file for user: core Sep 4 23:45:08.548934 containerd[1509]: time="2025-09-04T23:45:08.548812120Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Sep 4 23:45:08.556751 update-ssh-keys[1553]: Updated "/home/core/.ssh/authorized_keys" Sep 4 23:45:08.557996 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Sep 4 23:45:08.564629 systemd[1]: Finished sshkeys.service. Sep 4 23:45:08.588350 containerd[1509]: time="2025-09-04T23:45:08.588230520Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 4 23:45:08.590654 containerd[1509]: time="2025-09-04T23:45:08.590357000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.103-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 4 23:45:08.590654 containerd[1509]: time="2025-09-04T23:45:08.590396720Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 4 23:45:08.590654 containerd[1509]: time="2025-09-04T23:45:08.590414840Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 4 23:45:08.591791 containerd[1509]: time="2025-09-04T23:45:08.590575000Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Sep 4 23:45:08.591791 containerd[1509]: time="2025-09-04T23:45:08.590838640Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Sep 4 23:45:08.591791 containerd[1509]: time="2025-09-04T23:45:08.590923120Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 4 23:45:08.591791 containerd[1509]: time="2025-09-04T23:45:08.590936640Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 4 23:45:08.591791 containerd[1509]: time="2025-09-04T23:45:08.591137360Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 4 23:45:08.591791 containerd[1509]: time="2025-09-04T23:45:08.591151640Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Sep 4 23:45:08.591791 containerd[1509]: time="2025-09-04T23:45:08.591164120Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Sep 4 23:45:08.591791 containerd[1509]: time="2025-09-04T23:45:08.591173040Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 4 23:45:08.591791 containerd[1509]: time="2025-09-04T23:45:08.591240480Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 4 23:45:08.591791 containerd[1509]: time="2025-09-04T23:45:08.591422600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 4 23:45:08.591791 containerd[1509]: time="2025-09-04T23:45:08.591544080Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 4 23:45:08.592069 containerd[1509]: time="2025-09-04T23:45:08.591557600Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 4 23:45:08.592069 containerd[1509]: time="2025-09-04T23:45:08.591704800Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 4 23:45:08.592069 containerd[1509]: time="2025-09-04T23:45:08.591755040Z" level=info msg="metadata content store policy set" policy=shared Sep 4 23:45:08.598942 containerd[1509]: time="2025-09-04T23:45:08.598899040Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 4 23:45:08.599161 containerd[1509]: time="2025-09-04T23:45:08.599132160Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." 
type=io.containerd.differ.v1 Sep 4 23:45:08.599277 containerd[1509]: time="2025-09-04T23:45:08.599261000Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Sep 4 23:45:08.599359 containerd[1509]: time="2025-09-04T23:45:08.599345520Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Sep 4 23:45:08.599549 containerd[1509]: time="2025-09-04T23:45:08.599530200Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 4 23:45:08.599880 containerd[1509]: time="2025-09-04T23:45:08.599857480Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 4 23:45:08.600198 containerd[1509]: time="2025-09-04T23:45:08.600179400Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 4 23:45:08.600641 containerd[1509]: time="2025-09-04T23:45:08.600506440Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Sep 4 23:45:08.600641 containerd[1509]: time="2025-09-04T23:45:08.600533800Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Sep 4 23:45:08.600641 containerd[1509]: time="2025-09-04T23:45:08.600550760Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Sep 4 23:45:08.600641 containerd[1509]: time="2025-09-04T23:45:08.600565240Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 4 23:45:08.601507 containerd[1509]: time="2025-09-04T23:45:08.600830400Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." 
type=io.containerd.service.v1 Sep 4 23:45:08.601507 containerd[1509]: time="2025-09-04T23:45:08.600869160Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 4 23:45:08.601507 containerd[1509]: time="2025-09-04T23:45:08.600889240Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 4 23:45:08.601507 containerd[1509]: time="2025-09-04T23:45:08.600904960Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 4 23:45:08.601507 containerd[1509]: time="2025-09-04T23:45:08.600919400Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 4 23:45:08.601507 containerd[1509]: time="2025-09-04T23:45:08.600932960Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 4 23:45:08.601507 containerd[1509]: time="2025-09-04T23:45:08.600945480Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 4 23:45:08.601507 containerd[1509]: time="2025-09-04T23:45:08.600967760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 4 23:45:08.601507 containerd[1509]: time="2025-09-04T23:45:08.600989600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 4 23:45:08.601507 containerd[1509]: time="2025-09-04T23:45:08.601003560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 4 23:45:08.601507 containerd[1509]: time="2025-09-04T23:45:08.601017640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1 Sep 4 23:45:08.601507 containerd[1509]: time="2025-09-04T23:45:08.601032080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 4 23:45:08.601507 containerd[1509]: time="2025-09-04T23:45:08.601045760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 4 23:45:08.601507 containerd[1509]: time="2025-09-04T23:45:08.601058000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 4 23:45:08.601851 containerd[1509]: time="2025-09-04T23:45:08.601072320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 4 23:45:08.601851 containerd[1509]: time="2025-09-04T23:45:08.601085920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Sep 4 23:45:08.601851 containerd[1509]: time="2025-09-04T23:45:08.601104520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Sep 4 23:45:08.601851 containerd[1509]: time="2025-09-04T23:45:08.601118240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 4 23:45:08.601851 containerd[1509]: time="2025-09-04T23:45:08.601131960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Sep 4 23:45:08.601851 containerd[1509]: time="2025-09-04T23:45:08.601145320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 4 23:45:08.601851 containerd[1509]: time="2025-09-04T23:45:08.601161240Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 4 23:45:08.601851 containerd[1509]: time="2025-09-04T23:45:08.601188720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." 
type=io.containerd.grpc.v1 Sep 4 23:45:08.601851 containerd[1509]: time="2025-09-04T23:45:08.601203520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 4 23:45:08.601851 containerd[1509]: time="2025-09-04T23:45:08.601217440Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 4 23:45:08.604095 containerd[1509]: time="2025-09-04T23:45:08.603346560Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 4 23:45:08.604095 containerd[1509]: time="2025-09-04T23:45:08.603476960Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 4 23:45:08.604095 containerd[1509]: time="2025-09-04T23:45:08.603490960Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 4 23:45:08.604095 containerd[1509]: time="2025-09-04T23:45:08.603503440Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 4 23:45:08.604095 containerd[1509]: time="2025-09-04T23:45:08.603512520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 4 23:45:08.604095 containerd[1509]: time="2025-09-04T23:45:08.603525880Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 4 23:45:08.604095 containerd[1509]: time="2025-09-04T23:45:08.604016400Z" level=info msg="NRI interface is disabled by configuration." Sep 4 23:45:08.604095 containerd[1509]: time="2025-09-04T23:45:08.604039000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Sep 4 23:45:08.605328 containerd[1509]: time="2025-09-04T23:45:08.604728720Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 4 23:45:08.605328 containerd[1509]: time="2025-09-04T23:45:08.604786320Z" level=info msg="Connect containerd service" Sep 4 23:45:08.605328 containerd[1509]: time="2025-09-04T23:45:08.604838880Z" level=info msg="using legacy CRI server" Sep 4 23:45:08.605328 containerd[1509]: time="2025-09-04T23:45:08.604846840Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 4 23:45:08.605328 containerd[1509]: time="2025-09-04T23:45:08.605113800Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 4 23:45:08.606609 containerd[1509]: time="2025-09-04T23:45:08.606256840Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 4 23:45:08.606609 containerd[1509]: time="2025-09-04T23:45:08.606432360Z" level=info msg="Start subscribing containerd event" Sep 4 23:45:08.606609 containerd[1509]: time="2025-09-04T23:45:08.606476200Z" level=info msg="Start recovering state" Sep 4 23:45:08.606609 containerd[1509]: time="2025-09-04T23:45:08.606542120Z" level=info msg="Start event monitor" Sep 4 23:45:08.606609 containerd[1509]: time="2025-09-04T23:45:08.606553320Z" level=info msg="Start snapshots 
syncer" Sep 4 23:45:08.606609 containerd[1509]: time="2025-09-04T23:45:08.606562440Z" level=info msg="Start cni network conf syncer for default" Sep 4 23:45:08.606609 containerd[1509]: time="2025-09-04T23:45:08.606570880Z" level=info msg="Start streaming server" Sep 4 23:45:08.607756 containerd[1509]: time="2025-09-04T23:45:08.607734480Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 4 23:45:08.607975 containerd[1509]: time="2025-09-04T23:45:08.607957200Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 4 23:45:08.608100 containerd[1509]: time="2025-09-04T23:45:08.608087520Z" level=info msg="containerd successfully booted in 0.062786s" Sep 4 23:45:08.608197 systemd[1]: Started containerd.service - containerd container runtime. Sep 4 23:45:08.615276 locksmithd[1539]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 4 23:45:08.757719 systemd-networkd[1394]: eth0: Gained IPv6LL Sep 4 23:45:08.758247 systemd-timesyncd[1380]: Network configuration changed, trying to establish connection. Sep 4 23:45:08.764143 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 4 23:45:08.767259 systemd[1]: Reached target network-online.target - Network is Online. Sep 4 23:45:08.775829 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 23:45:08.779521 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 4 23:45:08.832225 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 4 23:45:08.888700 systemd-networkd[1394]: eth1: Gained IPv6LL Sep 4 23:45:08.889147 systemd-timesyncd[1380]: Network configuration changed, trying to establish connection. Sep 4 23:45:08.951400 sshd_keygen[1503]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 4 23:45:08.980288 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. 
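Annotation: the CRI plugin configuration dumped by containerd above (overlayfs snapshotter, runc as the default runtime via `io.containerd.runc.v2`, and `Options:map[SystemdCgroup:true]`) corresponds to a `config.toml` section roughly like the sketch below. The host's actual `/etc/containerd/config.toml` is not shown in the log, so this is an illustrative reconstruction, not the file itself:

```toml
# Illustrative sketch matching the dumped CRI config: overlayfs snapshotter,
# runc via io.containerd.runc.v2, systemd cgroup driver enabled.
[plugins."io.containerd.grpc.v1.cri".containerd]
  snapshotter = "overlayfs"
  default_runtime_name = "runc"
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
    runtime_type = "io.containerd.runc.v2"
    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
      SystemdCgroup = true
```

The `cni config load failed` error in the same dump is expected at this stage: no CNI configuration exists yet in /etc/cni/net.d, and containerd retries once the cluster networking add-on installs one.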
Sep 4 23:45:08.988331 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 4 23:45:08.999396 systemd[1]: issuegen.service: Deactivated successfully. Sep 4 23:45:09.000107 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 4 23:45:09.007964 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 4 23:45:09.013235 tar[1499]: linux-arm64/README.md Sep 4 23:45:09.027007 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 4 23:45:09.030629 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 4 23:45:09.040033 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 4 23:45:09.044017 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Sep 4 23:45:09.044860 systemd[1]: Reached target getty.target - Login Prompts. Sep 4 23:45:09.651250 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 23:45:09.652553 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 4 23:45:09.656735 systemd[1]: Startup finished in 778ms (kernel) + 5.470s (initrd) + 4.537s (userspace) = 10.786s. Sep 4 23:45:09.660437 (kubelet)[1601]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 23:45:10.138337 kubelet[1601]: E0904 23:45:10.138158 1601 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 23:45:10.142141 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 23:45:10.142325 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 23:45:10.143208 systemd[1]: kubelet.service: Consumed 843ms CPU time, 257.9M memory peak. 
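Annotation: the "Startup finished" summary above reports 778ms (kernel) + 5.470s (initrd) + 4.537s (userspace) = 10.786s. Summing the displayed phase figures gives 10.785 s, one millisecond below the printed total, because systemd totals its unrounded microsecond counters before formatting each phase for display:

```python
# Sum the phase durations as printed in the log; the displayed total (10.786 s)
# differs by 1 ms because systemd rounds each phase only after computing the sum.
kernel_s = 0.778
initrd_s = 5.470
userspace_s = 4.537

total = kernel_s + initrd_s + userspace_s
print(f"sum of displayed phases: {total:.3f} s")  # log prints 10.786 s
```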
Sep 4 23:45:20.150387 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 4 23:45:20.163955 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 23:45:20.292159 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 23:45:20.306453 (kubelet)[1620]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 23:45:20.363652 kubelet[1620]: E0904 23:45:20.363531 1620 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 23:45:20.366379 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 23:45:20.366698 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 23:45:20.366989 systemd[1]: kubelet.service: Consumed 179ms CPU time, 106.8M memory peak. Sep 4 23:45:30.400386 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 4 23:45:30.410984 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 23:45:30.542681 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
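Annotation: the kubelet failures above repeat on a ~10-second cadence (23:45:10 → 23:45:20 → 23:45:30, "restart counter is at 1/2/3…") because the unit is configured to restart after each exit. The actual kubelet.service shipped on this host is not shown in the log; a hypothetical unit fragment that would produce this observed behavior looks like:

```ini
# Hypothetical fragment reproducing the observed ~10 s restart cadence;
# the real kubelet.service unit file does not appear in this log.
[Service]
Restart=always
RestartSec=10
```

The underlying error is benign at this point: /var/lib/kubelet/config.yaml does not exist until the node is joined to a cluster (e.g. by kubeadm), so the kubelet simply keeps retrying until that file appears.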
Sep 4 23:45:30.550940 (kubelet)[1636]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 23:45:30.594609 kubelet[1636]: E0904 23:45:30.593859 1636 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 23:45:30.597142 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 23:45:30.597432 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 23:45:30.598081 systemd[1]: kubelet.service: Consumed 152ms CPU time, 107.5M memory peak. Sep 4 23:45:38.948851 systemd-timesyncd[1380]: Contacted time server 85.215.189.120:123 (2.flatcar.pool.ntp.org). Sep 4 23:45:38.949051 systemd-timesyncd[1380]: Initial clock synchronization to Thu 2025-09-04 23:45:38.819795 UTC. Sep 4 23:45:40.650705 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Sep 4 23:45:40.660094 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 23:45:40.795408 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 4 23:45:40.808128 (kubelet)[1650]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 23:45:40.848625 kubelet[1650]: E0904 23:45:40.848233 1650 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 23:45:40.851187 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 23:45:40.851405 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 23:45:40.851787 systemd[1]: kubelet.service: Consumed 154ms CPU time, 106.9M memory peak. Sep 4 23:45:44.836731 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 4 23:45:44.843103 systemd[1]: Started sshd@0-88.198.151.158:22-139.178.68.195:44104.service - OpenSSH per-connection server daemon (139.178.68.195:44104). Sep 4 23:45:45.909291 sshd[1659]: Accepted publickey for core from 139.178.68.195 port 44104 ssh2: RSA SHA256:mO6YHl7qCkvXk9I2QzSjJf9VN7vVqy+ZQWo85qMF4pQ Sep 4 23:45:45.913178 sshd-session[1659]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:45:45.922910 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 4 23:45:45.935828 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 4 23:45:45.944753 systemd-logind[1483]: New session 1 of user core. Sep 4 23:45:45.953852 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 4 23:45:45.962205 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Sep 4 23:45:45.967335 (systemd)[1663]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 4 23:45:45.971565 systemd-logind[1483]: New session c1 of user core. Sep 4 23:45:46.106471 systemd[1663]: Queued start job for default target default.target. Sep 4 23:45:46.114714 systemd[1663]: Created slice app.slice - User Application Slice. Sep 4 23:45:46.114781 systemd[1663]: Reached target paths.target - Paths. Sep 4 23:45:46.114994 systemd[1663]: Reached target timers.target - Timers. Sep 4 23:45:46.116615 systemd[1663]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 4 23:45:46.130786 systemd[1663]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 4 23:45:46.130910 systemd[1663]: Reached target sockets.target - Sockets. Sep 4 23:45:46.130961 systemd[1663]: Reached target basic.target - Basic System. Sep 4 23:45:46.130992 systemd[1663]: Reached target default.target - Main User Target. Sep 4 23:45:46.131018 systemd[1663]: Startup finished in 150ms. Sep 4 23:45:46.131192 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 4 23:45:46.135829 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 4 23:45:46.867370 systemd[1]: Started sshd@1-88.198.151.158:22-139.178.68.195:44108.service - OpenSSH per-connection server daemon (139.178.68.195:44108). Sep 4 23:45:47.861550 sshd[1674]: Accepted publickey for core from 139.178.68.195 port 44108 ssh2: RSA SHA256:mO6YHl7qCkvXk9I2QzSjJf9VN7vVqy+ZQWo85qMF4pQ Sep 4 23:45:47.863620 sshd-session[1674]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:45:47.868700 systemd-logind[1483]: New session 2 of user core. Sep 4 23:45:47.880183 systemd[1]: Started session-2.scope - Session 2 of User core. 
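Annotation: the `sshd@0-88.198.151.158:22-139.178.68.195:44104.service` instance names above are the signature of per-connection socket activation: systemd listens on port 22 and spawns one templated sshd service per accepted TCP connection. The host's actual sshd.socket unit is not shown in the log; an illustrative sketch of the mechanism:

```ini
# Illustrative per-connection socket activation; each accepted connection
# starts its own sshd@<instance>.service, as seen in the log above.
# (The real sshd.socket on this host is generated and not shown here.)
[Socket]
ListenStream=22
Accept=yes
```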
Sep 4 23:45:48.547851 sshd[1676]: Connection closed by 139.178.68.195 port 44108
Sep 4 23:45:48.548874 sshd-session[1674]: pam_unix(sshd:session): session closed for user core
Sep 4 23:45:48.553606 systemd[1]: sshd@1-88.198.151.158:22-139.178.68.195:44108.service: Deactivated successfully.
Sep 4 23:45:48.555173 systemd[1]: session-2.scope: Deactivated successfully.
Sep 4 23:45:48.556962 systemd-logind[1483]: Session 2 logged out. Waiting for processes to exit.
Sep 4 23:45:48.559343 systemd-logind[1483]: Removed session 2.
Sep 4 23:45:48.730110 systemd[1]: Started sshd@2-88.198.151.158:22-139.178.68.195:44120.service - OpenSSH per-connection server daemon (139.178.68.195:44120).
Sep 4 23:45:49.728212 sshd[1682]: Accepted publickey for core from 139.178.68.195 port 44120 ssh2: RSA SHA256:mO6YHl7qCkvXk9I2QzSjJf9VN7vVqy+ZQWo85qMF4pQ
Sep 4 23:45:49.730413 sshd-session[1682]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 23:45:49.735344 systemd-logind[1483]: New session 3 of user core.
Sep 4 23:45:49.745928 systemd[1]: Started session-3.scope - Session 3 of User core.
Sep 4 23:45:50.411775 sshd[1684]: Connection closed by 139.178.68.195 port 44120
Sep 4 23:45:50.412763 sshd-session[1682]: pam_unix(sshd:session): session closed for user core
Sep 4 23:45:50.419076 systemd-logind[1483]: Session 3 logged out. Waiting for processes to exit.
Sep 4 23:45:50.420423 systemd[1]: sshd@2-88.198.151.158:22-139.178.68.195:44120.service: Deactivated successfully.
Sep 4 23:45:50.422657 systemd[1]: session-3.scope: Deactivated successfully.
Sep 4 23:45:50.424184 systemd-logind[1483]: Removed session 3.
Sep 4 23:45:50.610893 systemd[1]: Started sshd@3-88.198.151.158:22-139.178.68.195:46166.service - OpenSSH per-connection server daemon (139.178.68.195:46166).
Sep 4 23:45:50.901081 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Sep 4 23:45:50.907899 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 23:45:51.044779 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 23:45:51.053996 (kubelet)[1700]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 4 23:45:51.103267 kubelet[1700]: E0904 23:45:51.103149 1700 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 4 23:45:51.105526 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 4 23:45:51.105697 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 4 23:45:51.106223 systemd[1]: kubelet.service: Consumed 164ms CPU time, 107.2M memory peak.
Sep 4 23:45:51.663890 sshd[1690]: Accepted publickey for core from 139.178.68.195 port 46166 ssh2: RSA SHA256:mO6YHl7qCkvXk9I2QzSjJf9VN7vVqy+ZQWo85qMF4pQ
Sep 4 23:45:51.666511 sshd-session[1690]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 23:45:51.678701 systemd-logind[1483]: New session 4 of user core.
Sep 4 23:45:51.685904 systemd[1]: Started session-4.scope - Session 4 of User core.
Sep 4 23:45:52.389288 sshd[1708]: Connection closed by 139.178.68.195 port 46166
Sep 4 23:45:52.390558 sshd-session[1690]: pam_unix(sshd:session): session closed for user core
Sep 4 23:45:52.394786 systemd[1]: sshd@3-88.198.151.158:22-139.178.68.195:46166.service: Deactivated successfully.
Sep 4 23:45:52.396742 systemd[1]: session-4.scope: Deactivated successfully.
Sep 4 23:45:52.398672 systemd-logind[1483]: Session 4 logged out. Waiting for processes to exit.
Sep 4 23:45:52.400246 systemd-logind[1483]: Removed session 4.
Sep 4 23:45:52.576015 systemd[1]: Started sshd@4-88.198.151.158:22-139.178.68.195:46174.service - OpenSSH per-connection server daemon (139.178.68.195:46174).
Sep 4 23:45:53.645269 sshd[1714]: Accepted publickey for core from 139.178.68.195 port 46174 ssh2: RSA SHA256:mO6YHl7qCkvXk9I2QzSjJf9VN7vVqy+ZQWo85qMF4pQ
Sep 4 23:45:53.647748 sshd-session[1714]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 23:45:53.654631 systemd-logind[1483]: New session 5 of user core.
Sep 4 23:45:53.661926 systemd[1]: Started session-5.scope - Session 5 of User core.
Sep 4 23:45:53.871441 update_engine[1490]: I20250904 23:45:53.871244 1490 update_attempter.cc:509] Updating boot flags...
Sep 4 23:45:53.928610 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1726)
Sep 4 23:45:53.995919 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1722)
Sep 4 23:45:54.209042 sudo[1735]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Sep 4 23:45:54.209319 sudo[1735]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 4 23:45:54.228660 sudo[1735]: pam_unix(sudo:session): session closed for user root
Sep 4 23:45:54.400262 sshd[1716]: Connection closed by 139.178.68.195 port 46174
Sep 4 23:45:54.401626 sshd-session[1714]: pam_unix(sshd:session): session closed for user core
Sep 4 23:45:54.407325 systemd[1]: sshd@4-88.198.151.158:22-139.178.68.195:46174.service: Deactivated successfully.
Sep 4 23:45:54.409411 systemd[1]: session-5.scope: Deactivated successfully.
Sep 4 23:45:54.411143 systemd-logind[1483]: Session 5 logged out. Waiting for processes to exit.
Sep 4 23:45:54.412503 systemd-logind[1483]: Removed session 5.
Sep 4 23:45:54.573118 systemd[1]: Started sshd@5-88.198.151.158:22-139.178.68.195:46176.service - OpenSSH per-connection server daemon (139.178.68.195:46176).
Sep 4 23:45:55.575924 sshd[1741]: Accepted publickey for core from 139.178.68.195 port 46176 ssh2: RSA SHA256:mO6YHl7qCkvXk9I2QzSjJf9VN7vVqy+ZQWo85qMF4pQ
Sep 4 23:45:55.578475 sshd-session[1741]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 23:45:55.587534 systemd-logind[1483]: New session 6 of user core.
Sep 4 23:45:55.593008 systemd[1]: Started session-6.scope - Session 6 of User core.
Sep 4 23:45:56.105245 sudo[1745]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Sep 4 23:45:56.105538 sudo[1745]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 4 23:45:56.110192 sudo[1745]: pam_unix(sudo:session): session closed for user root
Sep 4 23:45:56.117320 sudo[1744]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Sep 4 23:45:56.117657 sudo[1744]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 4 23:45:56.134154 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 4 23:45:56.173156 augenrules[1767]: No rules
Sep 4 23:45:56.174940 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 4 23:45:56.175151 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 4 23:45:56.176944 sudo[1744]: pam_unix(sudo:session): session closed for user root
Sep 4 23:45:56.338681 sshd[1743]: Connection closed by 139.178.68.195 port 46176
Sep 4 23:45:56.339706 sshd-session[1741]: pam_unix(sshd:session): session closed for user core
Sep 4 23:45:56.346131 systemd-logind[1483]: Session 6 logged out. Waiting for processes to exit.
Sep 4 23:45:56.346298 systemd[1]: sshd@5-88.198.151.158:22-139.178.68.195:46176.service: Deactivated successfully.
Sep 4 23:45:56.349457 systemd[1]: session-6.scope: Deactivated successfully.
Sep 4 23:45:56.352407 systemd-logind[1483]: Removed session 6.
Sep 4 23:45:56.534759 systemd[1]: Started sshd@6-88.198.151.158:22-139.178.68.195:46182.service - OpenSSH per-connection server daemon (139.178.68.195:46182).
Sep 4 23:45:57.599299 sshd[1776]: Accepted publickey for core from 139.178.68.195 port 46182 ssh2: RSA SHA256:mO6YHl7qCkvXk9I2QzSjJf9VN7vVqy+ZQWo85qMF4pQ
Sep 4 23:45:57.601350 sshd-session[1776]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 23:45:57.609468 systemd-logind[1483]: New session 7 of user core.
Sep 4 23:45:57.620015 systemd[1]: Started session-7.scope - Session 7 of User core.
Sep 4 23:45:58.158675 sudo[1779]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Sep 4 23:45:58.158982 sudo[1779]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 4 23:45:58.504042 systemd[1]: Starting docker.service - Docker Application Container Engine...
Sep 4 23:45:58.505783 (dockerd)[1796]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Sep 4 23:45:58.755924 dockerd[1796]: time="2025-09-04T23:45:58.755183820Z" level=info msg="Starting up"
Sep 4 23:45:58.853130 dockerd[1796]: time="2025-09-04T23:45:58.853087408Z" level=info msg="Loading containers: start."
Sep 4 23:45:59.029635 kernel: Initializing XFRM netlink socket
Sep 4 23:45:59.128843 systemd-networkd[1394]: docker0: Link UP
Sep 4 23:45:59.170137 dockerd[1796]: time="2025-09-04T23:45:59.170094559Z" level=info msg="Loading containers: done."
Sep 4 23:45:59.187674 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1021703398-merged.mount: Deactivated successfully.
Sep 4 23:45:59.191208 dockerd[1796]: time="2025-09-04T23:45:59.190617411Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep 4 23:45:59.191208 dockerd[1796]: time="2025-09-04T23:45:59.190748865Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
Sep 4 23:45:59.191208 dockerd[1796]: time="2025-09-04T23:45:59.190935737Z" level=info msg="Daemon has completed initialization"
Sep 4 23:45:59.232074 dockerd[1796]: time="2025-09-04T23:45:59.231947040Z" level=info msg="API listen on /run/docker.sock"
Sep 4 23:45:59.233059 systemd[1]: Started docker.service - Docker Application Container Engine.
Sep 4 23:46:00.282908 containerd[1509]: time="2025-09-04T23:46:00.282865442Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.8\""
Sep 4 23:46:00.950690 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1257573839.mount: Deactivated successfully.
Sep 4 23:46:01.150165 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Sep 4 23:46:01.159358 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 23:46:01.298072 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 23:46:01.309356 (kubelet)[2030]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 4 23:46:01.364506 kubelet[2030]: E0904 23:46:01.364440 2030 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 4 23:46:01.367491 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 4 23:46:01.367680 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 4 23:46:01.369702 systemd[1]: kubelet.service: Consumed 161ms CPU time, 105.3M memory peak.
Sep 4 23:46:02.563625 containerd[1509]: time="2025-09-04T23:46:02.561674122Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:46:02.563625 containerd[1509]: time="2025-09-04T23:46:02.563341637Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.8: active requests=0, bytes read=26328449"
Sep 4 23:46:02.565003 containerd[1509]: time="2025-09-04T23:46:02.564954714Z" level=info msg="ImageCreate event name:\"sha256:61d628eec7e2101b908b4476f1e8e620490a9e8754184860c8eed25183acaa8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:46:02.570815 containerd[1509]: time="2025-09-04T23:46:02.570752588Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6e1a2f9b24f69ee77d0c0edaf32b31fdbb5e1a613f4476272197e6e1e239050b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:46:02.572014 containerd[1509]: time="2025-09-04T23:46:02.571950054Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.8\" with image id \"sha256:61d628eec7e2101b908b4476f1e8e620490a9e8754184860c8eed25183acaa8a\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6e1a2f9b24f69ee77d0c0edaf32b31fdbb5e1a613f4476272197e6e1e239050b\", size \"26325157\" in 2.288508252s"
Sep 4 23:46:02.572014 containerd[1509]: time="2025-09-04T23:46:02.572000057Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.8\" returns image reference \"sha256:61d628eec7e2101b908b4476f1e8e620490a9e8754184860c8eed25183acaa8a\""
Sep 4 23:46:02.572805 containerd[1509]: time="2025-09-04T23:46:02.572758651Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.8\""
Sep 4 23:46:04.143393 containerd[1509]: time="2025-09-04T23:46:04.142199481Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:46:04.143393 containerd[1509]: time="2025-09-04T23:46:04.143338710Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.8: active requests=0, bytes read=22528572"
Sep 4 23:46:04.144481 containerd[1509]: time="2025-09-04T23:46:04.144389830Z" level=info msg="ImageCreate event name:\"sha256:f17de36e40fc7cc372be0021b2c58ad61f05d3ebe4d430551bc5e4cd9ed2a061\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:46:04.149076 containerd[1509]: time="2025-09-04T23:46:04.148995439Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:8788ccd28ceed9e2e5f8fc31375ef5771df8ea6e518b362c9a06f3cc709cd6c7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:46:04.150658 containerd[1509]: time="2025-09-04T23:46:04.149957569Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.8\" with image id \"sha256:f17de36e40fc7cc372be0021b2c58ad61f05d3ebe4d430551bc5e4cd9ed2a061\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:8788ccd28ceed9e2e5f8fc31375ef5771df8ea6e518b362c9a06f3cc709cd6c7\", size \"24065666\" in 1.577156868s"
Sep 4 23:46:04.150658 containerd[1509]: time="2025-09-04T23:46:04.150535799Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.8\" returns image reference \"sha256:f17de36e40fc7cc372be0021b2c58ad61f05d3ebe4d430551bc5e4cd9ed2a061\""
Sep 4 23:46:04.152250 containerd[1509]: time="2025-09-04T23:46:04.151920968Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.8\""
Sep 4 23:46:05.656570 containerd[1509]: time="2025-09-04T23:46:05.656477631Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:46:05.658542 containerd[1509]: time="2025-09-04T23:46:05.657941859Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.8: active requests=0, bytes read=17483547"
Sep 4 23:46:05.659614 containerd[1509]: time="2025-09-04T23:46:05.659391215Z" level=info msg="ImageCreate event name:\"sha256:fe86d26bce3ccd5f0c4057c205b63fde1c8c752778025aea4605ffc3b0f80211\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:46:05.663045 containerd[1509]: time="2025-09-04T23:46:05.663001051Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:43c58bcbd1c7812dd19f8bfa5ae11093ebefd28699453ce86fc710869e155cd4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:46:05.664417 containerd[1509]: time="2025-09-04T23:46:05.664375644Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.8\" with image id \"sha256:fe86d26bce3ccd5f0c4057c205b63fde1c8c752778025aea4605ffc3b0f80211\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:43c58bcbd1c7812dd19f8bfa5ae11093ebefd28699453ce86fc710869e155cd4\", size \"19020659\" in 1.512415098s"
Sep 4 23:46:05.664417 containerd[1509]: time="2025-09-04T23:46:05.664414505Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.8\" returns image reference \"sha256:fe86d26bce3ccd5f0c4057c205b63fde1c8c752778025aea4605ffc3b0f80211\""
Sep 4 23:46:05.665804 containerd[1509]: time="2025-09-04T23:46:05.665773186Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.8\""
Sep 4 23:46:06.546178 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3116716264.mount: Deactivated successfully.
Sep 4 23:46:06.845782 containerd[1509]: time="2025-09-04T23:46:06.844791027Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:46:06.847292 containerd[1509]: time="2025-09-04T23:46:06.847236318Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.8: active requests=0, bytes read=27376750"
Sep 4 23:46:06.848489 containerd[1509]: time="2025-09-04T23:46:06.848446269Z" level=info msg="ImageCreate event name:\"sha256:2cf30e39f99f8f4ee1a736a4f3175cc2d8d3f58936d8fa83ec5523658fdc7b8b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:46:06.851206 containerd[1509]: time="2025-09-04T23:46:06.851159722Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:adc1335b480ddd833aac3b0bd20f68ff0f3c3cf7a0bd337933b006d9f5cec40a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:46:06.852954 containerd[1509]: time="2025-09-04T23:46:06.852802364Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.8\" with image id \"sha256:2cf30e39f99f8f4ee1a736a4f3175cc2d8d3f58936d8fa83ec5523658fdc7b8b\", repo tag \"registry.k8s.io/kube-proxy:v1.32.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:adc1335b480ddd833aac3b0bd20f68ff0f3c3cf7a0bd337933b006d9f5cec40a\", size \"27375743\" in 1.186988758s"
Sep 4 23:46:06.852954 containerd[1509]: time="2025-09-04T23:46:06.852847744Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.8\" returns image reference \"sha256:2cf30e39f99f8f4ee1a736a4f3175cc2d8d3f58936d8fa83ec5523658fdc7b8b\""
Sep 4 23:46:06.853520 containerd[1509]: time="2025-09-04T23:46:06.853451240Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Sep 4 23:46:07.430569 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1614149890.mount: Deactivated successfully.
Sep 4 23:46:08.114921 containerd[1509]: time="2025-09-04T23:46:08.114817171Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:46:08.116798 containerd[1509]: time="2025-09-04T23:46:08.116739328Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951714"
Sep 4 23:46:08.117876 containerd[1509]: time="2025-09-04T23:46:08.117800253Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:46:08.123007 containerd[1509]: time="2025-09-04T23:46:08.122894228Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:46:08.127628 containerd[1509]: time="2025-09-04T23:46:08.127377967Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.273896138s"
Sep 4 23:46:08.127628 containerd[1509]: time="2025-09-04T23:46:08.127436707Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\""
Sep 4 23:46:08.128605 containerd[1509]: time="2025-09-04T23:46:08.128503230Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Sep 4 23:46:08.710724 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1260145096.mount: Deactivated successfully.
Sep 4 23:46:08.717551 containerd[1509]: time="2025-09-04T23:46:08.717478915Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:46:08.718567 containerd[1509]: time="2025-09-04T23:46:08.718514848Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268723"
Sep 4 23:46:08.720586 containerd[1509]: time="2025-09-04T23:46:08.719349249Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:46:08.722101 containerd[1509]: time="2025-09-04T23:46:08.722062500Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:46:08.723062 containerd[1509]: time="2025-09-04T23:46:08.723028097Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 594.48692ms"
Sep 4 23:46:08.723242 containerd[1509]: time="2025-09-04T23:46:08.723219993Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Sep 4 23:46:08.723942 containerd[1509]: time="2025-09-04T23:46:08.723777606Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Sep 4 23:46:09.311139 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount976155267.mount: Deactivated successfully.
Sep 4 23:46:11.268867 containerd[1509]: time="2025-09-04T23:46:11.267609103Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:46:11.270225 containerd[1509]: time="2025-09-04T23:46:11.270177570Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67943239"
Sep 4 23:46:11.271631 containerd[1509]: time="2025-09-04T23:46:11.271596974Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:46:11.280024 containerd[1509]: time="2025-09-04T23:46:11.279977467Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:46:11.282839 containerd[1509]: time="2025-09-04T23:46:11.282792240Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 2.558706776s"
Sep 4 23:46:11.283017 containerd[1509]: time="2025-09-04T23:46:11.282998994Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\""
Sep 4 23:46:11.399963 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
Sep 4 23:46:11.409253 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 23:46:11.540721 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 23:46:11.552043 (kubelet)[2193]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 4 23:46:11.599604 kubelet[2193]: E0904 23:46:11.599346 2193 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 4 23:46:11.602829 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 4 23:46:11.603139 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 4 23:46:11.603928 systemd[1]: kubelet.service: Consumed 161ms CPU time, 107M memory peak.
Sep 4 23:46:16.409969 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 23:46:16.410944 systemd[1]: kubelet.service: Consumed 161ms CPU time, 107M memory peak.
Sep 4 23:46:16.421179 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 23:46:16.466807 systemd[1]: Reload requested from client PID 2221 ('systemctl') (unit session-7.scope)...
Sep 4 23:46:16.466826 systemd[1]: Reloading...
Sep 4 23:46:16.592626 zram_generator::config[2272]: No configuration found.
Sep 4 23:46:16.697706 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 4 23:46:16.790404 systemd[1]: Reloading finished in 323 ms.
Sep 4 23:46:16.843116 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 23:46:16.849543 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 23:46:16.850544 systemd[1]: kubelet.service: Deactivated successfully.
Sep 4 23:46:16.850824 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 23:46:16.850877 systemd[1]: kubelet.service: Consumed 104ms CPU time, 94.9M memory peak.
Sep 4 23:46:16.857381 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 23:46:16.984986 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 23:46:16.998823 (kubelet)[2316]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 4 23:46:17.046898 kubelet[2316]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 4 23:46:17.046898 kubelet[2316]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Sep 4 23:46:17.046898 kubelet[2316]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 4 23:46:17.047372 kubelet[2316]: I0904 23:46:17.046946 2316 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 4 23:46:18.063345 kubelet[2316]: I0904 23:46:18.063266 2316 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Sep 4 23:46:18.063345 kubelet[2316]: I0904 23:46:18.063326 2316 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 4 23:46:18.063897 kubelet[2316]: I0904 23:46:18.063702 2316 server.go:954] "Client rotation is on, will bootstrap in background"
Sep 4 23:46:18.092220 kubelet[2316]: E0904 23:46:18.092164 2316 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://88.198.151.158:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 88.198.151.158:6443: connect: connection refused" logger="UnhandledError"
Sep 4 23:46:18.094961 kubelet[2316]: I0904 23:46:18.094681 2316 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 4 23:46:18.103193 kubelet[2316]: E0904 23:46:18.103120 2316 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Sep 4 23:46:18.103193 kubelet[2316]: I0904 23:46:18.103189 2316 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Sep 4 23:46:18.106470 kubelet[2316]: I0904 23:46:18.106440 2316 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 4 23:46:18.107563 kubelet[2316]: I0904 23:46:18.107485 2316 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 4 23:46:18.107838 kubelet[2316]: I0904 23:46:18.107549 2316 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230-2-2-n-5840999b78","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 4 23:46:18.107982 kubelet[2316]: I0904 23:46:18.107948 2316 topology_manager.go:138] "Creating topology manager with none policy"
Sep 4 23:46:18.107982 kubelet[2316]: I0904 23:46:18.107966 2316 container_manager_linux.go:304] "Creating device plugin manager"
Sep 4 23:46:18.108417 kubelet[2316]: I0904 23:46:18.108377 2316 state_mem.go:36] "Initialized new in-memory state store"
Sep 4 23:46:18.112302 kubelet[2316]: I0904 23:46:18.112110 2316 kubelet.go:446] "Attempting to sync node with API server"
Sep 4 23:46:18.112302 kubelet[2316]: I0904 23:46:18.112162 2316 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 4 23:46:18.112302 kubelet[2316]: I0904 23:46:18.112191 2316 kubelet.go:352] "Adding apiserver pod source"
Sep 4 23:46:18.112302 kubelet[2316]: I0904 23:46:18.112207 2316 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 4 23:46:18.115677 kubelet[2316]: W0904 23:46:18.114776 2316 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://88.198.151.158:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-2-2-n-5840999b78&limit=500&resourceVersion=0": dial tcp 88.198.151.158:6443: connect: connection refused
Sep 4 23:46:18.115677 kubelet[2316]: E0904 23:46:18.114887 2316 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://88.198.151.158:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-2-2-n-5840999b78&limit=500&resourceVersion=0\": dial tcp 88.198.151.158:6443: connect: connection refused" logger="UnhandledError"
Sep 4 23:46:18.115677 kubelet[2316]: W0904 23:46:18.115449 2316 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://88.198.151.158:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 88.198.151.158:6443: connect: connection refused
Sep 4 23:46:18.115677 kubelet[2316]: E0904 23:46:18.115514 2316 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://88.198.151.158:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 88.198.151.158:6443: connect: connection refused" logger="UnhandledError"
Sep 4 23:46:18.116723 kubelet[2316]: I0904 23:46:18.116698 2316 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Sep 4 23:46:18.117498 kubelet[2316]: I0904 23:46:18.117473 2316 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 4 23:46:18.117724 kubelet[2316]: W0904 23:46:18.117709 2316 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 4 23:46:18.119081 kubelet[2316]: I0904 23:46:18.119055 2316 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Sep 4 23:46:18.119230 kubelet[2316]: I0904 23:46:18.119219 2316 server.go:1287] "Started kubelet"
Sep 4 23:46:18.128366 kubelet[2316]: I0904 23:46:18.128333 2316 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 4 23:46:18.134031 kubelet[2316]: E0904 23:46:18.132738 2316 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://88.198.151.158:6443/api/v1/namespaces/default/events\": dial tcp 88.198.151.158:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4230-2-2-n-5840999b78.1862391797a61b1e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230-2-2-n-5840999b78,UID:ci-4230-2-2-n-5840999b78,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230-2-2-n-5840999b78,},FirstTimestamp:2025-09-04 23:46:18.119191326 +0000 UTC m=+1.115571564,LastTimestamp:2025-09-04 23:46:18.119191326 +0000 UTC m=+1.115571564,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230-2-2-n-5840999b78,}"
Sep 4 23:46:18.136302 kubelet[2316]: I0904 23:46:18.136224 2316 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Sep 4 23:46:18.138098 kubelet[2316]: I0904 23:46:18.138028 2316 server.go:479] "Adding debug handlers to kubelet server"
Sep 4 23:46:18.138849 kubelet[2316]: I0904 23:46:18.138777 2316 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 4 23:46:18.139199 kubelet[2316]: I0904 23:46:18.139136 2316 volume_manager.go:297] "Starting Kubelet Volume Manager"
Sep 4 23:46:18.139353 kubelet[2316]: I0904 23:46:18.139336 2316 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 4 23:46:18.139512 kubelet[2316]: E0904 23:46:18.139481 2316 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230-2-2-n-5840999b78\" not found"
Sep 4 23:46:18.140020 kubelet[2316]: I0904 23:46:18.139990 2316 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 4 23:46:18.142242 kubelet[2316]: E0904 23:46:18.142208 2316 kubelet.go:1555] "Image garbage collection failed once.
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 4 23:46:18.142603 kubelet[2316]: I0904 23:46:18.142555 2316 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 4 23:46:18.142713 kubelet[2316]: I0904 23:46:18.142661 2316 reconciler.go:26] "Reconciler: start to sync state" Sep 4 23:46:18.143523 kubelet[2316]: I0904 23:46:18.143500 2316 factory.go:221] Registration of the systemd container factory successfully Sep 4 23:46:18.143769 kubelet[2316]: I0904 23:46:18.143746 2316 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 4 23:46:18.144046 kubelet[2316]: E0904 23:46:18.144020 2316 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://88.198.151.158:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-2-2-n-5840999b78?timeout=10s\": dial tcp 88.198.151.158:6443: connect: connection refused" interval="200ms" Sep 4 23:46:18.146195 kubelet[2316]: W0904 23:46:18.146099 2316 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://88.198.151.158:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 88.198.151.158:6443: connect: connection refused Sep 4 23:46:18.146379 kubelet[2316]: E0904 23:46:18.146346 2316 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://88.198.151.158:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 88.198.151.158:6443: connect: connection refused" logger="UnhandledError" Sep 4 23:46:18.147005 kubelet[2316]: I0904 23:46:18.146985 2316 factory.go:221] Registration of the containerd container factory successfully Sep 4 23:46:18.152601 kubelet[2316]: I0904 23:46:18.151433 2316 
kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 4 23:46:18.152722 kubelet[2316]: I0904 23:46:18.152696 2316 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 4 23:46:18.152748 kubelet[2316]: I0904 23:46:18.152723 2316 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 4 23:46:18.152778 kubelet[2316]: I0904 23:46:18.152745 2316 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 4 23:46:18.152778 kubelet[2316]: I0904 23:46:18.152758 2316 kubelet.go:2382] "Starting kubelet main sync loop" Sep 4 23:46:18.152823 kubelet[2316]: E0904 23:46:18.152808 2316 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 4 23:46:18.170118 kubelet[2316]: W0904 23:46:18.170059 2316 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://88.198.151.158:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 88.198.151.158:6443: connect: connection refused Sep 4 23:46:18.170349 kubelet[2316]: E0904 23:46:18.170326 2316 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://88.198.151.158:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 88.198.151.158:6443: connect: connection refused" logger="UnhandledError" Sep 4 23:46:18.175746 kubelet[2316]: I0904 23:46:18.175722 2316 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 4 23:46:18.175902 kubelet[2316]: I0904 23:46:18.175890 2316 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 4 23:46:18.175970 kubelet[2316]: I0904 23:46:18.175961 2316 state_mem.go:36] "Initialized new in-memory state store" Sep 4 23:46:18.178216 kubelet[2316]: I0904 
23:46:18.178182 2316 policy_none.go:49] "None policy: Start" Sep 4 23:46:18.178353 kubelet[2316]: I0904 23:46:18.178341 2316 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 4 23:46:18.178428 kubelet[2316]: I0904 23:46:18.178418 2316 state_mem.go:35] "Initializing new in-memory state store" Sep 4 23:46:18.187353 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 4 23:46:18.205314 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 4 23:46:18.210892 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 4 23:46:18.223041 kubelet[2316]: I0904 23:46:18.222995 2316 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 4 23:46:18.223929 kubelet[2316]: I0904 23:46:18.223401 2316 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 4 23:46:18.223929 kubelet[2316]: I0904 23:46:18.223438 2316 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 4 23:46:18.223929 kubelet[2316]: I0904 23:46:18.223902 2316 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 4 23:46:18.226793 kubelet[2316]: E0904 23:46:18.226696 2316 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 4 23:46:18.226793 kubelet[2316]: E0904 23:46:18.226753 2316 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4230-2-2-n-5840999b78\" not found" Sep 4 23:46:18.270915 systemd[1]: Created slice kubepods-burstable-pod5b89976acd755330372f010fae06f648.slice - libcontainer container kubepods-burstable-pod5b89976acd755330372f010fae06f648.slice. 
Sep 4 23:46:18.287740 kubelet[2316]: E0904 23:46:18.287658 2316 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-2-2-n-5840999b78\" not found" node="ci-4230-2-2-n-5840999b78" Sep 4 23:46:18.289859 systemd[1]: Created slice kubepods-burstable-pod3aa8a97e1bb6f856c7437e81d398f333.slice - libcontainer container kubepods-burstable-pod3aa8a97e1bb6f856c7437e81d398f333.slice. Sep 4 23:46:18.295837 kubelet[2316]: E0904 23:46:18.295632 2316 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-2-2-n-5840999b78\" not found" node="ci-4230-2-2-n-5840999b78" Sep 4 23:46:18.299296 systemd[1]: Created slice kubepods-burstable-podfbad68ff95b6cf0a4d5c8517337146bd.slice - libcontainer container kubepods-burstable-podfbad68ff95b6cf0a4d5c8517337146bd.slice. Sep 4 23:46:18.302287 kubelet[2316]: E0904 23:46:18.302234 2316 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-2-2-n-5840999b78\" not found" node="ci-4230-2-2-n-5840999b78" Sep 4 23:46:18.327329 kubelet[2316]: I0904 23:46:18.327124 2316 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230-2-2-n-5840999b78" Sep 4 23:46:18.329253 kubelet[2316]: E0904 23:46:18.329210 2316 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://88.198.151.158:6443/api/v1/nodes\": dial tcp 88.198.151.158:6443: connect: connection refused" node="ci-4230-2-2-n-5840999b78" Sep 4 23:46:18.345248 kubelet[2316]: E0904 23:46:18.345168 2316 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://88.198.151.158:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-2-2-n-5840999b78?timeout=10s\": dial tcp 88.198.151.158:6443: connect: connection refused" interval="400ms" Sep 4 23:46:18.444177 kubelet[2316]: I0904 23:46:18.443939 2316 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5b89976acd755330372f010fae06f648-k8s-certs\") pod \"kube-apiserver-ci-4230-2-2-n-5840999b78\" (UID: \"5b89976acd755330372f010fae06f648\") " pod="kube-system/kube-apiserver-ci-4230-2-2-n-5840999b78" Sep 4 23:46:18.444177 kubelet[2316]: I0904 23:46:18.443999 2316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fbad68ff95b6cf0a4d5c8517337146bd-flexvolume-dir\") pod \"kube-controller-manager-ci-4230-2-2-n-5840999b78\" (UID: \"fbad68ff95b6cf0a4d5c8517337146bd\") " pod="kube-system/kube-controller-manager-ci-4230-2-2-n-5840999b78" Sep 4 23:46:18.444177 kubelet[2316]: I0904 23:46:18.444036 2316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fbad68ff95b6cf0a4d5c8517337146bd-kubeconfig\") pod \"kube-controller-manager-ci-4230-2-2-n-5840999b78\" (UID: \"fbad68ff95b6cf0a4d5c8517337146bd\") " pod="kube-system/kube-controller-manager-ci-4230-2-2-n-5840999b78" Sep 4 23:46:18.444177 kubelet[2316]: I0904 23:46:18.444064 2316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3aa8a97e1bb6f856c7437e81d398f333-kubeconfig\") pod \"kube-scheduler-ci-4230-2-2-n-5840999b78\" (UID: \"3aa8a97e1bb6f856c7437e81d398f333\") " pod="kube-system/kube-scheduler-ci-4230-2-2-n-5840999b78" Sep 4 23:46:18.444177 kubelet[2316]: I0904 23:46:18.444091 2316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fbad68ff95b6cf0a4d5c8517337146bd-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230-2-2-n-5840999b78\" (UID: \"fbad68ff95b6cf0a4d5c8517337146bd\") " 
pod="kube-system/kube-controller-manager-ci-4230-2-2-n-5840999b78" Sep 4 23:46:18.444691 kubelet[2316]: I0904 23:46:18.444121 2316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5b89976acd755330372f010fae06f648-ca-certs\") pod \"kube-apiserver-ci-4230-2-2-n-5840999b78\" (UID: \"5b89976acd755330372f010fae06f648\") " pod="kube-system/kube-apiserver-ci-4230-2-2-n-5840999b78" Sep 4 23:46:18.444691 kubelet[2316]: I0904 23:46:18.444164 2316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5b89976acd755330372f010fae06f648-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230-2-2-n-5840999b78\" (UID: \"5b89976acd755330372f010fae06f648\") " pod="kube-system/kube-apiserver-ci-4230-2-2-n-5840999b78" Sep 4 23:46:18.444691 kubelet[2316]: I0904 23:46:18.444194 2316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fbad68ff95b6cf0a4d5c8517337146bd-ca-certs\") pod \"kube-controller-manager-ci-4230-2-2-n-5840999b78\" (UID: \"fbad68ff95b6cf0a4d5c8517337146bd\") " pod="kube-system/kube-controller-manager-ci-4230-2-2-n-5840999b78" Sep 4 23:46:18.444691 kubelet[2316]: I0904 23:46:18.444221 2316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fbad68ff95b6cf0a4d5c8517337146bd-k8s-certs\") pod \"kube-controller-manager-ci-4230-2-2-n-5840999b78\" (UID: \"fbad68ff95b6cf0a4d5c8517337146bd\") " pod="kube-system/kube-controller-manager-ci-4230-2-2-n-5840999b78" Sep 4 23:46:18.532979 kubelet[2316]: I0904 23:46:18.532853 2316 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230-2-2-n-5840999b78" Sep 4 23:46:18.533470 kubelet[2316]: E0904 23:46:18.533354 2316 
kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://88.198.151.158:6443/api/v1/nodes\": dial tcp 88.198.151.158:6443: connect: connection refused" node="ci-4230-2-2-n-5840999b78" Sep 4 23:46:18.591833 containerd[1509]: time="2025-09-04T23:46:18.591608986Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230-2-2-n-5840999b78,Uid:5b89976acd755330372f010fae06f648,Namespace:kube-system,Attempt:0,}" Sep 4 23:46:18.598631 containerd[1509]: time="2025-09-04T23:46:18.598202967Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230-2-2-n-5840999b78,Uid:3aa8a97e1bb6f856c7437e81d398f333,Namespace:kube-system,Attempt:0,}" Sep 4 23:46:18.603646 containerd[1509]: time="2025-09-04T23:46:18.603554377Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230-2-2-n-5840999b78,Uid:fbad68ff95b6cf0a4d5c8517337146bd,Namespace:kube-system,Attempt:0,}" Sep 4 23:46:18.746064 kubelet[2316]: E0904 23:46:18.745940 2316 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://88.198.151.158:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-2-2-n-5840999b78?timeout=10s\": dial tcp 88.198.151.158:6443: connect: connection refused" interval="800ms" Sep 4 23:46:18.936540 kubelet[2316]: I0904 23:46:18.935936 2316 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230-2-2-n-5840999b78" Sep 4 23:46:18.936540 kubelet[2316]: E0904 23:46:18.936332 2316 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://88.198.151.158:6443/api/v1/nodes\": dial tcp 88.198.151.158:6443: connect: connection refused" node="ci-4230-2-2-n-5840999b78" Sep 4 23:46:18.942413 kubelet[2316]: W0904 23:46:18.942129 2316 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get 
"https://88.198.151.158:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-2-2-n-5840999b78&limit=500&resourceVersion=0": dial tcp 88.198.151.158:6443: connect: connection refused Sep 4 23:46:18.942413 kubelet[2316]: E0904 23:46:18.942223 2316 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://88.198.151.158:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-2-2-n-5840999b78&limit=500&resourceVersion=0\": dial tcp 88.198.151.158:6443: connect: connection refused" logger="UnhandledError" Sep 4 23:46:19.179923 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2180160443.mount: Deactivated successfully. Sep 4 23:46:19.190314 containerd[1509]: time="2025-09-04T23:46:19.189862610Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 23:46:19.193967 containerd[1509]: time="2025-09-04T23:46:19.193837605Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 4 23:46:19.196020 containerd[1509]: time="2025-09-04T23:46:19.194857574Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 23:46:19.199099 containerd[1509]: time="2025-09-04T23:46:19.197250075Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 23:46:19.201287 containerd[1509]: time="2025-09-04T23:46:19.200053700Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269193" Sep 4 23:46:19.203250 containerd[1509]: time="2025-09-04T23:46:19.202388001Z" 
level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 23:46:19.206785 containerd[1509]: time="2025-09-04T23:46:19.204640021Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 4 23:46:19.208702 containerd[1509]: time="2025-09-04T23:46:19.208653736Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 23:46:19.209433 containerd[1509]: time="2025-09-04T23:46:19.209401903Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 616.96751ms" Sep 4 23:46:19.214084 containerd[1509]: time="2025-09-04T23:46:19.214012984Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 610.320046ms" Sep 4 23:46:19.221343 containerd[1509]: time="2025-09-04T23:46:19.221014326Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 622.692678ms" Sep 4 23:46:19.277862 kubelet[2316]: W0904 23:46:19.277774 2316 
reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://88.198.151.158:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 88.198.151.158:6443: connect: connection refused Sep 4 23:46:19.277862 kubelet[2316]: E0904 23:46:19.277822 2316 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://88.198.151.158:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 88.198.151.158:6443: connect: connection refused" logger="UnhandledError" Sep 4 23:46:19.358447 containerd[1509]: time="2025-09-04T23:46:19.358301623Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 23:46:19.358447 containerd[1509]: time="2025-09-04T23:46:19.358399263Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 23:46:19.358705 containerd[1509]: time="2025-09-04T23:46:19.358421904Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:46:19.358705 containerd[1509]: time="2025-09-04T23:46:19.358574505Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:46:19.363783 containerd[1509]: time="2025-09-04T23:46:19.363644550Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 23:46:19.363783 containerd[1509]: time="2025-09-04T23:46:19.363713870Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 23:46:19.364186 containerd[1509]: time="2025-09-04T23:46:19.364091234Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:46:19.364320 containerd[1509]: time="2025-09-04T23:46:19.364245835Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 23:46:19.364382 containerd[1509]: time="2025-09-04T23:46:19.364324476Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 23:46:19.364382 containerd[1509]: time="2025-09-04T23:46:19.364345276Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:46:19.364465 containerd[1509]: time="2025-09-04T23:46:19.364440677Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:46:19.365314 containerd[1509]: time="2025-09-04T23:46:19.365260884Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:46:19.392936 systemd[1]: Started cri-containerd-0fb450b13a2004d08d3a30be264ba05466ac0e2a43b005d465b21ce0b177c2ab.scope - libcontainer container 0fb450b13a2004d08d3a30be264ba05466ac0e2a43b005d465b21ce0b177c2ab. Sep 4 23:46:19.412872 systemd[1]: Started cri-containerd-448a2a55c181f84bd1e2cff608383a227d7e96381af91ab388d28f1879188256.scope - libcontainer container 448a2a55c181f84bd1e2cff608383a227d7e96381af91ab388d28f1879188256. Sep 4 23:46:19.422234 systemd[1]: Started cri-containerd-fac981530f57a22f600236b5f9c1912166dbbd32f0788870c098aa24afd751fc.scope - libcontainer container fac981530f57a22f600236b5f9c1912166dbbd32f0788870c098aa24afd751fc. 
Sep 4 23:46:19.444954 kubelet[2316]: W0904 23:46:19.444805 2316 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://88.198.151.158:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 88.198.151.158:6443: connect: connection refused Sep 4 23:46:19.444954 kubelet[2316]: E0904 23:46:19.444883 2316 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://88.198.151.158:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 88.198.151.158:6443: connect: connection refused" logger="UnhandledError" Sep 4 23:46:19.446277 kubelet[2316]: W0904 23:46:19.446091 2316 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://88.198.151.158:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 88.198.151.158:6443: connect: connection refused Sep 4 23:46:19.446384 kubelet[2316]: E0904 23:46:19.446293 2316 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://88.198.151.158:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 88.198.151.158:6443: connect: connection refused" logger="UnhandledError" Sep 4 23:46:19.473720 containerd[1509]: time="2025-09-04T23:46:19.473573244Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230-2-2-n-5840999b78,Uid:fbad68ff95b6cf0a4d5c8517337146bd,Namespace:kube-system,Attempt:0,} returns sandbox id \"0fb450b13a2004d08d3a30be264ba05466ac0e2a43b005d465b21ce0b177c2ab\"" Sep 4 23:46:19.484194 containerd[1509]: time="2025-09-04T23:46:19.483911816Z" level=info msg="CreateContainer within sandbox \"0fb450b13a2004d08d3a30be264ba05466ac0e2a43b005d465b21ce0b177c2ab\" for container 
&ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 4 23:46:19.505093 containerd[1509]: time="2025-09-04T23:46:19.505042483Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230-2-2-n-5840999b78,Uid:5b89976acd755330372f010fae06f648,Namespace:kube-system,Attempt:0,} returns sandbox id \"fac981530f57a22f600236b5f9c1912166dbbd32f0788870c098aa24afd751fc\"" Sep 4 23:46:19.511772 containerd[1509]: time="2025-09-04T23:46:19.511721542Z" level=info msg="CreateContainer within sandbox \"fac981530f57a22f600236b5f9c1912166dbbd32f0788870c098aa24afd751fc\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 4 23:46:19.511918 containerd[1509]: time="2025-09-04T23:46:19.511815423Z" level=info msg="CreateContainer within sandbox \"0fb450b13a2004d08d3a30be264ba05466ac0e2a43b005d465b21ce0b177c2ab\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"85d0bc11df64fe49968372a55c86726b7dce201cb0d610a87a9728466c58bb04\"" Sep 4 23:46:19.513821 containerd[1509]: time="2025-09-04T23:46:19.513774760Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230-2-2-n-5840999b78,Uid:3aa8a97e1bb6f856c7437e81d398f333,Namespace:kube-system,Attempt:0,} returns sandbox id \"448a2a55c181f84bd1e2cff608383a227d7e96381af91ab388d28f1879188256\"" Sep 4 23:46:19.514416 containerd[1509]: time="2025-09-04T23:46:19.514361606Z" level=info msg="StartContainer for \"85d0bc11df64fe49968372a55c86726b7dce201cb0d610a87a9728466c58bb04\"" Sep 4 23:46:19.519434 containerd[1509]: time="2025-09-04T23:46:19.519133128Z" level=info msg="CreateContainer within sandbox \"448a2a55c181f84bd1e2cff608383a227d7e96381af91ab388d28f1879188256\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 4 23:46:19.532908 containerd[1509]: time="2025-09-04T23:46:19.532861049Z" level=info msg="CreateContainer within sandbox \"fac981530f57a22f600236b5f9c1912166dbbd32f0788870c098aa24afd751fc\" for 
&ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"8f0ac07473351e63fea2bf695b1b18c925b90bdbfefb9c419934778ecefc068e\"" Sep 4 23:46:19.534974 containerd[1509]: time="2025-09-04T23:46:19.533698777Z" level=info msg="StartContainer for \"8f0ac07473351e63fea2bf695b1b18c925b90bdbfefb9c419934778ecefc068e\"" Sep 4 23:46:19.538565 containerd[1509]: time="2025-09-04T23:46:19.538509060Z" level=info msg="CreateContainer within sandbox \"448a2a55c181f84bd1e2cff608383a227d7e96381af91ab388d28f1879188256\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"8ad9882d0c4e983b2cf2b7c3f6f8e5e28467cf0aa7f4f67ec31b502d6884caac\"" Sep 4 23:46:19.539348 containerd[1509]: time="2025-09-04T23:46:19.539317427Z" level=info msg="StartContainer for \"8ad9882d0c4e983b2cf2b7c3f6f8e5e28467cf0aa7f4f67ec31b502d6884caac\"" Sep 4 23:46:19.547087 kubelet[2316]: E0904 23:46:19.547029 2316 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://88.198.151.158:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-2-2-n-5840999b78?timeout=10s\": dial tcp 88.198.151.158:6443: connect: connection refused" interval="1.6s" Sep 4 23:46:19.556105 systemd[1]: Started cri-containerd-85d0bc11df64fe49968372a55c86726b7dce201cb0d610a87a9728466c58bb04.scope - libcontainer container 85d0bc11df64fe49968372a55c86726b7dce201cb0d610a87a9728466c58bb04. Sep 4 23:46:19.589288 systemd[1]: Started cri-containerd-8f0ac07473351e63fea2bf695b1b18c925b90bdbfefb9c419934778ecefc068e.scope - libcontainer container 8f0ac07473351e63fea2bf695b1b18c925b90bdbfefb9c419934778ecefc068e. Sep 4 23:46:19.597800 systemd[1]: Started cri-containerd-8ad9882d0c4e983b2cf2b7c3f6f8e5e28467cf0aa7f4f67ec31b502d6884caac.scope - libcontainer container 8ad9882d0c4e983b2cf2b7c3f6f8e5e28467cf0aa7f4f67ec31b502d6884caac. 
Sep 4 23:46:19.623187 containerd[1509]: time="2025-09-04T23:46:19.622997728Z" level=info msg="StartContainer for \"85d0bc11df64fe49968372a55c86726b7dce201cb0d610a87a9728466c58bb04\" returns successfully" Sep 4 23:46:19.661604 containerd[1509]: time="2025-09-04T23:46:19.661142586Z" level=info msg="StartContainer for \"8f0ac07473351e63fea2bf695b1b18c925b90bdbfefb9c419934778ecefc068e\" returns successfully" Sep 4 23:46:19.667364 containerd[1509]: time="2025-09-04T23:46:19.667221080Z" level=info msg="StartContainer for \"8ad9882d0c4e983b2cf2b7c3f6f8e5e28467cf0aa7f4f67ec31b502d6884caac\" returns successfully" Sep 4 23:46:19.741070 kubelet[2316]: I0904 23:46:19.740273 2316 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230-2-2-n-5840999b78" Sep 4 23:46:19.741070 kubelet[2316]: E0904 23:46:19.741014 2316 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://88.198.151.158:6443/api/v1/nodes\": dial tcp 88.198.151.158:6443: connect: connection refused" node="ci-4230-2-2-n-5840999b78" Sep 4 23:46:20.185023 kubelet[2316]: E0904 23:46:20.184749 2316 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-2-2-n-5840999b78\" not found" node="ci-4230-2-2-n-5840999b78" Sep 4 23:46:20.188001 kubelet[2316]: E0904 23:46:20.187824 2316 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-2-2-n-5840999b78\" not found" node="ci-4230-2-2-n-5840999b78" Sep 4 23:46:20.188796 kubelet[2316]: E0904 23:46:20.188771 2316 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-2-2-n-5840999b78\" not found" node="ci-4230-2-2-n-5840999b78" Sep 4 23:46:21.193724 kubelet[2316]: E0904 23:46:21.193495 2316 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-2-2-n-5840999b78\" not found" 
node="ci-4230-2-2-n-5840999b78" Sep 4 23:46:21.194912 kubelet[2316]: E0904 23:46:21.194728 2316 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-2-2-n-5840999b78\" not found" node="ci-4230-2-2-n-5840999b78" Sep 4 23:46:21.345650 kubelet[2316]: I0904 23:46:21.343542 2316 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230-2-2-n-5840999b78" Sep 4 23:46:21.901297 kubelet[2316]: E0904 23:46:21.901248 2316 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4230-2-2-n-5840999b78\" not found" node="ci-4230-2-2-n-5840999b78" Sep 4 23:46:22.014229 kubelet[2316]: I0904 23:46:22.013948 2316 kubelet_node_status.go:78] "Successfully registered node" node="ci-4230-2-2-n-5840999b78" Sep 4 23:46:22.041636 kubelet[2316]: I0904 23:46:22.040860 2316 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230-2-2-n-5840999b78" Sep 4 23:46:22.059447 kubelet[2316]: E0904 23:46:22.059407 2316 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4230-2-2-n-5840999b78\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4230-2-2-n-5840999b78" Sep 4 23:46:22.059870 kubelet[2316]: I0904 23:46:22.059645 2316 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4230-2-2-n-5840999b78" Sep 4 23:46:22.067105 kubelet[2316]: E0904 23:46:22.067061 2316 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4230-2-2-n-5840999b78\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4230-2-2-n-5840999b78" Sep 4 23:46:22.067105 kubelet[2316]: I0904 23:46:22.067098 2316 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230-2-2-n-5840999b78" Sep 4 23:46:22.078076 
kubelet[2316]: E0904 23:46:22.078018 2316 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4230-2-2-n-5840999b78\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4230-2-2-n-5840999b78" Sep 4 23:46:22.118365 kubelet[2316]: I0904 23:46:22.118100 2316 apiserver.go:52] "Watching apiserver" Sep 4 23:46:22.142863 kubelet[2316]: I0904 23:46:22.142822 2316 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 4 23:46:22.190754 kubelet[2316]: I0904 23:46:22.190537 2316 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230-2-2-n-5840999b78" Sep 4 23:46:22.195613 kubelet[2316]: E0904 23:46:22.195390 2316 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4230-2-2-n-5840999b78\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4230-2-2-n-5840999b78" Sep 4 23:46:23.775068 kubelet[2316]: I0904 23:46:23.775014 2316 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4230-2-2-n-5840999b78" Sep 4 23:46:23.952937 systemd[1]: Reload requested from client PID 2590 ('systemctl') (unit session-7.scope)... Sep 4 23:46:23.952967 systemd[1]: Reloading... Sep 4 23:46:24.081843 zram_generator::config[2647]: No configuration found. Sep 4 23:46:24.173574 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 23:46:24.282989 systemd[1]: Reloading finished in 329 ms. Sep 4 23:46:24.307515 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 23:46:24.320520 systemd[1]: kubelet.service: Deactivated successfully. Sep 4 23:46:24.321213 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 4 23:46:24.321460 systemd[1]: kubelet.service: Consumed 1.554s CPU time, 127.6M memory peak. Sep 4 23:46:24.334263 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 23:46:24.481828 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 23:46:24.482947 (kubelet)[2680]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 4 23:46:24.542825 kubelet[2680]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 23:46:24.543614 kubelet[2680]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 4 23:46:24.543614 kubelet[2680]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 23:46:24.543770 kubelet[2680]: I0904 23:46:24.543555 2680 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 4 23:46:24.557478 kubelet[2680]: I0904 23:46:24.557008 2680 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 4 23:46:24.558112 kubelet[2680]: I0904 23:46:24.557044 2680 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 4 23:46:24.558815 kubelet[2680]: I0904 23:46:24.558779 2680 server.go:954] "Client rotation is on, will bootstrap in background" Sep 4 23:46:24.561924 kubelet[2680]: I0904 23:46:24.561859 2680 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Sep 4 23:46:24.564534 kubelet[2680]: I0904 23:46:24.564472 2680 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 4 23:46:24.569486 kubelet[2680]: E0904 23:46:24.569448 2680 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 4 23:46:24.569486 kubelet[2680]: I0904 23:46:24.569482 2680 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 4 23:46:24.574626 kubelet[2680]: I0904 23:46:24.573755 2680 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 4 23:46:24.574626 kubelet[2680]: I0904 23:46:24.574008 2680 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 4 23:46:24.574626 kubelet[2680]: I0904 23:46:24.574050 2680 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"ci-4230-2-2-n-5840999b78","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 4 23:46:24.574626 kubelet[2680]: I0904 23:46:24.574383 2680 topology_manager.go:138] "Creating topology manager with none policy" Sep 4 23:46:24.574924 kubelet[2680]: I0904 23:46:24.574394 2680 container_manager_linux.go:304] "Creating device plugin manager" Sep 4 23:46:24.574924 kubelet[2680]: I0904 23:46:24.574456 2680 state_mem.go:36] "Initialized new in-memory state store" Sep 4 23:46:24.574924 kubelet[2680]: I0904 23:46:24.574641 2680 kubelet.go:446] 
"Attempting to sync node with API server" Sep 4 23:46:24.574924 kubelet[2680]: I0904 23:46:24.574654 2680 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 4 23:46:24.574924 kubelet[2680]: I0904 23:46:24.574678 2680 kubelet.go:352] "Adding apiserver pod source" Sep 4 23:46:24.574924 kubelet[2680]: I0904 23:46:24.574699 2680 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 4 23:46:24.580294 kubelet[2680]: I0904 23:46:24.580251 2680 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Sep 4 23:46:24.580802 kubelet[2680]: I0904 23:46:24.580773 2680 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 4 23:46:24.581352 kubelet[2680]: I0904 23:46:24.581317 2680 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 4 23:46:24.581407 kubelet[2680]: I0904 23:46:24.581363 2680 server.go:1287] "Started kubelet" Sep 4 23:46:24.583229 kubelet[2680]: I0904 23:46:24.583165 2680 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 4 23:46:24.583642 kubelet[2680]: I0904 23:46:24.583622 2680 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 4 23:46:24.584096 kubelet[2680]: I0904 23:46:24.584058 2680 server.go:479] "Adding debug handlers to kubelet server" Sep 4 23:46:24.589145 kubelet[2680]: I0904 23:46:24.589068 2680 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 4 23:46:24.589377 kubelet[2680]: I0904 23:46:24.589346 2680 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 4 23:46:24.590094 kubelet[2680]: I0904 23:46:24.590062 2680 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 4 23:46:24.592081 kubelet[2680]: I0904 23:46:24.592056 2680 
volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 4 23:46:24.592946 kubelet[2680]: E0904 23:46:24.592464 2680 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230-2-2-n-5840999b78\" not found" Sep 4 23:46:24.594944 kubelet[2680]: I0904 23:46:24.594914 2680 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 4 23:46:24.595252 kubelet[2680]: I0904 23:46:24.595235 2680 reconciler.go:26] "Reconciler: start to sync state" Sep 4 23:46:24.597621 kubelet[2680]: I0904 23:46:24.597294 2680 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 4 23:46:24.602781 kubelet[2680]: I0904 23:46:24.602746 2680 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 4 23:46:24.604169 kubelet[2680]: I0904 23:46:24.602910 2680 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 4 23:46:24.604169 kubelet[2680]: I0904 23:46:24.602936 2680 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Sep 4 23:46:24.604169 kubelet[2680]: I0904 23:46:24.602943 2680 kubelet.go:2382] "Starting kubelet main sync loop" Sep 4 23:46:24.604169 kubelet[2680]: E0904 23:46:24.602993 2680 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 4 23:46:24.616477 kubelet[2680]: I0904 23:46:24.616428 2680 factory.go:221] Registration of the systemd container factory successfully Sep 4 23:46:24.616645 kubelet[2680]: I0904 23:46:24.616563 2680 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 4 23:46:24.633460 kubelet[2680]: I0904 23:46:24.633233 2680 factory.go:221] Registration of the containerd container factory successfully Sep 4 23:46:24.637766 kubelet[2680]: E0904 23:46:24.637736 2680 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 4 23:46:24.697395 kubelet[2680]: I0904 23:46:24.697333 2680 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 4 23:46:24.697395 kubelet[2680]: I0904 23:46:24.697363 2680 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 4 23:46:24.697395 kubelet[2680]: I0904 23:46:24.697386 2680 state_mem.go:36] "Initialized new in-memory state store" Sep 4 23:46:24.697714 kubelet[2680]: I0904 23:46:24.697570 2680 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 4 23:46:24.697714 kubelet[2680]: I0904 23:46:24.697620 2680 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 4 23:46:24.697714 kubelet[2680]: I0904 23:46:24.697641 2680 policy_none.go:49] "None policy: Start" Sep 4 23:46:24.697714 kubelet[2680]: I0904 23:46:24.697649 2680 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 4 23:46:24.697714 kubelet[2680]: I0904 23:46:24.697659 2680 
state_mem.go:35] "Initializing new in-memory state store" Sep 4 23:46:24.697925 kubelet[2680]: I0904 23:46:24.697755 2680 state_mem.go:75] "Updated machine memory state" Sep 4 23:46:24.702765 kubelet[2680]: I0904 23:46:24.702720 2680 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 4 23:46:24.702958 kubelet[2680]: I0904 23:46:24.702916 2680 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 4 23:46:24.702958 kubelet[2680]: I0904 23:46:24.702936 2680 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 4 23:46:24.703425 kubelet[2680]: I0904 23:46:24.703386 2680 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 4 23:46:24.704494 kubelet[2680]: I0904 23:46:24.704454 2680 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230-2-2-n-5840999b78" Sep 4 23:46:24.706416 kubelet[2680]: I0904 23:46:24.705053 2680 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4230-2-2-n-5840999b78" Sep 4 23:46:24.706567 kubelet[2680]: I0904 23:46:24.706495 2680 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230-2-2-n-5840999b78" Sep 4 23:46:24.709975 kubelet[2680]: E0904 23:46:24.709933 2680 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 4 23:46:24.724485 kubelet[2680]: E0904 23:46:24.724386 2680 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4230-2-2-n-5840999b78\" already exists" pod="kube-system/kube-controller-manager-ci-4230-2-2-n-5840999b78" Sep 4 23:46:24.797282 kubelet[2680]: I0904 23:46:24.796499 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5b89976acd755330372f010fae06f648-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230-2-2-n-5840999b78\" (UID: \"5b89976acd755330372f010fae06f648\") " pod="kube-system/kube-apiserver-ci-4230-2-2-n-5840999b78" Sep 4 23:46:24.797282 kubelet[2680]: I0904 23:46:24.796610 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fbad68ff95b6cf0a4d5c8517337146bd-ca-certs\") pod \"kube-controller-manager-ci-4230-2-2-n-5840999b78\" (UID: \"fbad68ff95b6cf0a4d5c8517337146bd\") " pod="kube-system/kube-controller-manager-ci-4230-2-2-n-5840999b78" Sep 4 23:46:24.797282 kubelet[2680]: I0904 23:46:24.796658 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fbad68ff95b6cf0a4d5c8517337146bd-kubeconfig\") pod \"kube-controller-manager-ci-4230-2-2-n-5840999b78\" (UID: \"fbad68ff95b6cf0a4d5c8517337146bd\") " pod="kube-system/kube-controller-manager-ci-4230-2-2-n-5840999b78" Sep 4 23:46:24.797282 kubelet[2680]: I0904 23:46:24.796693 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3aa8a97e1bb6f856c7437e81d398f333-kubeconfig\") pod \"kube-scheduler-ci-4230-2-2-n-5840999b78\" (UID: \"3aa8a97e1bb6f856c7437e81d398f333\") " 
pod="kube-system/kube-scheduler-ci-4230-2-2-n-5840999b78" Sep 4 23:46:24.797282 kubelet[2680]: I0904 23:46:24.796731 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5b89976acd755330372f010fae06f648-ca-certs\") pod \"kube-apiserver-ci-4230-2-2-n-5840999b78\" (UID: \"5b89976acd755330372f010fae06f648\") " pod="kube-system/kube-apiserver-ci-4230-2-2-n-5840999b78" Sep 4 23:46:24.797691 kubelet[2680]: I0904 23:46:24.796762 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5b89976acd755330372f010fae06f648-k8s-certs\") pod \"kube-apiserver-ci-4230-2-2-n-5840999b78\" (UID: \"5b89976acd755330372f010fae06f648\") " pod="kube-system/kube-apiserver-ci-4230-2-2-n-5840999b78" Sep 4 23:46:24.797691 kubelet[2680]: I0904 23:46:24.796796 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fbad68ff95b6cf0a4d5c8517337146bd-flexvolume-dir\") pod \"kube-controller-manager-ci-4230-2-2-n-5840999b78\" (UID: \"fbad68ff95b6cf0a4d5c8517337146bd\") " pod="kube-system/kube-controller-manager-ci-4230-2-2-n-5840999b78" Sep 4 23:46:24.797691 kubelet[2680]: I0904 23:46:24.796829 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fbad68ff95b6cf0a4d5c8517337146bd-k8s-certs\") pod \"kube-controller-manager-ci-4230-2-2-n-5840999b78\" (UID: \"fbad68ff95b6cf0a4d5c8517337146bd\") " pod="kube-system/kube-controller-manager-ci-4230-2-2-n-5840999b78" Sep 4 23:46:24.797691 kubelet[2680]: I0904 23:46:24.796865 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/fbad68ff95b6cf0a4d5c8517337146bd-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230-2-2-n-5840999b78\" (UID: \"fbad68ff95b6cf0a4d5c8517337146bd\") " pod="kube-system/kube-controller-manager-ci-4230-2-2-n-5840999b78" Sep 4 23:46:24.818516 kubelet[2680]: I0904 23:46:24.818210 2680 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230-2-2-n-5840999b78" Sep 4 23:46:24.830079 kubelet[2680]: I0904 23:46:24.829725 2680 kubelet_node_status.go:124] "Node was previously registered" node="ci-4230-2-2-n-5840999b78" Sep 4 23:46:24.830079 kubelet[2680]: I0904 23:46:24.829819 2680 kubelet_node_status.go:78] "Successfully registered node" node="ci-4230-2-2-n-5840999b78" Sep 4 23:46:24.958245 sudo[2714]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 4 23:46:24.958560 sudo[2714]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Sep 4 23:46:25.415075 sudo[2714]: pam_unix(sudo:session): session closed for user root Sep 4 23:46:25.576862 kubelet[2680]: I0904 23:46:25.576785 2680 apiserver.go:52] "Watching apiserver" Sep 4 23:46:25.595861 kubelet[2680]: I0904 23:46:25.595791 2680 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 4 23:46:25.672736 kubelet[2680]: I0904 23:46:25.671103 2680 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230-2-2-n-5840999b78" Sep 4 23:46:25.672736 kubelet[2680]: I0904 23:46:25.671393 2680 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230-2-2-n-5840999b78" Sep 4 23:46:25.688089 kubelet[2680]: E0904 23:46:25.687861 2680 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4230-2-2-n-5840999b78\" already exists" pod="kube-system/kube-scheduler-ci-4230-2-2-n-5840999b78" Sep 4 23:46:25.689873 kubelet[2680]: E0904 23:46:25.689829 2680 kubelet.go:3196] "Failed 
creating a mirror pod" err="pods \"kube-apiserver-ci-4230-2-2-n-5840999b78\" already exists" pod="kube-system/kube-apiserver-ci-4230-2-2-n-5840999b78" Sep 4 23:46:25.702604 kubelet[2680]: I0904 23:46:25.702003 2680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4230-2-2-n-5840999b78" podStartSLOduration=2.701981423 podStartE2EDuration="2.701981423s" podCreationTimestamp="2025-09-04 23:46:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 23:46:25.701955383 +0000 UTC m=+1.211736589" watchObservedRunningTime="2025-09-04 23:46:25.701981423 +0000 UTC m=+1.211762629" Sep 4 23:46:25.730043 kubelet[2680]: I0904 23:46:25.729776 2680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4230-2-2-n-5840999b78" podStartSLOduration=1.729759123 podStartE2EDuration="1.729759123s" podCreationTimestamp="2025-09-04 23:46:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 23:46:25.729165559 +0000 UTC m=+1.238946805" watchObservedRunningTime="2025-09-04 23:46:25.729759123 +0000 UTC m=+1.239540289" Sep 4 23:46:25.730043 kubelet[2680]: I0904 23:46:25.729937 2680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4230-2-2-n-5840999b78" podStartSLOduration=1.729933004 podStartE2EDuration="1.729933004s" podCreationTimestamp="2025-09-04 23:46:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 23:46:25.716851599 +0000 UTC m=+1.226632805" watchObservedRunningTime="2025-09-04 23:46:25.729933004 +0000 UTC m=+1.239714210" Sep 4 23:46:27.639441 sudo[1779]: pam_unix(sudo:session): session closed for user root Sep 4 23:46:27.810381 sshd[1778]: 
Connection closed by 139.178.68.195 port 46182 Sep 4 23:46:27.811118 sshd-session[1776]: pam_unix(sshd:session): session closed for user core Sep 4 23:46:27.816976 systemd[1]: sshd@6-88.198.151.158:22-139.178.68.195:46182.service: Deactivated successfully. Sep 4 23:46:27.822564 systemd[1]: session-7.scope: Deactivated successfully. Sep 4 23:46:27.823260 systemd[1]: session-7.scope: Consumed 7.443s CPU time, 262.3M memory peak. Sep 4 23:46:27.824516 systemd-logind[1483]: Session 7 logged out. Waiting for processes to exit. Sep 4 23:46:27.825529 systemd-logind[1483]: Removed session 7. Sep 4 23:46:31.024604 kubelet[2680]: I0904 23:46:31.024490 2680 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 4 23:46:31.025948 containerd[1509]: time="2025-09-04T23:46:31.025415321Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 4 23:46:31.026335 kubelet[2680]: I0904 23:46:31.025791 2680 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 4 23:46:31.971697 systemd[1]: Created slice kubepods-besteffort-pod44ae2845_e45c_4704_94bf_222ca3f4f587.slice - libcontainer container kubepods-besteffort-pod44ae2845_e45c_4704_94bf_222ca3f4f587.slice. Sep 4 23:46:31.986483 systemd[1]: Created slice kubepods-burstable-pod564d0859_eeb3_48bc_8778_48c331745ed3.slice - libcontainer container kubepods-burstable-pod564d0859_eeb3_48bc_8778_48c331745ed3.slice. 
Sep 4 23:46:32.049441 kubelet[2680]: I0904 23:46:32.048966 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/564d0859-eeb3-48bc-8778-48c331745ed3-cilium-cgroup\") pod \"cilium-h2mdc\" (UID: \"564d0859-eeb3-48bc-8778-48c331745ed3\") " pod="kube-system/cilium-h2mdc" Sep 4 23:46:32.049441 kubelet[2680]: I0904 23:46:32.049026 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/564d0859-eeb3-48bc-8778-48c331745ed3-cilium-run\") pod \"cilium-h2mdc\" (UID: \"564d0859-eeb3-48bc-8778-48c331745ed3\") " pod="kube-system/cilium-h2mdc" Sep 4 23:46:32.049441 kubelet[2680]: I0904 23:46:32.049050 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/564d0859-eeb3-48bc-8778-48c331745ed3-cni-path\") pod \"cilium-h2mdc\" (UID: \"564d0859-eeb3-48bc-8778-48c331745ed3\") " pod="kube-system/cilium-h2mdc" Sep 4 23:46:32.049441 kubelet[2680]: I0904 23:46:32.049074 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/564d0859-eeb3-48bc-8778-48c331745ed3-host-proc-sys-net\") pod \"cilium-h2mdc\" (UID: \"564d0859-eeb3-48bc-8778-48c331745ed3\") " pod="kube-system/cilium-h2mdc" Sep 4 23:46:32.049441 kubelet[2680]: I0904 23:46:32.049093 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/564d0859-eeb3-48bc-8778-48c331745ed3-host-proc-sys-kernel\") pod \"cilium-h2mdc\" (UID: \"564d0859-eeb3-48bc-8778-48c331745ed3\") " pod="kube-system/cilium-h2mdc" Sep 4 23:46:32.049441 kubelet[2680]: I0904 23:46:32.049109 2680 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/44ae2845-e45c-4704-94bf-222ca3f4f587-lib-modules\") pod \"kube-proxy-vn4z7\" (UID: \"44ae2845-e45c-4704-94bf-222ca3f4f587\") " pod="kube-system/kube-proxy-vn4z7" Sep 4 23:46:32.051782 kubelet[2680]: I0904 23:46:32.049143 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/564d0859-eeb3-48bc-8778-48c331745ed3-bpf-maps\") pod \"cilium-h2mdc\" (UID: \"564d0859-eeb3-48bc-8778-48c331745ed3\") " pod="kube-system/cilium-h2mdc" Sep 4 23:46:32.051782 kubelet[2680]: I0904 23:46:32.049164 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/564d0859-eeb3-48bc-8778-48c331745ed3-hostproc\") pod \"cilium-h2mdc\" (UID: \"564d0859-eeb3-48bc-8778-48c331745ed3\") " pod="kube-system/cilium-h2mdc" Sep 4 23:46:32.051782 kubelet[2680]: I0904 23:46:32.049182 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/564d0859-eeb3-48bc-8778-48c331745ed3-xtables-lock\") pod \"cilium-h2mdc\" (UID: \"564d0859-eeb3-48bc-8778-48c331745ed3\") " pod="kube-system/cilium-h2mdc" Sep 4 23:46:32.051782 kubelet[2680]: I0904 23:46:32.049243 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/564d0859-eeb3-48bc-8778-48c331745ed3-cilium-config-path\") pod \"cilium-h2mdc\" (UID: \"564d0859-eeb3-48bc-8778-48c331745ed3\") " pod="kube-system/cilium-h2mdc" Sep 4 23:46:32.051782 kubelet[2680]: I0904 23:46:32.049286 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/564d0859-eeb3-48bc-8778-48c331745ed3-clustermesh-secrets\") pod \"cilium-h2mdc\" (UID: \"564d0859-eeb3-48bc-8778-48c331745ed3\") " pod="kube-system/cilium-h2mdc" Sep 4 23:46:32.051782 kubelet[2680]: I0904 23:46:32.049309 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wptg5\" (UniqueName: \"kubernetes.io/projected/564d0859-eeb3-48bc-8778-48c331745ed3-kube-api-access-wptg5\") pod \"cilium-h2mdc\" (UID: \"564d0859-eeb3-48bc-8778-48c331745ed3\") " pod="kube-system/cilium-h2mdc" Sep 4 23:46:32.051922 kubelet[2680]: I0904 23:46:32.049332 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/44ae2845-e45c-4704-94bf-222ca3f4f587-xtables-lock\") pod \"kube-proxy-vn4z7\" (UID: \"44ae2845-e45c-4704-94bf-222ca3f4f587\") " pod="kube-system/kube-proxy-vn4z7" Sep 4 23:46:32.051922 kubelet[2680]: I0904 23:46:32.049350 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kc6bw\" (UniqueName: \"kubernetes.io/projected/44ae2845-e45c-4704-94bf-222ca3f4f587-kube-api-access-kc6bw\") pod \"kube-proxy-vn4z7\" (UID: \"44ae2845-e45c-4704-94bf-222ca3f4f587\") " pod="kube-system/kube-proxy-vn4z7" Sep 4 23:46:32.051922 kubelet[2680]: I0904 23:46:32.049367 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/564d0859-eeb3-48bc-8778-48c331745ed3-etc-cni-netd\") pod \"cilium-h2mdc\" (UID: \"564d0859-eeb3-48bc-8778-48c331745ed3\") " pod="kube-system/cilium-h2mdc" Sep 4 23:46:32.051922 kubelet[2680]: I0904 23:46:32.049386 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/564d0859-eeb3-48bc-8778-48c331745ed3-lib-modules\") pod 
\"cilium-h2mdc\" (UID: \"564d0859-eeb3-48bc-8778-48c331745ed3\") " pod="kube-system/cilium-h2mdc" Sep 4 23:46:32.051922 kubelet[2680]: I0904 23:46:32.049406 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/564d0859-eeb3-48bc-8778-48c331745ed3-hubble-tls\") pod \"cilium-h2mdc\" (UID: \"564d0859-eeb3-48bc-8778-48c331745ed3\") " pod="kube-system/cilium-h2mdc" Sep 4 23:46:32.051922 kubelet[2680]: I0904 23:46:32.049431 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/44ae2845-e45c-4704-94bf-222ca3f4f587-kube-proxy\") pod \"kube-proxy-vn4z7\" (UID: \"44ae2845-e45c-4704-94bf-222ca3f4f587\") " pod="kube-system/kube-proxy-vn4z7" Sep 4 23:46:32.090652 systemd[1]: Created slice kubepods-besteffort-podcb739dfb_81e5_45f9_8050_43caa1416ac8.slice - libcontainer container kubepods-besteffort-podcb739dfb_81e5_45f9_8050_43caa1416ac8.slice. 
Sep 4 23:46:32.149790 kubelet[2680]: I0904 23:46:32.149672 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cb739dfb-81e5-45f9-8050-43caa1416ac8-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-pt5s8\" (UID: \"cb739dfb-81e5-45f9-8050-43caa1416ac8\") " pod="kube-system/cilium-operator-6c4d7847fc-pt5s8" Sep 4 23:46:32.151806 kubelet[2680]: I0904 23:46:32.150014 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8cc7c\" (UniqueName: \"kubernetes.io/projected/cb739dfb-81e5-45f9-8050-43caa1416ac8-kube-api-access-8cc7c\") pod \"cilium-operator-6c4d7847fc-pt5s8\" (UID: \"cb739dfb-81e5-45f9-8050-43caa1416ac8\") " pod="kube-system/cilium-operator-6c4d7847fc-pt5s8" Sep 4 23:46:32.284141 containerd[1509]: time="2025-09-04T23:46:32.284003229Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vn4z7,Uid:44ae2845-e45c-4704-94bf-222ca3f4f587,Namespace:kube-system,Attempt:0,}" Sep 4 23:46:32.291765 containerd[1509]: time="2025-09-04T23:46:32.291720465Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-h2mdc,Uid:564d0859-eeb3-48bc-8778-48c331745ed3,Namespace:kube-system,Attempt:0,}" Sep 4 23:46:32.322056 containerd[1509]: time="2025-09-04T23:46:32.321928005Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 23:46:32.322255 containerd[1509]: time="2025-09-04T23:46:32.322056646Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 23:46:32.322255 containerd[1509]: time="2025-09-04T23:46:32.322101086Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:46:32.322312 containerd[1509]: time="2025-09-04T23:46:32.322252327Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:46:32.328619 containerd[1509]: time="2025-09-04T23:46:32.328291875Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 23:46:32.328619 containerd[1509]: time="2025-09-04T23:46:32.328374555Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 23:46:32.328619 containerd[1509]: time="2025-09-04T23:46:32.328390795Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:46:32.329677 containerd[1509]: time="2025-09-04T23:46:32.329223639Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:46:32.347926 systemd[1]: Started cri-containerd-577ad24046aa5f0c535b13c50c587874b23815c990775cb92fab103d92951359.scope - libcontainer container 577ad24046aa5f0c535b13c50c587874b23815c990775cb92fab103d92951359. Sep 4 23:46:32.353941 systemd[1]: Started cri-containerd-21543f5fc52b269e28c8054c57d209d712aa5d305090cae815015fa488ad03ae.scope - libcontainer container 21543f5fc52b269e28c8054c57d209d712aa5d305090cae815015fa488ad03ae. 
Sep 4 23:46:32.381879 containerd[1509]: time="2025-09-04T23:46:32.381729964Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vn4z7,Uid:44ae2845-e45c-4704-94bf-222ca3f4f587,Namespace:kube-system,Attempt:0,} returns sandbox id \"577ad24046aa5f0c535b13c50c587874b23815c990775cb92fab103d92951359\"" Sep 4 23:46:32.391007 containerd[1509]: time="2025-09-04T23:46:32.390963567Z" level=info msg="CreateContainer within sandbox \"577ad24046aa5f0c535b13c50c587874b23815c990775cb92fab103d92951359\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 4 23:46:32.395295 containerd[1509]: time="2025-09-04T23:46:32.395236587Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-pt5s8,Uid:cb739dfb-81e5-45f9-8050-43caa1416ac8,Namespace:kube-system,Attempt:0,}" Sep 4 23:46:32.412681 containerd[1509]: time="2025-09-04T23:46:32.412641108Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-h2mdc,Uid:564d0859-eeb3-48bc-8778-48c331745ed3,Namespace:kube-system,Attempt:0,} returns sandbox id \"21543f5fc52b269e28c8054c57d209d712aa5d305090cae815015fa488ad03ae\"" Sep 4 23:46:32.416243 containerd[1509]: time="2025-09-04T23:46:32.416162564Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 4 23:46:32.428613 containerd[1509]: time="2025-09-04T23:46:32.428523222Z" level=info msg="CreateContainer within sandbox \"577ad24046aa5f0c535b13c50c587874b23815c990775cb92fab103d92951359\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"07c64b13544cf0e62e9482c9275aea0587e3060f49b99cd294ed49ceea226d29\"" Sep 4 23:46:32.430858 containerd[1509]: time="2025-09-04T23:46:32.430798072Z" level=info msg="StartContainer for \"07c64b13544cf0e62e9482c9275aea0587e3060f49b99cd294ed49ceea226d29\"" Sep 4 23:46:32.447814 containerd[1509]: time="2025-09-04T23:46:32.446807267Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 23:46:32.448044 containerd[1509]: time="2025-09-04T23:46:32.447846752Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 23:46:32.448044 containerd[1509]: time="2025-09-04T23:46:32.447867112Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:46:32.448424 containerd[1509]: time="2025-09-04T23:46:32.448221753Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:46:32.467794 systemd[1]: Started cri-containerd-07c64b13544cf0e62e9482c9275aea0587e3060f49b99cd294ed49ceea226d29.scope - libcontainer container 07c64b13544cf0e62e9482c9275aea0587e3060f49b99cd294ed49ceea226d29. Sep 4 23:46:32.478001 systemd[1]: Started cri-containerd-7ec083213e6b401c0e1b664f2dd3649da2dbb4a5783e5b183ea1c0ef6116eaae.scope - libcontainer container 7ec083213e6b401c0e1b664f2dd3649da2dbb4a5783e5b183ea1c0ef6116eaae. 
Sep 4 23:46:32.525431 containerd[1509]: time="2025-09-04T23:46:32.525326392Z" level=info msg="StartContainer for \"07c64b13544cf0e62e9482c9275aea0587e3060f49b99cd294ed49ceea226d29\" returns successfully" Sep 4 23:46:32.538402 containerd[1509]: time="2025-09-04T23:46:32.538088252Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-pt5s8,Uid:cb739dfb-81e5-45f9-8050-43caa1416ac8,Namespace:kube-system,Attempt:0,} returns sandbox id \"7ec083213e6b401c0e1b664f2dd3649da2dbb4a5783e5b183ea1c0ef6116eaae\"" Sep 4 23:46:32.731548 kubelet[2680]: I0904 23:46:32.730390 2680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-vn4z7" podStartSLOduration=1.730339747 podStartE2EDuration="1.730339747s" podCreationTimestamp="2025-09-04 23:46:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 23:46:32.730283066 +0000 UTC m=+8.240064272" watchObservedRunningTime="2025-09-04 23:46:32.730339747 +0000 UTC m=+8.240120953" Sep 4 23:46:36.320106 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2401437677.mount: Deactivated successfully. 
Sep 4 23:46:37.796137 containerd[1509]: time="2025-09-04T23:46:37.796050751Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:46:37.797807 containerd[1509]: time="2025-09-04T23:46:37.797731797Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Sep 4 23:46:37.798955 containerd[1509]: time="2025-09-04T23:46:37.798869241Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:46:37.801092 containerd[1509]: time="2025-09-04T23:46:37.800912009Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 5.384684685s" Sep 4 23:46:37.801092 containerd[1509]: time="2025-09-04T23:46:37.800964449Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Sep 4 23:46:37.803753 containerd[1509]: time="2025-09-04T23:46:37.803484299Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 4 23:46:37.804737 containerd[1509]: time="2025-09-04T23:46:37.804693663Z" level=info msg="CreateContainer within sandbox \"21543f5fc52b269e28c8054c57d209d712aa5d305090cae815015fa488ad03ae\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 4 23:46:37.827314 containerd[1509]: time="2025-09-04T23:46:37.827248988Z" level=info msg="CreateContainer within sandbox \"21543f5fc52b269e28c8054c57d209d712aa5d305090cae815015fa488ad03ae\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"b19fff5d28e01eb94cca5217013ff184a2933ac0c4d8e3966aae0ab4d15cd87f\"" Sep 4 23:46:37.828292 containerd[1509]: time="2025-09-04T23:46:37.828239072Z" level=info msg="StartContainer for \"b19fff5d28e01eb94cca5217013ff184a2933ac0c4d8e3966aae0ab4d15cd87f\"" Sep 4 23:46:37.861574 systemd[1]: run-containerd-runc-k8s.io-b19fff5d28e01eb94cca5217013ff184a2933ac0c4d8e3966aae0ab4d15cd87f-runc.LgM30p.mount: Deactivated successfully. Sep 4 23:46:37.868864 systemd[1]: Started cri-containerd-b19fff5d28e01eb94cca5217013ff184a2933ac0c4d8e3966aae0ab4d15cd87f.scope - libcontainer container b19fff5d28e01eb94cca5217013ff184a2933ac0c4d8e3966aae0ab4d15cd87f. Sep 4 23:46:37.905968 containerd[1509]: time="2025-09-04T23:46:37.905920805Z" level=info msg="StartContainer for \"b19fff5d28e01eb94cca5217013ff184a2933ac0c4d8e3966aae0ab4d15cd87f\" returns successfully" Sep 4 23:46:37.921879 systemd[1]: cri-containerd-b19fff5d28e01eb94cca5217013ff184a2933ac0c4d8e3966aae0ab4d15cd87f.scope: Deactivated successfully. 
Sep 4 23:46:38.090970 containerd[1509]: time="2025-09-04T23:46:38.090663889Z" level=info msg="shim disconnected" id=b19fff5d28e01eb94cca5217013ff184a2933ac0c4d8e3966aae0ab4d15cd87f namespace=k8s.io Sep 4 23:46:38.090970 containerd[1509]: time="2025-09-04T23:46:38.090854530Z" level=warning msg="cleaning up after shim disconnected" id=b19fff5d28e01eb94cca5217013ff184a2933ac0c4d8e3966aae0ab4d15cd87f namespace=k8s.io Sep 4 23:46:38.090970 containerd[1509]: time="2025-09-04T23:46:38.090869450Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 23:46:38.714304 containerd[1509]: time="2025-09-04T23:46:38.714116951Z" level=info msg="CreateContainer within sandbox \"21543f5fc52b269e28c8054c57d209d712aa5d305090cae815015fa488ad03ae\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 4 23:46:38.735606 containerd[1509]: time="2025-09-04T23:46:38.735511469Z" level=info msg="CreateContainer within sandbox \"21543f5fc52b269e28c8054c57d209d712aa5d305090cae815015fa488ad03ae\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"ae762a34da0942730066cc4cc54dce0264279b91895862d985a77548fc87f1e3\"" Sep 4 23:46:38.736936 containerd[1509]: time="2025-09-04T23:46:38.736882674Z" level=info msg="StartContainer for \"ae762a34da0942730066cc4cc54dce0264279b91895862d985a77548fc87f1e3\"" Sep 4 23:46:38.772854 systemd[1]: Started cri-containerd-ae762a34da0942730066cc4cc54dce0264279b91895862d985a77548fc87f1e3.scope - libcontainer container ae762a34da0942730066cc4cc54dce0264279b91895862d985a77548fc87f1e3. Sep 4 23:46:38.803929 containerd[1509]: time="2025-09-04T23:46:38.803783556Z" level=info msg="StartContainer for \"ae762a34da0942730066cc4cc54dce0264279b91895862d985a77548fc87f1e3\" returns successfully" Sep 4 23:46:38.815816 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b19fff5d28e01eb94cca5217013ff184a2933ac0c4d8e3966aae0ab4d15cd87f-rootfs.mount: Deactivated successfully. 
Sep 4 23:46:38.824343 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 4 23:46:38.824618 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 4 23:46:38.825563 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Sep 4 23:46:38.834320 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 4 23:46:38.836835 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 4 23:46:38.837425 systemd[1]: cri-containerd-ae762a34da0942730066cc4cc54dce0264279b91895862d985a77548fc87f1e3.scope: Deactivated successfully. Sep 4 23:46:38.864880 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ae762a34da0942730066cc4cc54dce0264279b91895862d985a77548fc87f1e3-rootfs.mount: Deactivated successfully. Sep 4 23:46:38.866926 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 4 23:46:38.878956 containerd[1509]: time="2025-09-04T23:46:38.878715948Z" level=info msg="shim disconnected" id=ae762a34da0942730066cc4cc54dce0264279b91895862d985a77548fc87f1e3 namespace=k8s.io Sep 4 23:46:38.878956 containerd[1509]: time="2025-09-04T23:46:38.878778469Z" level=warning msg="cleaning up after shim disconnected" id=ae762a34da0942730066cc4cc54dce0264279b91895862d985a77548fc87f1e3 namespace=k8s.io Sep 4 23:46:38.878956 containerd[1509]: time="2025-09-04T23:46:38.878786909Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 23:46:39.728991 containerd[1509]: time="2025-09-04T23:46:39.728659613Z" level=info msg="CreateContainer within sandbox \"21543f5fc52b269e28c8054c57d209d712aa5d305090cae815015fa488ad03ae\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 4 23:46:39.752394 containerd[1509]: time="2025-09-04T23:46:39.752338696Z" level=info msg="CreateContainer within sandbox \"21543f5fc52b269e28c8054c57d209d712aa5d305090cae815015fa488ad03ae\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id 
\"d7d3eee790e7a815355adc95a8452bbc9ee15cf0d8f55202cbc1b0605ee74ced\"" Sep 4 23:46:39.754018 containerd[1509]: time="2025-09-04T23:46:39.753695581Z" level=info msg="StartContainer for \"d7d3eee790e7a815355adc95a8452bbc9ee15cf0d8f55202cbc1b0605ee74ced\"" Sep 4 23:46:39.789812 systemd[1]: Started cri-containerd-d7d3eee790e7a815355adc95a8452bbc9ee15cf0d8f55202cbc1b0605ee74ced.scope - libcontainer container d7d3eee790e7a815355adc95a8452bbc9ee15cf0d8f55202cbc1b0605ee74ced. Sep 4 23:46:39.818119 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2336899803.mount: Deactivated successfully. Sep 4 23:46:39.842366 containerd[1509]: time="2025-09-04T23:46:39.841928969Z" level=info msg="StartContainer for \"d7d3eee790e7a815355adc95a8452bbc9ee15cf0d8f55202cbc1b0605ee74ced\" returns successfully" Sep 4 23:46:39.843275 systemd[1]: cri-containerd-d7d3eee790e7a815355adc95a8452bbc9ee15cf0d8f55202cbc1b0605ee74ced.scope: Deactivated successfully. Sep 4 23:46:39.888981 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d7d3eee790e7a815355adc95a8452bbc9ee15cf0d8f55202cbc1b0605ee74ced-rootfs.mount: Deactivated successfully. 
Sep 4 23:46:39.911633 containerd[1509]: time="2025-09-04T23:46:39.911551812Z" level=info msg="shim disconnected" id=d7d3eee790e7a815355adc95a8452bbc9ee15cf0d8f55202cbc1b0605ee74ced namespace=k8s.io Sep 4 23:46:39.911633 containerd[1509]: time="2025-09-04T23:46:39.911629132Z" level=warning msg="cleaning up after shim disconnected" id=d7d3eee790e7a815355adc95a8452bbc9ee15cf0d8f55202cbc1b0605ee74ced namespace=k8s.io Sep 4 23:46:39.911633 containerd[1509]: time="2025-09-04T23:46:39.911640412Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 23:46:40.065118 containerd[1509]: time="2025-09-04T23:46:40.064960660Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:46:40.067052 containerd[1509]: time="2025-09-04T23:46:40.065888863Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Sep 4 23:46:40.067944 containerd[1509]: time="2025-09-04T23:46:40.067804509Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:46:40.071251 containerd[1509]: time="2025-09-04T23:46:40.071177801Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 2.267646902s" Sep 4 23:46:40.071477 containerd[1509]: time="2025-09-04T23:46:40.071443482Z" level=info msg="PullImage 
\"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Sep 4 23:46:40.075094 containerd[1509]: time="2025-09-04T23:46:40.075037654Z" level=info msg="CreateContainer within sandbox \"7ec083213e6b401c0e1b664f2dd3649da2dbb4a5783e5b183ea1c0ef6116eaae\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 4 23:46:40.091396 containerd[1509]: time="2025-09-04T23:46:40.091322028Z" level=info msg="CreateContainer within sandbox \"7ec083213e6b401c0e1b664f2dd3649da2dbb4a5783e5b183ea1c0ef6116eaae\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"08e8e930cbe3b1a4070287eaddf008c6ffcecab842bed600474576a892214141\"" Sep 4 23:46:40.093300 containerd[1509]: time="2025-09-04T23:46:40.092886394Z" level=info msg="StartContainer for \"08e8e930cbe3b1a4070287eaddf008c6ffcecab842bed600474576a892214141\"" Sep 4 23:46:40.132130 systemd[1]: Started cri-containerd-08e8e930cbe3b1a4070287eaddf008c6ffcecab842bed600474576a892214141.scope - libcontainer container 08e8e930cbe3b1a4070287eaddf008c6ffcecab842bed600474576a892214141. 
Sep 4 23:46:40.164806 containerd[1509]: time="2025-09-04T23:46:40.164332794Z" level=info msg="StartContainer for \"08e8e930cbe3b1a4070287eaddf008c6ffcecab842bed600474576a892214141\" returns successfully" Sep 4 23:46:40.740842 containerd[1509]: time="2025-09-04T23:46:40.740800214Z" level=info msg="CreateContainer within sandbox \"21543f5fc52b269e28c8054c57d209d712aa5d305090cae815015fa488ad03ae\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 4 23:46:40.761905 containerd[1509]: time="2025-09-04T23:46:40.761743844Z" level=info msg="CreateContainer within sandbox \"21543f5fc52b269e28c8054c57d209d712aa5d305090cae815015fa488ad03ae\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"9dce7e03d606277e3dace0904c3c4e0b616c73d3921fc82b2326d6529555e4c2\"" Sep 4 23:46:40.763398 containerd[1509]: time="2025-09-04T23:46:40.762378966Z" level=info msg="StartContainer for \"9dce7e03d606277e3dace0904c3c4e0b616c73d3921fc82b2326d6529555e4c2\"" Sep 4 23:46:40.804812 systemd[1]: Started cri-containerd-9dce7e03d606277e3dace0904c3c4e0b616c73d3921fc82b2326d6529555e4c2.scope - libcontainer container 9dce7e03d606277e3dace0904c3c4e0b616c73d3921fc82b2326d6529555e4c2. Sep 4 23:46:40.888388 containerd[1509]: time="2025-09-04T23:46:40.888106030Z" level=info msg="StartContainer for \"9dce7e03d606277e3dace0904c3c4e0b616c73d3921fc82b2326d6529555e4c2\" returns successfully" Sep 4 23:46:40.895806 systemd[1]: cri-containerd-9dce7e03d606277e3dace0904c3c4e0b616c73d3921fc82b2326d6529555e4c2.scope: Deactivated successfully. Sep 4 23:46:40.928188 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9dce7e03d606277e3dace0904c3c4e0b616c73d3921fc82b2326d6529555e4c2-rootfs.mount: Deactivated successfully. 
Sep 4 23:46:40.948909 containerd[1509]: time="2025-09-04T23:46:40.948720594Z" level=info msg="shim disconnected" id=9dce7e03d606277e3dace0904c3c4e0b616c73d3921fc82b2326d6529555e4c2 namespace=k8s.io Sep 4 23:46:40.948909 containerd[1509]: time="2025-09-04T23:46:40.948805474Z" level=warning msg="cleaning up after shim disconnected" id=9dce7e03d606277e3dace0904c3c4e0b616c73d3921fc82b2326d6529555e4c2 namespace=k8s.io Sep 4 23:46:40.948909 containerd[1509]: time="2025-09-04T23:46:40.948814274Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 23:46:40.978957 kubelet[2680]: I0904 23:46:40.978713 2680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-pt5s8" podStartSLOduration=1.44645343 podStartE2EDuration="8.978692294s" podCreationTimestamp="2025-09-04 23:46:32 +0000 UTC" firstStartedPulling="2025-09-04 23:46:32.540187181 +0000 UTC m=+8.049968387" lastFinishedPulling="2025-09-04 23:46:40.072426045 +0000 UTC m=+15.582207251" observedRunningTime="2025-09-04 23:46:40.86747824 +0000 UTC m=+16.377259486" watchObservedRunningTime="2025-09-04 23:46:40.978692294 +0000 UTC m=+16.488473500" Sep 4 23:46:41.744745 containerd[1509]: time="2025-09-04T23:46:41.744468742Z" level=info msg="CreateContainer within sandbox \"21543f5fc52b269e28c8054c57d209d712aa5d305090cae815015fa488ad03ae\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 4 23:46:41.765388 containerd[1509]: time="2025-09-04T23:46:41.765323770Z" level=info msg="CreateContainer within sandbox \"21543f5fc52b269e28c8054c57d209d712aa5d305090cae815015fa488ad03ae\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"82856abc69cf21f28b6ef74fa5a94a021cfef58f8e722347fdb00e4e123d7244\"" Sep 4 23:46:41.766463 containerd[1509]: time="2025-09-04T23:46:41.766144893Z" level=info msg="StartContainer for \"82856abc69cf21f28b6ef74fa5a94a021cfef58f8e722347fdb00e4e123d7244\"" Sep 4 23:46:41.810860 systemd[1]: Started 
cri-containerd-82856abc69cf21f28b6ef74fa5a94a021cfef58f8e722347fdb00e4e123d7244.scope - libcontainer container 82856abc69cf21f28b6ef74fa5a94a021cfef58f8e722347fdb00e4e123d7244. Sep 4 23:46:41.857933 containerd[1509]: time="2025-09-04T23:46:41.855338542Z" level=info msg="StartContainer for \"82856abc69cf21f28b6ef74fa5a94a021cfef58f8e722347fdb00e4e123d7244\" returns successfully" Sep 4 23:46:41.989611 kubelet[2680]: I0904 23:46:41.988900 2680 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 4 23:46:42.044060 systemd[1]: Created slice kubepods-burstable-pod94a1545f_2345_4dbe_9104_2ad39d9c7468.slice - libcontainer container kubepods-burstable-pod94a1545f_2345_4dbe_9104_2ad39d9c7468.slice. Sep 4 23:46:42.053061 systemd[1]: Created slice kubepods-burstable-pod32c89e80_f677_4521_8374_3e6fd90cd239.slice - libcontainer container kubepods-burstable-pod32c89e80_f677_4521_8374_3e6fd90cd239.slice. Sep 4 23:46:42.124953 kubelet[2680]: I0904 23:46:42.124905 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgfqh\" (UniqueName: \"kubernetes.io/projected/94a1545f-2345-4dbe-9104-2ad39d9c7468-kube-api-access-jgfqh\") pod \"coredns-668d6bf9bc-f9stn\" (UID: \"94a1545f-2345-4dbe-9104-2ad39d9c7468\") " pod="kube-system/coredns-668d6bf9bc-f9stn" Sep 4 23:46:42.124953 kubelet[2680]: I0904 23:46:42.124958 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/32c89e80-f677-4521-8374-3e6fd90cd239-config-volume\") pod \"coredns-668d6bf9bc-bnt7c\" (UID: \"32c89e80-f677-4521-8374-3e6fd90cd239\") " pod="kube-system/coredns-668d6bf9bc-bnt7c" Sep 4 23:46:42.125140 kubelet[2680]: I0904 23:46:42.124989 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/94a1545f-2345-4dbe-9104-2ad39d9c7468-config-volume\") pod \"coredns-668d6bf9bc-f9stn\" (UID: \"94a1545f-2345-4dbe-9104-2ad39d9c7468\") " pod="kube-system/coredns-668d6bf9bc-f9stn" Sep 4 23:46:42.125140 kubelet[2680]: I0904 23:46:42.125021 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfwb2\" (UniqueName: \"kubernetes.io/projected/32c89e80-f677-4521-8374-3e6fd90cd239-kube-api-access-hfwb2\") pod \"coredns-668d6bf9bc-bnt7c\" (UID: \"32c89e80-f677-4521-8374-3e6fd90cd239\") " pod="kube-system/coredns-668d6bf9bc-bnt7c" Sep 4 23:46:42.351053 containerd[1509]: time="2025-09-04T23:46:42.350926232Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-f9stn,Uid:94a1545f-2345-4dbe-9104-2ad39d9c7468,Namespace:kube-system,Attempt:0,}" Sep 4 23:46:42.362263 containerd[1509]: time="2025-09-04T23:46:42.361512825Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-bnt7c,Uid:32c89e80-f677-4521-8374-3e6fd90cd239,Namespace:kube-system,Attempt:0,}" Sep 4 23:46:42.768267 kubelet[2680]: I0904 23:46:42.767956 2680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-h2mdc" podStartSLOduration=6.380323281 podStartE2EDuration="11.767877338s" podCreationTimestamp="2025-09-04 23:46:31 +0000 UTC" firstStartedPulling="2025-09-04 23:46:32.414699957 +0000 UTC m=+7.924481163" lastFinishedPulling="2025-09-04 23:46:37.802253974 +0000 UTC m=+13.312035220" observedRunningTime="2025-09-04 23:46:42.767460977 +0000 UTC m=+18.277242223" watchObservedRunningTime="2025-09-04 23:46:42.767877338 +0000 UTC m=+18.277658504" Sep 4 23:46:44.134400 systemd-networkd[1394]: cilium_host: Link UP Sep 4 23:46:44.134540 systemd-networkd[1394]: cilium_net: Link UP Sep 4 23:46:44.136730 systemd-networkd[1394]: cilium_net: Gained carrier Sep 4 23:46:44.137833 systemd-networkd[1394]: cilium_host: Gained carrier Sep 4 23:46:44.138468 
systemd-networkd[1394]: cilium_net: Gained IPv6LL Sep 4 23:46:44.140297 systemd-networkd[1394]: cilium_host: Gained IPv6LL Sep 4 23:46:44.256362 systemd-networkd[1394]: cilium_vxlan: Link UP Sep 4 23:46:44.256370 systemd-networkd[1394]: cilium_vxlan: Gained carrier Sep 4 23:46:44.546145 kernel: NET: Registered PF_ALG protocol family Sep 4 23:46:45.287919 systemd-networkd[1394]: lxc_health: Link UP Sep 4 23:46:45.299845 systemd-networkd[1394]: lxc_health: Gained carrier Sep 4 23:46:45.438507 systemd-networkd[1394]: lxcbb200c5920ab: Link UP Sep 4 23:46:45.451649 kernel: eth0: renamed from tmpea35d Sep 4 23:46:45.465012 systemd-networkd[1394]: lxc571e0ae40e78: Link UP Sep 4 23:46:45.469675 kernel: eth0: renamed from tmp616f8 Sep 4 23:46:45.468951 systemd-networkd[1394]: lxcbb200c5920ab: Gained carrier Sep 4 23:46:45.475816 systemd-networkd[1394]: lxc571e0ae40e78: Gained carrier Sep 4 23:46:45.525780 systemd-networkd[1394]: cilium_vxlan: Gained IPv6LL Sep 4 23:46:46.549851 systemd-networkd[1394]: lxc_health: Gained IPv6LL Sep 4 23:46:46.742051 systemd-networkd[1394]: lxcbb200c5920ab: Gained IPv6LL Sep 4 23:46:47.189939 systemd-networkd[1394]: lxc571e0ae40e78: Gained IPv6LL Sep 4 23:46:49.387995 kubelet[2680]: I0904 23:46:49.387856 2680 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 4 23:46:49.690167 containerd[1509]: time="2025-09-04T23:46:49.686768253Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 23:46:49.690167 containerd[1509]: time="2025-09-04T23:46:49.686836733Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 23:46:49.690167 containerd[1509]: time="2025-09-04T23:46:49.686852333Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:46:49.690167 containerd[1509]: time="2025-09-04T23:46:49.686933814Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:46:49.715776 containerd[1509]: time="2025-09-04T23:46:49.714884844Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 23:46:49.715776 containerd[1509]: time="2025-09-04T23:46:49.714952284Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 23:46:49.715776 containerd[1509]: time="2025-09-04T23:46:49.714968444Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:46:49.718810 containerd[1509]: time="2025-09-04T23:46:49.717113930Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:46:49.732866 systemd[1]: Started cri-containerd-ea35d1653d191ff6d1046b276b6b27916b83e535482de15ac4b85d9f82c2f9b9.scope - libcontainer container ea35d1653d191ff6d1046b276b6b27916b83e535482de15ac4b85d9f82c2f9b9. Sep 4 23:46:49.747018 systemd[1]: Started cri-containerd-616f8dc3fa554bde46a6b6b8418a9d179fe67df43a216ba54379a90d5999123c.scope - libcontainer container 616f8dc3fa554bde46a6b6b8418a9d179fe67df43a216ba54379a90d5999123c. 
Sep 4 23:46:49.812323 containerd[1509]: time="2025-09-04T23:46:49.812263410Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-bnt7c,Uid:32c89e80-f677-4521-8374-3e6fd90cd239,Namespace:kube-system,Attempt:0,} returns sandbox id \"ea35d1653d191ff6d1046b276b6b27916b83e535482de15ac4b85d9f82c2f9b9\""
Sep 4 23:46:49.825217 containerd[1509]: time="2025-09-04T23:46:49.825022242Z" level=info msg="CreateContainer within sandbox \"ea35d1653d191ff6d1046b276b6b27916b83e535482de15ac4b85d9f82c2f9b9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 4 23:46:49.830753 containerd[1509]: time="2025-09-04T23:46:49.830431656Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-f9stn,Uid:94a1545f-2345-4dbe-9104-2ad39d9c7468,Namespace:kube-system,Attempt:0,} returns sandbox id \"616f8dc3fa554bde46a6b6b8418a9d179fe67df43a216ba54379a90d5999123c\""
Sep 4 23:46:49.841400 containerd[1509]: time="2025-09-04T23:46:49.839125958Z" level=info msg="CreateContainer within sandbox \"616f8dc3fa554bde46a6b6b8418a9d179fe67df43a216ba54379a90d5999123c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 4 23:46:49.854458 containerd[1509]: time="2025-09-04T23:46:49.854385676Z" level=info msg="CreateContainer within sandbox \"ea35d1653d191ff6d1046b276b6b27916b83e535482de15ac4b85d9f82c2f9b9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"73272637739a03b7efa060508b0a01e930af84b7df07834be4071488582ea11a\""
Sep 4 23:46:49.856270 containerd[1509]: time="2025-09-04T23:46:49.856232121Z" level=info msg="StartContainer for \"73272637739a03b7efa060508b0a01e930af84b7df07834be4071488582ea11a\""
Sep 4 23:46:49.871855 containerd[1509]: time="2025-09-04T23:46:49.871649400Z" level=info msg="CreateContainer within sandbox \"616f8dc3fa554bde46a6b6b8418a9d179fe67df43a216ba54379a90d5999123c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"68200389ff12406c232a6325cc2facedf703e60f7044bdc97e4abc3bb22df419\""
Sep 4 23:46:49.874916 containerd[1509]: time="2025-09-04T23:46:49.874877088Z" level=info msg="StartContainer for \"68200389ff12406c232a6325cc2facedf703e60f7044bdc97e4abc3bb22df419\""
Sep 4 23:46:49.890803 systemd[1]: Started cri-containerd-73272637739a03b7efa060508b0a01e930af84b7df07834be4071488582ea11a.scope - libcontainer container 73272637739a03b7efa060508b0a01e930af84b7df07834be4071488582ea11a.
Sep 4 23:46:49.921336 systemd[1]: Started cri-containerd-68200389ff12406c232a6325cc2facedf703e60f7044bdc97e4abc3bb22df419.scope - libcontainer container 68200389ff12406c232a6325cc2facedf703e60f7044bdc97e4abc3bb22df419.
Sep 4 23:46:49.937987 containerd[1509]: time="2025-09-04T23:46:49.937935727Z" level=info msg="StartContainer for \"73272637739a03b7efa060508b0a01e930af84b7df07834be4071488582ea11a\" returns successfully"
Sep 4 23:46:49.964819 containerd[1509]: time="2025-09-04T23:46:49.964718194Z" level=info msg="StartContainer for \"68200389ff12406c232a6325cc2facedf703e60f7044bdc97e4abc3bb22df419\" returns successfully"
Sep 4 23:46:50.804282 kubelet[2680]: I0904 23:46:50.802962 2680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-bnt7c" podStartSLOduration=18.802945656 podStartE2EDuration="18.802945656s" podCreationTimestamp="2025-09-04 23:46:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 23:46:50.797716243 +0000 UTC m=+26.307497489" watchObservedRunningTime="2025-09-04 23:46:50.802945656 +0000 UTC m=+26.312726862"
Sep 4 23:46:50.853149 kubelet[2680]: I0904 23:46:50.853086 2680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-f9stn" podStartSLOduration=18.853070499 podStartE2EDuration="18.853070499s" podCreationTimestamp="2025-09-04 23:46:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 23:46:50.852178457 +0000 UTC m=+26.361959623" watchObservedRunningTime="2025-09-04 23:46:50.853070499 +0000 UTC m=+26.362851705"
Sep 4 23:48:49.560976 systemd[1]: Started sshd@7-88.198.151.158:22-139.178.68.195:45924.service - OpenSSH per-connection server daemon (139.178.68.195:45924).
Sep 4 23:48:50.631911 sshd[4095]: Accepted publickey for core from 139.178.68.195 port 45924 ssh2: RSA SHA256:mO6YHl7qCkvXk9I2QzSjJf9VN7vVqy+ZQWo85qMF4pQ
Sep 4 23:48:50.634728 sshd-session[4095]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 23:48:50.642655 systemd-logind[1483]: New session 8 of user core.
Sep 4 23:48:50.648151 systemd[1]: Started session-8.scope - Session 8 of User core.
Sep 4 23:48:51.449659 sshd[4097]: Connection closed by 139.178.68.195 port 45924
Sep 4 23:48:51.450797 sshd-session[4095]: pam_unix(sshd:session): session closed for user core
Sep 4 23:48:51.455726 systemd[1]: sshd@7-88.198.151.158:22-139.178.68.195:45924.service: Deactivated successfully.
Sep 4 23:48:51.459965 systemd[1]: session-8.scope: Deactivated successfully.
Sep 4 23:48:51.461736 systemd-logind[1483]: Session 8 logged out. Waiting for processes to exit.
Sep 4 23:48:51.462874 systemd-logind[1483]: Removed session 8.
Sep 4 23:48:56.648999 systemd[1]: Started sshd@8-88.198.151.158:22-139.178.68.195:34536.service - OpenSSH per-connection server daemon (139.178.68.195:34536).
Sep 4 23:48:57.704883 sshd[4109]: Accepted publickey for core from 139.178.68.195 port 34536 ssh2: RSA SHA256:mO6YHl7qCkvXk9I2QzSjJf9VN7vVqy+ZQWo85qMF4pQ
Sep 4 23:48:57.706838 sshd-session[4109]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 23:48:57.712064 systemd-logind[1483]: New session 9 of user core.
Sep 4 23:48:57.717900 systemd[1]: Started session-9.scope - Session 9 of User core.
Sep 4 23:48:58.514853 sshd[4111]: Connection closed by 139.178.68.195 port 34536
Sep 4 23:48:58.517980 sshd-session[4109]: pam_unix(sshd:session): session closed for user core
Sep 4 23:48:58.523168 systemd-logind[1483]: Session 9 logged out. Waiting for processes to exit.
Sep 4 23:48:58.523691 systemd[1]: sshd@8-88.198.151.158:22-139.178.68.195:34536.service: Deactivated successfully.
Sep 4 23:48:58.527560 systemd[1]: session-9.scope: Deactivated successfully.
Sep 4 23:48:58.529034 systemd-logind[1483]: Removed session 9.
Sep 4 23:49:03.689087 systemd[1]: Started sshd@9-88.198.151.158:22-139.178.68.195:42994.service - OpenSSH per-connection server daemon (139.178.68.195:42994).
Sep 4 23:49:04.690810 sshd[4126]: Accepted publickey for core from 139.178.68.195 port 42994 ssh2: RSA SHA256:mO6YHl7qCkvXk9I2QzSjJf9VN7vVqy+ZQWo85qMF4pQ
Sep 4 23:49:04.693106 sshd-session[4126]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 23:49:04.698076 systemd-logind[1483]: New session 10 of user core.
Sep 4 23:49:04.704861 systemd[1]: Started session-10.scope - Session 10 of User core.
Sep 4 23:49:05.457988 sshd[4128]: Connection closed by 139.178.68.195 port 42994
Sep 4 23:49:05.459252 sshd-session[4126]: pam_unix(sshd:session): session closed for user core
Sep 4 23:49:05.463908 systemd-logind[1483]: Session 10 logged out. Waiting for processes to exit.
Sep 4 23:49:05.464138 systemd[1]: sshd@9-88.198.151.158:22-139.178.68.195:42994.service: Deactivated successfully.
Sep 4 23:49:05.467030 systemd[1]: session-10.scope: Deactivated successfully.
Sep 4 23:49:05.471438 systemd-logind[1483]: Removed session 10.
Sep 4 23:49:05.636997 systemd[1]: Started sshd@10-88.198.151.158:22-139.178.68.195:42998.service - OpenSSH per-connection server daemon (139.178.68.195:42998).
Sep 4 23:49:06.637752 sshd[4140]: Accepted publickey for core from 139.178.68.195 port 42998 ssh2: RSA SHA256:mO6YHl7qCkvXk9I2QzSjJf9VN7vVqy+ZQWo85qMF4pQ
Sep 4 23:49:06.640042 sshd-session[4140]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 23:49:06.646735 systemd-logind[1483]: New session 11 of user core.
Sep 4 23:49:06.653877 systemd[1]: Started session-11.scope - Session 11 of User core.
Sep 4 23:49:07.475226 sshd[4142]: Connection closed by 139.178.68.195 port 42998
Sep 4 23:49:07.477870 sshd-session[4140]: pam_unix(sshd:session): session closed for user core
Sep 4 23:49:07.484636 systemd[1]: sshd@10-88.198.151.158:22-139.178.68.195:42998.service: Deactivated successfully.
Sep 4 23:49:07.487031 systemd[1]: session-11.scope: Deactivated successfully.
Sep 4 23:49:07.488366 systemd-logind[1483]: Session 11 logged out. Waiting for processes to exit.
Sep 4 23:49:07.489509 systemd-logind[1483]: Removed session 11.
Sep 4 23:49:07.670111 systemd[1]: Started sshd@11-88.198.151.158:22-139.178.68.195:43002.service - OpenSSH per-connection server daemon (139.178.68.195:43002).
Sep 4 23:49:08.733761 sshd[4152]: Accepted publickey for core from 139.178.68.195 port 43002 ssh2: RSA SHA256:mO6YHl7qCkvXk9I2QzSjJf9VN7vVqy+ZQWo85qMF4pQ
Sep 4 23:49:08.736644 sshd-session[4152]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 23:49:08.744264 systemd-logind[1483]: New session 12 of user core.
Sep 4 23:49:08.747789 systemd[1]: Started session-12.scope - Session 12 of User core.
Sep 4 23:49:09.557161 sshd[4154]: Connection closed by 139.178.68.195 port 43002
Sep 4 23:49:09.558920 sshd-session[4152]: pam_unix(sshd:session): session closed for user core
Sep 4 23:49:09.565900 systemd[1]: sshd@11-88.198.151.158:22-139.178.68.195:43002.service: Deactivated successfully.
Sep 4 23:49:09.569228 systemd[1]: session-12.scope: Deactivated successfully.
Sep 4 23:49:09.571170 systemd-logind[1483]: Session 12 logged out. Waiting for processes to exit.
Sep 4 23:49:09.572570 systemd-logind[1483]: Removed session 12.
Sep 4 23:49:14.767650 systemd[1]: Started sshd@12-88.198.151.158:22-139.178.68.195:39880.service - OpenSSH per-connection server daemon (139.178.68.195:39880).
Sep 4 23:49:15.830174 sshd[4166]: Accepted publickey for core from 139.178.68.195 port 39880 ssh2: RSA SHA256:mO6YHl7qCkvXk9I2QzSjJf9VN7vVqy+ZQWo85qMF4pQ
Sep 4 23:49:15.833144 sshd-session[4166]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 23:49:15.841008 systemd-logind[1483]: New session 13 of user core.
Sep 4 23:49:15.853188 systemd[1]: Started session-13.scope - Session 13 of User core.
Sep 4 23:49:16.656723 sshd[4168]: Connection closed by 139.178.68.195 port 39880
Sep 4 23:49:16.657747 sshd-session[4166]: pam_unix(sshd:session): session closed for user core
Sep 4 23:49:16.663526 systemd[1]: sshd@12-88.198.151.158:22-139.178.68.195:39880.service: Deactivated successfully.
Sep 4 23:49:16.666293 systemd[1]: session-13.scope: Deactivated successfully.
Sep 4 23:49:16.667228 systemd-logind[1483]: Session 13 logged out. Waiting for processes to exit.
Sep 4 23:49:16.668284 systemd-logind[1483]: Removed session 13.
Sep 4 23:49:16.846941 systemd[1]: Started sshd@13-88.198.151.158:22-139.178.68.195:39882.service - OpenSSH per-connection server daemon (139.178.68.195:39882).
Sep 4 23:49:17.909152 sshd[4180]: Accepted publickey for core from 139.178.68.195 port 39882 ssh2: RSA SHA256:mO6YHl7qCkvXk9I2QzSjJf9VN7vVqy+ZQWo85qMF4pQ
Sep 4 23:49:17.911488 sshd-session[4180]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 23:49:17.917075 systemd-logind[1483]: New session 14 of user core.
Sep 4 23:49:17.923995 systemd[1]: Started session-14.scope - Session 14 of User core.
Sep 4 23:49:18.752551 sshd[4182]: Connection closed by 139.178.68.195 port 39882
Sep 4 23:49:18.753699 sshd-session[4180]: pam_unix(sshd:session): session closed for user core
Sep 4 23:49:18.760468 systemd[1]: sshd@13-88.198.151.158:22-139.178.68.195:39882.service: Deactivated successfully.
Sep 4 23:49:18.762739 systemd[1]: session-14.scope: Deactivated successfully.
Sep 4 23:49:18.763832 systemd-logind[1483]: Session 14 logged out. Waiting for processes to exit.
Sep 4 23:49:18.765743 systemd-logind[1483]: Removed session 14.
Sep 4 23:49:18.920007 systemd[1]: Started sshd@14-88.198.151.158:22-139.178.68.195:39884.service - OpenSSH per-connection server daemon (139.178.68.195:39884).
Sep 4 23:49:19.922851 sshd[4192]: Accepted publickey for core from 139.178.68.195 port 39884 ssh2: RSA SHA256:mO6YHl7qCkvXk9I2QzSjJf9VN7vVqy+ZQWo85qMF4pQ
Sep 4 23:49:19.925097 sshd-session[4192]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 23:49:19.931921 systemd-logind[1483]: New session 15 of user core.
Sep 4 23:49:19.934782 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 4 23:49:21.242255 sshd[4194]: Connection closed by 139.178.68.195 port 39884
Sep 4 23:49:21.244967 sshd-session[4192]: pam_unix(sshd:session): session closed for user core
Sep 4 23:49:21.251112 systemd-logind[1483]: Session 15 logged out. Waiting for processes to exit.
Sep 4 23:49:21.252371 systemd[1]: sshd@14-88.198.151.158:22-139.178.68.195:39884.service: Deactivated successfully.
Sep 4 23:49:21.256278 systemd[1]: session-15.scope: Deactivated successfully.
Sep 4 23:49:21.259687 systemd-logind[1483]: Removed session 15.
Sep 4 23:49:21.445090 systemd[1]: Started sshd@15-88.198.151.158:22-139.178.68.195:52488.service - OpenSSH per-connection server daemon (139.178.68.195:52488).
Sep 4 23:49:22.506950 sshd[4212]: Accepted publickey for core from 139.178.68.195 port 52488 ssh2: RSA SHA256:mO6YHl7qCkvXk9I2QzSjJf9VN7vVqy+ZQWo85qMF4pQ
Sep 4 23:49:22.509321 sshd-session[4212]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 23:49:22.514900 systemd-logind[1483]: New session 16 of user core.
Sep 4 23:49:22.521826 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 4 23:49:23.459018 sshd[4214]: Connection closed by 139.178.68.195 port 52488
Sep 4 23:49:23.459952 sshd-session[4212]: pam_unix(sshd:session): session closed for user core
Sep 4 23:49:23.464804 systemd-logind[1483]: Session 16 logged out. Waiting for processes to exit.
Sep 4 23:49:23.466753 systemd[1]: sshd@15-88.198.151.158:22-139.178.68.195:52488.service: Deactivated successfully.
Sep 4 23:49:23.469925 systemd[1]: session-16.scope: Deactivated successfully.
Sep 4 23:49:23.472510 systemd-logind[1483]: Removed session 16.
Sep 4 23:49:23.630720 systemd[1]: Started sshd@16-88.198.151.158:22-139.178.68.195:52498.service - OpenSSH per-connection server daemon (139.178.68.195:52498).
Sep 4 23:49:24.635433 sshd[4224]: Accepted publickey for core from 139.178.68.195 port 52498 ssh2: RSA SHA256:mO6YHl7qCkvXk9I2QzSjJf9VN7vVqy+ZQWo85qMF4pQ
Sep 4 23:49:24.637817 sshd-session[4224]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 23:49:24.646890 systemd-logind[1483]: New session 17 of user core.
Sep 4 23:49:24.651884 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 4 23:49:25.404676 sshd[4228]: Connection closed by 139.178.68.195 port 52498
Sep 4 23:49:25.405889 sshd-session[4224]: pam_unix(sshd:session): session closed for user core
Sep 4 23:49:25.412242 systemd[1]: sshd@16-88.198.151.158:22-139.178.68.195:52498.service: Deactivated successfully.
Sep 4 23:49:25.415194 systemd[1]: session-17.scope: Deactivated successfully.
Sep 4 23:49:25.416081 systemd-logind[1483]: Session 17 logged out. Waiting for processes to exit.
Sep 4 23:49:25.418988 systemd-logind[1483]: Removed session 17.
Sep 4 23:49:30.592141 systemd[1]: Started sshd@17-88.198.151.158:22-139.178.68.195:51480.service - OpenSSH per-connection server daemon (139.178.68.195:51480).
Sep 4 23:49:31.593553 sshd[4242]: Accepted publickey for core from 139.178.68.195 port 51480 ssh2: RSA SHA256:mO6YHl7qCkvXk9I2QzSjJf9VN7vVqy+ZQWo85qMF4pQ
Sep 4 23:49:31.596895 sshd-session[4242]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 23:49:31.606829 systemd-logind[1483]: New session 18 of user core.
Sep 4 23:49:31.613970 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 4 23:49:32.380304 sshd[4244]: Connection closed by 139.178.68.195 port 51480
Sep 4 23:49:32.381055 sshd-session[4242]: pam_unix(sshd:session): session closed for user core
Sep 4 23:49:32.387474 systemd[1]: sshd@17-88.198.151.158:22-139.178.68.195:51480.service: Deactivated successfully.
Sep 4 23:49:32.396478 systemd[1]: session-18.scope: Deactivated successfully.
Sep 4 23:49:32.403257 systemd-logind[1483]: Session 18 logged out. Waiting for processes to exit.
Sep 4 23:49:32.405053 systemd-logind[1483]: Removed session 18.
Sep 4 23:49:37.573969 systemd[1]: Started sshd@18-88.198.151.158:22-139.178.68.195:51488.service - OpenSSH per-connection server daemon (139.178.68.195:51488).
Sep 4 23:49:38.638048 sshd[4259]: Accepted publickey for core from 139.178.68.195 port 51488 ssh2: RSA SHA256:mO6YHl7qCkvXk9I2QzSjJf9VN7vVqy+ZQWo85qMF4pQ
Sep 4 23:49:38.641007 sshd-session[4259]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 23:49:38.646627 systemd-logind[1483]: New session 19 of user core.
Sep 4 23:49:38.654442 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 4 23:49:39.445500 sshd[4261]: Connection closed by 139.178.68.195 port 51488
Sep 4 23:49:39.446416 sshd-session[4259]: pam_unix(sshd:session): session closed for user core
Sep 4 23:49:39.451529 systemd[1]: sshd@18-88.198.151.158:22-139.178.68.195:51488.service: Deactivated successfully.
Sep 4 23:49:39.454741 systemd[1]: session-19.scope: Deactivated successfully.
Sep 4 23:49:39.456058 systemd-logind[1483]: Session 19 logged out. Waiting for processes to exit.
Sep 4 23:49:39.457202 systemd-logind[1483]: Removed session 19.
Sep 4 23:49:39.626031 systemd[1]: Started sshd@19-88.198.151.158:22-139.178.68.195:51498.service - OpenSSH per-connection server daemon (139.178.68.195:51498).
Sep 4 23:49:40.629864 sshd[4272]: Accepted publickey for core from 139.178.68.195 port 51498 ssh2: RSA SHA256:mO6YHl7qCkvXk9I2QzSjJf9VN7vVqy+ZQWo85qMF4pQ
Sep 4 23:49:40.631962 sshd-session[4272]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 23:49:40.638018 systemd-logind[1483]: New session 20 of user core.
Sep 4 23:49:40.645780 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 4 23:49:43.336737 containerd[1509]: time="2025-09-04T23:49:43.335438521Z" level=info msg="StopContainer for \"08e8e930cbe3b1a4070287eaddf008c6ffcecab842bed600474576a892214141\" with timeout 30 (s)"
Sep 4 23:49:43.337800 containerd[1509]: time="2025-09-04T23:49:43.337766371Z" level=info msg="Stop container \"08e8e930cbe3b1a4070287eaddf008c6ffcecab842bed600474576a892214141\" with signal terminated"
Sep 4 23:49:43.362361 systemd[1]: run-containerd-runc-k8s.io-82856abc69cf21f28b6ef74fa5a94a021cfef58f8e722347fdb00e4e123d7244-runc.YMf2vX.mount: Deactivated successfully.
Sep 4 23:49:43.375628 systemd[1]: cri-containerd-08e8e930cbe3b1a4070287eaddf008c6ffcecab842bed600474576a892214141.scope: Deactivated successfully.
Sep 4 23:49:43.392199 containerd[1509]: time="2025-09-04T23:49:43.391571474Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 4 23:49:43.399982 containerd[1509]: time="2025-09-04T23:49:43.399932308Z" level=info msg="StopContainer for \"82856abc69cf21f28b6ef74fa5a94a021cfef58f8e722347fdb00e4e123d7244\" with timeout 2 (s)"
Sep 4 23:49:43.400532 containerd[1509]: time="2025-09-04T23:49:43.400502631Z" level=info msg="Stop container \"82856abc69cf21f28b6ef74fa5a94a021cfef58f8e722347fdb00e4e123d7244\" with signal terminated"
Sep 4 23:49:43.410179 systemd-networkd[1394]: lxc_health: Link DOWN
Sep 4 23:49:43.410186 systemd-networkd[1394]: lxc_health: Lost carrier
Sep 4 23:49:43.418125 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-08e8e930cbe3b1a4070287eaddf008c6ffcecab842bed600474576a892214141-rootfs.mount: Deactivated successfully.
Sep 4 23:49:43.437545 containerd[1509]: time="2025-09-04T23:49:43.437481584Z" level=info msg="shim disconnected" id=08e8e930cbe3b1a4070287eaddf008c6ffcecab842bed600474576a892214141 namespace=k8s.io
Sep 4 23:49:43.437862 systemd[1]: cri-containerd-82856abc69cf21f28b6ef74fa5a94a021cfef58f8e722347fdb00e4e123d7244.scope: Deactivated successfully.
Sep 4 23:49:43.438301 systemd[1]: cri-containerd-82856abc69cf21f28b6ef74fa5a94a021cfef58f8e722347fdb00e4e123d7244.scope: Consumed 7.680s CPU time, 127.6M memory peak, 128K read from disk, 12.9M written to disk.
Sep 4 23:49:43.438688 containerd[1509]: time="2025-09-04T23:49:43.438476348Z" level=warning msg="cleaning up after shim disconnected" id=08e8e930cbe3b1a4070287eaddf008c6ffcecab842bed600474576a892214141 namespace=k8s.io
Sep 4 23:49:43.438688 containerd[1509]: time="2025-09-04T23:49:43.438509828Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 23:49:43.456652 containerd[1509]: time="2025-09-04T23:49:43.456062181Z" level=warning msg="cleanup warnings time=\"2025-09-04T23:49:43Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Sep 4 23:49:43.463437 containerd[1509]: time="2025-09-04T23:49:43.463387611Z" level=info msg="StopContainer for \"08e8e930cbe3b1a4070287eaddf008c6ffcecab842bed600474576a892214141\" returns successfully"
Sep 4 23:49:43.464757 containerd[1509]: time="2025-09-04T23:49:43.464710577Z" level=info msg="StopPodSandbox for \"7ec083213e6b401c0e1b664f2dd3649da2dbb4a5783e5b183ea1c0ef6116eaae\""
Sep 4 23:49:43.464891 containerd[1509]: time="2025-09-04T23:49:43.464769817Z" level=info msg="Container to stop \"08e8e930cbe3b1a4070287eaddf008c6ffcecab842bed600474576a892214141\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 4 23:49:43.470339 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7ec083213e6b401c0e1b664f2dd3649da2dbb4a5783e5b183ea1c0ef6116eaae-shm.mount: Deactivated successfully.
Sep 4 23:49:43.479510 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-82856abc69cf21f28b6ef74fa5a94a021cfef58f8e722347fdb00e4e123d7244-rootfs.mount: Deactivated successfully.
Sep 4 23:49:43.481092 systemd[1]: cri-containerd-7ec083213e6b401c0e1b664f2dd3649da2dbb4a5783e5b183ea1c0ef6116eaae.scope: Deactivated successfully.
Sep 4 23:49:43.489113 containerd[1509]: time="2025-09-04T23:49:43.488862997Z" level=info msg="shim disconnected" id=82856abc69cf21f28b6ef74fa5a94a021cfef58f8e722347fdb00e4e123d7244 namespace=k8s.io
Sep 4 23:49:43.489113 containerd[1509]: time="2025-09-04T23:49:43.488926677Z" level=warning msg="cleaning up after shim disconnected" id=82856abc69cf21f28b6ef74fa5a94a021cfef58f8e722347fdb00e4e123d7244 namespace=k8s.io
Sep 4 23:49:43.489113 containerd[1509]: time="2025-09-04T23:49:43.488938037Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 23:49:43.512433 containerd[1509]: time="2025-09-04T23:49:43.512259934Z" level=info msg="StopContainer for \"82856abc69cf21f28b6ef74fa5a94a021cfef58f8e722347fdb00e4e123d7244\" returns successfully"
Sep 4 23:49:43.513800 containerd[1509]: time="2025-09-04T23:49:43.513343138Z" level=info msg="StopPodSandbox for \"21543f5fc52b269e28c8054c57d209d712aa5d305090cae815015fa488ad03ae\""
Sep 4 23:49:43.513800 containerd[1509]: time="2025-09-04T23:49:43.513630579Z" level=info msg="Container to stop \"d7d3eee790e7a815355adc95a8452bbc9ee15cf0d8f55202cbc1b0605ee74ced\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 4 23:49:43.513800 containerd[1509]: time="2025-09-04T23:49:43.513645379Z" level=info msg="Container to stop \"9dce7e03d606277e3dace0904c3c4e0b616c73d3921fc82b2326d6529555e4c2\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 4 23:49:43.513800 containerd[1509]: time="2025-09-04T23:49:43.513654619Z" level=info msg="Container to stop \"b19fff5d28e01eb94cca5217013ff184a2933ac0c4d8e3966aae0ab4d15cd87f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 4 23:49:43.514288 containerd[1509]: time="2025-09-04T23:49:43.514023301Z" level=info msg="Container to stop \"ae762a34da0942730066cc4cc54dce0264279b91895862d985a77548fc87f1e3\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 4 23:49:43.514288 containerd[1509]: time="2025-09-04T23:49:43.514047221Z" level=info msg="Container to stop \"82856abc69cf21f28b6ef74fa5a94a021cfef58f8e722347fdb00e4e123d7244\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 4 23:49:43.520016 systemd[1]: cri-containerd-21543f5fc52b269e28c8054c57d209d712aa5d305090cae815015fa488ad03ae.scope: Deactivated successfully.
Sep 4 23:49:43.522353 containerd[1509]: time="2025-09-04T23:49:43.522175135Z" level=info msg="shim disconnected" id=7ec083213e6b401c0e1b664f2dd3649da2dbb4a5783e5b183ea1c0ef6116eaae namespace=k8s.io
Sep 4 23:49:43.522776 containerd[1509]: time="2025-09-04T23:49:43.522539296Z" level=warning msg="cleaning up after shim disconnected" id=7ec083213e6b401c0e1b664f2dd3649da2dbb4a5783e5b183ea1c0ef6116eaae namespace=k8s.io
Sep 4 23:49:43.522776 containerd[1509]: time="2025-09-04T23:49:43.522558296Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 23:49:43.540141 containerd[1509]: time="2025-09-04T23:49:43.539922408Z" level=info msg="TearDown network for sandbox \"7ec083213e6b401c0e1b664f2dd3649da2dbb4a5783e5b183ea1c0ef6116eaae\" successfully"
Sep 4 23:49:43.540141 containerd[1509]: time="2025-09-04T23:49:43.539956928Z" level=info msg="StopPodSandbox for \"7ec083213e6b401c0e1b664f2dd3649da2dbb4a5783e5b183ea1c0ef6116eaae\" returns successfully"
Sep 4 23:49:43.563562 containerd[1509]: time="2025-09-04T23:49:43.562799663Z" level=info msg="shim disconnected" id=21543f5fc52b269e28c8054c57d209d712aa5d305090cae815015fa488ad03ae namespace=k8s.io
Sep 4 23:49:43.563562 containerd[1509]: time="2025-09-04T23:49:43.562870663Z" level=warning msg="cleaning up after shim disconnected" id=21543f5fc52b269e28c8054c57d209d712aa5d305090cae815015fa488ad03ae namespace=k8s.io
Sep 4 23:49:43.563562 containerd[1509]: time="2025-09-04T23:49:43.562878823Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 23:49:43.578651 containerd[1509]: time="2025-09-04T23:49:43.578277087Z" level=info msg="TearDown network for sandbox \"21543f5fc52b269e28c8054c57d209d712aa5d305090cae815015fa488ad03ae\" successfully"
Sep 4 23:49:43.578651 containerd[1509]: time="2025-09-04T23:49:43.578315447Z" level=info msg="StopPodSandbox for \"21543f5fc52b269e28c8054c57d209d712aa5d305090cae815015fa488ad03ae\" returns successfully"
Sep 4 23:49:43.579825 kubelet[2680]: I0904 23:49:43.579424 2680 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cb739dfb-81e5-45f9-8050-43caa1416ac8-cilium-config-path\") pod \"cb739dfb-81e5-45f9-8050-43caa1416ac8\" (UID: \"cb739dfb-81e5-45f9-8050-43caa1416ac8\") "
Sep 4 23:49:43.579825 kubelet[2680]: I0904 23:49:43.579467 2680 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8cc7c\" (UniqueName: \"kubernetes.io/projected/cb739dfb-81e5-45f9-8050-43caa1416ac8-kube-api-access-8cc7c\") pod \"cb739dfb-81e5-45f9-8050-43caa1416ac8\" (UID: \"cb739dfb-81e5-45f9-8050-43caa1416ac8\") "
Sep 4 23:49:43.584705 kubelet[2680]: I0904 23:49:43.582692 2680 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb739dfb-81e5-45f9-8050-43caa1416ac8-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "cb739dfb-81e5-45f9-8050-43caa1416ac8" (UID: "cb739dfb-81e5-45f9-8050-43caa1416ac8"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Sep 4 23:49:43.586228 kubelet[2680]: I0904 23:49:43.586139 2680 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb739dfb-81e5-45f9-8050-43caa1416ac8-kube-api-access-8cc7c" (OuterVolumeSpecName: "kube-api-access-8cc7c") pod "cb739dfb-81e5-45f9-8050-43caa1416ac8" (UID: "cb739dfb-81e5-45f9-8050-43caa1416ac8"). InnerVolumeSpecName "kube-api-access-8cc7c". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Sep 4 23:49:43.682611 kubelet[2680]: I0904 23:49:43.680786 2680 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/564d0859-eeb3-48bc-8778-48c331745ed3-cni-path\") pod \"564d0859-eeb3-48bc-8778-48c331745ed3\" (UID: \"564d0859-eeb3-48bc-8778-48c331745ed3\") "
Sep 4 23:49:43.682611 kubelet[2680]: I0904 23:49:43.680847 2680 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/564d0859-eeb3-48bc-8778-48c331745ed3-hubble-tls\") pod \"564d0859-eeb3-48bc-8778-48c331745ed3\" (UID: \"564d0859-eeb3-48bc-8778-48c331745ed3\") "
Sep 4 23:49:43.682611 kubelet[2680]: I0904 23:49:43.680869 2680 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/564d0859-eeb3-48bc-8778-48c331745ed3-hostproc\") pod \"564d0859-eeb3-48bc-8778-48c331745ed3\" (UID: \"564d0859-eeb3-48bc-8778-48c331745ed3\") "
Sep 4 23:49:43.682611 kubelet[2680]: I0904 23:49:43.680915 2680 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/564d0859-eeb3-48bc-8778-48c331745ed3-cilium-config-path\") pod \"564d0859-eeb3-48bc-8778-48c331745ed3\" (UID: \"564d0859-eeb3-48bc-8778-48c331745ed3\") "
Sep 4 23:49:43.682611 kubelet[2680]: I0904 23:49:43.680933 2680 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/564d0859-eeb3-48bc-8778-48c331745ed3-lib-modules\") pod \"564d0859-eeb3-48bc-8778-48c331745ed3\" (UID: \"564d0859-eeb3-48bc-8778-48c331745ed3\") "
Sep 4 23:49:43.682611 kubelet[2680]: I0904 23:49:43.680949 2680 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/564d0859-eeb3-48bc-8778-48c331745ed3-cilium-cgroup\") pod \"564d0859-eeb3-48bc-8778-48c331745ed3\" (UID: \"564d0859-eeb3-48bc-8778-48c331745ed3\") "
Sep 4 23:49:43.682888 kubelet[2680]: I0904 23:49:43.680966 2680 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/564d0859-eeb3-48bc-8778-48c331745ed3-cilium-run\") pod \"564d0859-eeb3-48bc-8778-48c331745ed3\" (UID: \"564d0859-eeb3-48bc-8778-48c331745ed3\") "
Sep 4 23:49:43.682888 kubelet[2680]: I0904 23:49:43.680984 2680 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/564d0859-eeb3-48bc-8778-48c331745ed3-host-proc-sys-net\") pod \"564d0859-eeb3-48bc-8778-48c331745ed3\" (UID: \"564d0859-eeb3-48bc-8778-48c331745ed3\") "
Sep 4 23:49:43.682888 kubelet[2680]: I0904 23:49:43.681054 2680 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wptg5\" (UniqueName: \"kubernetes.io/projected/564d0859-eeb3-48bc-8778-48c331745ed3-kube-api-access-wptg5\") pod \"564d0859-eeb3-48bc-8778-48c331745ed3\" (UID: \"564d0859-eeb3-48bc-8778-48c331745ed3\") "
Sep 4 23:49:43.682888 kubelet[2680]: I0904 23:49:43.681077 2680 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/564d0859-eeb3-48bc-8778-48c331745ed3-xtables-lock\") pod \"564d0859-eeb3-48bc-8778-48c331745ed3\" (UID: \"564d0859-eeb3-48bc-8778-48c331745ed3\") "
Sep 4 23:49:43.682888 kubelet[2680]: I0904 23:49:43.681095 2680 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/564d0859-eeb3-48bc-8778-48c331745ed3-clustermesh-secrets\") pod \"564d0859-eeb3-48bc-8778-48c331745ed3\" (UID: \"564d0859-eeb3-48bc-8778-48c331745ed3\") "
Sep 4 23:49:43.682888 kubelet[2680]: I0904 23:49:43.681112 2680 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/564d0859-eeb3-48bc-8778-48c331745ed3-bpf-maps\") pod \"564d0859-eeb3-48bc-8778-48c331745ed3\" (UID: \"564d0859-eeb3-48bc-8778-48c331745ed3\") "
Sep 4 23:49:43.683068 kubelet[2680]: I0904 23:49:43.681126 2680 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/564d0859-eeb3-48bc-8778-48c331745ed3-etc-cni-netd\") pod \"564d0859-eeb3-48bc-8778-48c331745ed3\" (UID: \"564d0859-eeb3-48bc-8778-48c331745ed3\") "
Sep 4 23:49:43.683068 kubelet[2680]: I0904 23:49:43.681143 2680 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/564d0859-eeb3-48bc-8778-48c331745ed3-host-proc-sys-kernel\") pod \"564d0859-eeb3-48bc-8778-48c331745ed3\" (UID: \"564d0859-eeb3-48bc-8778-48c331745ed3\") "
Sep 4 23:49:43.683068 kubelet[2680]: I0904 23:49:43.681180 2680 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cb739dfb-81e5-45f9-8050-43caa1416ac8-cilium-config-path\") on node \"ci-4230-2-2-n-5840999b78\" DevicePath \"\""
Sep 4 23:49:43.683068 kubelet[2680]: I0904 23:49:43.681190 2680 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8cc7c\" (UniqueName: \"kubernetes.io/projected/cb739dfb-81e5-45f9-8050-43caa1416ac8-kube-api-access-8cc7c\") on node \"ci-4230-2-2-n-5840999b78\" DevicePath \"\""
Sep 4 23:49:43.683068 kubelet[2680]: I0904 23:49:43.681255 2680 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/564d0859-eeb3-48bc-8778-48c331745ed3-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "564d0859-eeb3-48bc-8778-48c331745ed3" (UID: "564d0859-eeb3-48bc-8778-48c331745ed3"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 4 23:49:43.683068 kubelet[2680]: I0904 23:49:43.681291 2680 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/564d0859-eeb3-48bc-8778-48c331745ed3-cni-path" (OuterVolumeSpecName: "cni-path") pod "564d0859-eeb3-48bc-8778-48c331745ed3" (UID: "564d0859-eeb3-48bc-8778-48c331745ed3"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 4 23:49:43.683304 kubelet[2680]: I0904 23:49:43.683269 2680 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/564d0859-eeb3-48bc-8778-48c331745ed3-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "564d0859-eeb3-48bc-8778-48c331745ed3" (UID: "564d0859-eeb3-48bc-8778-48c331745ed3"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 4 23:49:43.683388 kubelet[2680]: I0904 23:49:43.683375 2680 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/564d0859-eeb3-48bc-8778-48c331745ed3-hostproc" (OuterVolumeSpecName: "hostproc") pod "564d0859-eeb3-48bc-8778-48c331745ed3" (UID: "564d0859-eeb3-48bc-8778-48c331745ed3"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 4 23:49:43.687417 kubelet[2680]: I0904 23:49:43.687202 2680 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/564d0859-eeb3-48bc-8778-48c331745ed3-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "564d0859-eeb3-48bc-8778-48c331745ed3" (UID: "564d0859-eeb3-48bc-8778-48c331745ed3"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 4 23:49:43.690652 kubelet[2680]: I0904 23:49:43.687666 2680 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/564d0859-eeb3-48bc-8778-48c331745ed3-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "564d0859-eeb3-48bc-8778-48c331745ed3" (UID: "564d0859-eeb3-48bc-8778-48c331745ed3"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 4 23:49:43.690652 kubelet[2680]: I0904 23:49:43.687684 2680 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/564d0859-eeb3-48bc-8778-48c331745ed3-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "564d0859-eeb3-48bc-8778-48c331745ed3" (UID: "564d0859-eeb3-48bc-8778-48c331745ed3"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 4 23:49:43.690652 kubelet[2680]: I0904 23:49:43.688545 2680 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/564d0859-eeb3-48bc-8778-48c331745ed3-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "564d0859-eeb3-48bc-8778-48c331745ed3" (UID: "564d0859-eeb3-48bc-8778-48c331745ed3"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Sep 4 23:49:43.691175 kubelet[2680]: I0904 23:49:43.691144 2680 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/564d0859-eeb3-48bc-8778-48c331745ed3-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "564d0859-eeb3-48bc-8778-48c331745ed3" (UID: "564d0859-eeb3-48bc-8778-48c331745ed3"). InnerVolumeSpecName "bpf-maps".
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 4 23:49:43.691576 kubelet[2680]: I0904 23:49:43.691267 2680 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/564d0859-eeb3-48bc-8778-48c331745ed3-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "564d0859-eeb3-48bc-8778-48c331745ed3" (UID: "564d0859-eeb3-48bc-8778-48c331745ed3"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 4 23:49:43.692383 kubelet[2680]: I0904 23:49:43.692355 2680 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/564d0859-eeb3-48bc-8778-48c331745ed3-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "564d0859-eeb3-48bc-8778-48c331745ed3" (UID: "564d0859-eeb3-48bc-8778-48c331745ed3"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 4 23:49:43.692499 kubelet[2680]: I0904 23:49:43.692485 2680 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/564d0859-eeb3-48bc-8778-48c331745ed3-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "564d0859-eeb3-48bc-8778-48c331745ed3" (UID: "564d0859-eeb3-48bc-8778-48c331745ed3"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 4 23:49:43.692747 kubelet[2680]: I0904 23:49:43.692722 2680 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/564d0859-eeb3-48bc-8778-48c331745ed3-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "564d0859-eeb3-48bc-8778-48c331745ed3" (UID: "564d0859-eeb3-48bc-8778-48c331745ed3"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 4 23:49:43.695297 kubelet[2680]: I0904 23:49:43.695259 2680 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/564d0859-eeb3-48bc-8778-48c331745ed3-kube-api-access-wptg5" (OuterVolumeSpecName: "kube-api-access-wptg5") pod "564d0859-eeb3-48bc-8778-48c331745ed3" (UID: "564d0859-eeb3-48bc-8778-48c331745ed3"). InnerVolumeSpecName "kube-api-access-wptg5". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 4 23:49:43.781950 kubelet[2680]: I0904 23:49:43.781891 2680 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wptg5\" (UniqueName: \"kubernetes.io/projected/564d0859-eeb3-48bc-8778-48c331745ed3-kube-api-access-wptg5\") on node \"ci-4230-2-2-n-5840999b78\" DevicePath \"\"" Sep 4 23:49:43.782503 kubelet[2680]: I0904 23:49:43.782247 2680 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/564d0859-eeb3-48bc-8778-48c331745ed3-xtables-lock\") on node \"ci-4230-2-2-n-5840999b78\" DevicePath \"\"" Sep 4 23:49:43.782503 kubelet[2680]: I0904 23:49:43.782282 2680 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/564d0859-eeb3-48bc-8778-48c331745ed3-clustermesh-secrets\") on node \"ci-4230-2-2-n-5840999b78\" DevicePath \"\"" Sep 4 23:49:43.782503 kubelet[2680]: I0904 23:49:43.782301 2680 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/564d0859-eeb3-48bc-8778-48c331745ed3-etc-cni-netd\") on node \"ci-4230-2-2-n-5840999b78\" DevicePath \"\"" Sep 4 23:49:43.782503 kubelet[2680]: I0904 23:49:43.782323 2680 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/564d0859-eeb3-48bc-8778-48c331745ed3-host-proc-sys-kernel\") on node \"ci-4230-2-2-n-5840999b78\" DevicePath \"\"" Sep 4 23:49:43.782503 
kubelet[2680]: I0904 23:49:43.782342 2680 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/564d0859-eeb3-48bc-8778-48c331745ed3-bpf-maps\") on node \"ci-4230-2-2-n-5840999b78\" DevicePath \"\"" Sep 4 23:49:43.782503 kubelet[2680]: I0904 23:49:43.782360 2680 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/564d0859-eeb3-48bc-8778-48c331745ed3-cni-path\") on node \"ci-4230-2-2-n-5840999b78\" DevicePath \"\"" Sep 4 23:49:43.782503 kubelet[2680]: I0904 23:49:43.782375 2680 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/564d0859-eeb3-48bc-8778-48c331745ed3-hubble-tls\") on node \"ci-4230-2-2-n-5840999b78\" DevicePath \"\"" Sep 4 23:49:43.782503 kubelet[2680]: I0904 23:49:43.782392 2680 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/564d0859-eeb3-48bc-8778-48c331745ed3-hostproc\") on node \"ci-4230-2-2-n-5840999b78\" DevicePath \"\"" Sep 4 23:49:43.782947 kubelet[2680]: I0904 23:49:43.782409 2680 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/564d0859-eeb3-48bc-8778-48c331745ed3-lib-modules\") on node \"ci-4230-2-2-n-5840999b78\" DevicePath \"\"" Sep 4 23:49:43.782947 kubelet[2680]: I0904 23:49:43.782424 2680 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/564d0859-eeb3-48bc-8778-48c331745ed3-cilium-cgroup\") on node \"ci-4230-2-2-n-5840999b78\" DevicePath \"\"" Sep 4 23:49:43.782947 kubelet[2680]: I0904 23:49:43.782441 2680 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/564d0859-eeb3-48bc-8778-48c331745ed3-cilium-run\") on node \"ci-4230-2-2-n-5840999b78\" DevicePath \"\"" Sep 4 23:49:43.782947 kubelet[2680]: I0904 23:49:43.782457 2680 
reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/564d0859-eeb3-48bc-8778-48c331745ed3-cilium-config-path\") on node \"ci-4230-2-2-n-5840999b78\" DevicePath \"\"" Sep 4 23:49:43.782947 kubelet[2680]: I0904 23:49:43.782473 2680 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/564d0859-eeb3-48bc-8778-48c331745ed3-host-proc-sys-net\") on node \"ci-4230-2-2-n-5840999b78\" DevicePath \"\"" Sep 4 23:49:44.280611 kubelet[2680]: I0904 23:49:44.280216 2680 scope.go:117] "RemoveContainer" containerID="08e8e930cbe3b1a4070287eaddf008c6ffcecab842bed600474576a892214141" Sep 4 23:49:44.285648 containerd[1509]: time="2025-09-04T23:49:44.284647770Z" level=info msg="RemoveContainer for \"08e8e930cbe3b1a4070287eaddf008c6ffcecab842bed600474576a892214141\"" Sep 4 23:49:44.289081 systemd[1]: Removed slice kubepods-besteffort-podcb739dfb_81e5_45f9_8050_43caa1416ac8.slice - libcontainer container kubepods-besteffort-podcb739dfb_81e5_45f9_8050_43caa1416ac8.slice. 
Sep 4 23:49:44.295229 containerd[1509]: time="2025-09-04T23:49:44.295086013Z" level=info msg="RemoveContainer for \"08e8e930cbe3b1a4070287eaddf008c6ffcecab842bed600474576a892214141\" returns successfully" Sep 4 23:49:44.298201 kubelet[2680]: I0904 23:49:44.297914 2680 scope.go:117] "RemoveContainer" containerID="08e8e930cbe3b1a4070287eaddf008c6ffcecab842bed600474576a892214141" Sep 4 23:49:44.301711 containerd[1509]: time="2025-09-04T23:49:44.301194278Z" level=error msg="ContainerStatus for \"08e8e930cbe3b1a4070287eaddf008c6ffcecab842bed600474576a892214141\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"08e8e930cbe3b1a4070287eaddf008c6ffcecab842bed600474576a892214141\": not found" Sep 4 23:49:44.303542 kubelet[2680]: E0904 23:49:44.303501 2680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"08e8e930cbe3b1a4070287eaddf008c6ffcecab842bed600474576a892214141\": not found" containerID="08e8e930cbe3b1a4070287eaddf008c6ffcecab842bed600474576a892214141" Sep 4 23:49:44.303917 kubelet[2680]: I0904 23:49:44.303661 2680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"08e8e930cbe3b1a4070287eaddf008c6ffcecab842bed600474576a892214141"} err="failed to get container status \"08e8e930cbe3b1a4070287eaddf008c6ffcecab842bed600474576a892214141\": rpc error: code = NotFound desc = an error occurred when try to find container \"08e8e930cbe3b1a4070287eaddf008c6ffcecab842bed600474576a892214141\": not found" Sep 4 23:49:44.304247 kubelet[2680]: I0904 23:49:44.303976 2680 scope.go:117] "RemoveContainer" containerID="82856abc69cf21f28b6ef74fa5a94a021cfef58f8e722347fdb00e4e123d7244" Sep 4 23:49:44.313886 containerd[1509]: time="2025-09-04T23:49:44.313273448Z" level=info msg="RemoveContainer for \"82856abc69cf21f28b6ef74fa5a94a021cfef58f8e722347fdb00e4e123d7244\"" Sep 4 23:49:44.322452 
containerd[1509]: time="2025-09-04T23:49:44.322411846Z" level=info msg="RemoveContainer for \"82856abc69cf21f28b6ef74fa5a94a021cfef58f8e722347fdb00e4e123d7244\" returns successfully" Sep 4 23:49:44.322715 systemd[1]: Removed slice kubepods-burstable-pod564d0859_eeb3_48bc_8778_48c331745ed3.slice - libcontainer container kubepods-burstable-pod564d0859_eeb3_48bc_8778_48c331745ed3.slice. Sep 4 23:49:44.322819 systemd[1]: kubepods-burstable-pod564d0859_eeb3_48bc_8778_48c331745ed3.slice: Consumed 7.775s CPU time, 128.1M memory peak, 128K read from disk, 12.9M written to disk. Sep 4 23:49:44.323382 kubelet[2680]: I0904 23:49:44.323361 2680 scope.go:117] "RemoveContainer" containerID="9dce7e03d606277e3dace0904c3c4e0b616c73d3921fc82b2326d6529555e4c2" Sep 4 23:49:44.325525 containerd[1509]: time="2025-09-04T23:49:44.325483739Z" level=info msg="RemoveContainer for \"9dce7e03d606277e3dace0904c3c4e0b616c73d3921fc82b2326d6529555e4c2\"" Sep 4 23:49:44.329589 containerd[1509]: time="2025-09-04T23:49:44.329535755Z" level=info msg="RemoveContainer for \"9dce7e03d606277e3dace0904c3c4e0b616c73d3921fc82b2326d6529555e4c2\" returns successfully" Sep 4 23:49:44.329995 kubelet[2680]: I0904 23:49:44.329883 2680 scope.go:117] "RemoveContainer" containerID="d7d3eee790e7a815355adc95a8452bbc9ee15cf0d8f55202cbc1b0605ee74ced" Sep 4 23:49:44.332553 containerd[1509]: time="2025-09-04T23:49:44.331405043Z" level=info msg="RemoveContainer for \"d7d3eee790e7a815355adc95a8452bbc9ee15cf0d8f55202cbc1b0605ee74ced\"" Sep 4 23:49:44.336574 containerd[1509]: time="2025-09-04T23:49:44.336533824Z" level=info msg="RemoveContainer for \"d7d3eee790e7a815355adc95a8452bbc9ee15cf0d8f55202cbc1b0605ee74ced\" returns successfully" Sep 4 23:49:44.336974 kubelet[2680]: I0904 23:49:44.336954 2680 scope.go:117] "RemoveContainer" containerID="ae762a34da0942730066cc4cc54dce0264279b91895862d985a77548fc87f1e3" Sep 4 23:49:44.338385 containerd[1509]: time="2025-09-04T23:49:44.338354872Z" level=info msg="RemoveContainer for 
\"ae762a34da0942730066cc4cc54dce0264279b91895862d985a77548fc87f1e3\"" Sep 4 23:49:44.346598 containerd[1509]: time="2025-09-04T23:49:44.346030344Z" level=info msg="RemoveContainer for \"ae762a34da0942730066cc4cc54dce0264279b91895862d985a77548fc87f1e3\" returns successfully" Sep 4 23:49:44.346829 kubelet[2680]: I0904 23:49:44.346445 2680 scope.go:117] "RemoveContainer" containerID="b19fff5d28e01eb94cca5217013ff184a2933ac0c4d8e3966aae0ab4d15cd87f" Sep 4 23:49:44.346089 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7ec083213e6b401c0e1b664f2dd3649da2dbb4a5783e5b183ea1c0ef6116eaae-rootfs.mount: Deactivated successfully. Sep 4 23:49:44.349744 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-21543f5fc52b269e28c8054c57d209d712aa5d305090cae815015fa488ad03ae-rootfs.mount: Deactivated successfully. Sep 4 23:49:44.349871 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-21543f5fc52b269e28c8054c57d209d712aa5d305090cae815015fa488ad03ae-shm.mount: Deactivated successfully. Sep 4 23:49:44.350461 containerd[1509]: time="2025-09-04T23:49:44.349746559Z" level=info msg="RemoveContainer for \"b19fff5d28e01eb94cca5217013ff184a2933ac0c4d8e3966aae0ab4d15cd87f\"" Sep 4 23:49:44.349929 systemd[1]: var-lib-kubelet-pods-cb739dfb\x2d81e5\x2d45f9\x2d8050\x2d43caa1416ac8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8cc7c.mount: Deactivated successfully. Sep 4 23:49:44.350020 systemd[1]: var-lib-kubelet-pods-564d0859\x2deeb3\x2d48bc\x2d8778\x2d48c331745ed3-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwptg5.mount: Deactivated successfully. Sep 4 23:49:44.350187 systemd[1]: var-lib-kubelet-pods-564d0859\x2deeb3\x2d48bc\x2d8778\x2d48c331745ed3-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 4 23:49:44.350254 systemd[1]: var-lib-kubelet-pods-564d0859\x2deeb3\x2d48bc\x2d8778\x2d48c331745ed3-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Sep 4 23:49:44.356939 containerd[1509]: time="2025-09-04T23:49:44.356871708Z" level=info msg="RemoveContainer for \"b19fff5d28e01eb94cca5217013ff184a2933ac0c4d8e3966aae0ab4d15cd87f\" returns successfully" Sep 4 23:49:44.357458 kubelet[2680]: I0904 23:49:44.357336 2680 scope.go:117] "RemoveContainer" containerID="82856abc69cf21f28b6ef74fa5a94a021cfef58f8e722347fdb00e4e123d7244" Sep 4 23:49:44.357852 containerd[1509]: time="2025-09-04T23:49:44.357794512Z" level=error msg="ContainerStatus for \"82856abc69cf21f28b6ef74fa5a94a021cfef58f8e722347fdb00e4e123d7244\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"82856abc69cf21f28b6ef74fa5a94a021cfef58f8e722347fdb00e4e123d7244\": not found" Sep 4 23:49:44.358181 kubelet[2680]: E0904 23:49:44.358045 2680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"82856abc69cf21f28b6ef74fa5a94a021cfef58f8e722347fdb00e4e123d7244\": not found" containerID="82856abc69cf21f28b6ef74fa5a94a021cfef58f8e722347fdb00e4e123d7244" Sep 4 23:49:44.358181 kubelet[2680]: I0904 23:49:44.358083 2680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"82856abc69cf21f28b6ef74fa5a94a021cfef58f8e722347fdb00e4e123d7244"} err="failed to get container status \"82856abc69cf21f28b6ef74fa5a94a021cfef58f8e722347fdb00e4e123d7244\": rpc error: code = NotFound desc = an error occurred when try to find container \"82856abc69cf21f28b6ef74fa5a94a021cfef58f8e722347fdb00e4e123d7244\": not found" Sep 4 23:49:44.358181 kubelet[2680]: I0904 23:49:44.358107 2680 scope.go:117] "RemoveContainer" containerID="9dce7e03d606277e3dace0904c3c4e0b616c73d3921fc82b2326d6529555e4c2" Sep 4 23:49:44.358983 containerd[1509]: time="2025-09-04T23:49:44.358576516Z" level=error msg="ContainerStatus for \"9dce7e03d606277e3dace0904c3c4e0b616c73d3921fc82b2326d6529555e4c2\" failed" error="rpc error: code = 
NotFound desc = an error occurred when try to find container \"9dce7e03d606277e3dace0904c3c4e0b616c73d3921fc82b2326d6529555e4c2\": not found" Sep 4 23:49:44.359085 kubelet[2680]: E0904 23:49:44.358804 2680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9dce7e03d606277e3dace0904c3c4e0b616c73d3921fc82b2326d6529555e4c2\": not found" containerID="9dce7e03d606277e3dace0904c3c4e0b616c73d3921fc82b2326d6529555e4c2" Sep 4 23:49:44.359085 kubelet[2680]: I0904 23:49:44.358829 2680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9dce7e03d606277e3dace0904c3c4e0b616c73d3921fc82b2326d6529555e4c2"} err="failed to get container status \"9dce7e03d606277e3dace0904c3c4e0b616c73d3921fc82b2326d6529555e4c2\": rpc error: code = NotFound desc = an error occurred when try to find container \"9dce7e03d606277e3dace0904c3c4e0b616c73d3921fc82b2326d6529555e4c2\": not found" Sep 4 23:49:44.359085 kubelet[2680]: I0904 23:49:44.358847 2680 scope.go:117] "RemoveContainer" containerID="d7d3eee790e7a815355adc95a8452bbc9ee15cf0d8f55202cbc1b0605ee74ced" Sep 4 23:49:44.360054 containerd[1509]: time="2025-09-04T23:49:44.359576800Z" level=error msg="ContainerStatus for \"d7d3eee790e7a815355adc95a8452bbc9ee15cf0d8f55202cbc1b0605ee74ced\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d7d3eee790e7a815355adc95a8452bbc9ee15cf0d8f55202cbc1b0605ee74ced\": not found" Sep 4 23:49:44.360138 kubelet[2680]: E0904 23:49:44.359828 2680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d7d3eee790e7a815355adc95a8452bbc9ee15cf0d8f55202cbc1b0605ee74ced\": not found" containerID="d7d3eee790e7a815355adc95a8452bbc9ee15cf0d8f55202cbc1b0605ee74ced" Sep 4 23:49:44.360138 kubelet[2680]: I0904 23:49:44.359850 2680 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"containerd","ID":"d7d3eee790e7a815355adc95a8452bbc9ee15cf0d8f55202cbc1b0605ee74ced"} err="failed to get container status \"d7d3eee790e7a815355adc95a8452bbc9ee15cf0d8f55202cbc1b0605ee74ced\": rpc error: code = NotFound desc = an error occurred when try to find container \"d7d3eee790e7a815355adc95a8452bbc9ee15cf0d8f55202cbc1b0605ee74ced\": not found" Sep 4 23:49:44.360138 kubelet[2680]: I0904 23:49:44.359866 2680 scope.go:117] "RemoveContainer" containerID="ae762a34da0942730066cc4cc54dce0264279b91895862d985a77548fc87f1e3" Sep 4 23:49:44.360793 containerd[1509]: time="2025-09-04T23:49:44.360363683Z" level=error msg="ContainerStatus for \"ae762a34da0942730066cc4cc54dce0264279b91895862d985a77548fc87f1e3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ae762a34da0942730066cc4cc54dce0264279b91895862d985a77548fc87f1e3\": not found" Sep 4 23:49:44.360845 kubelet[2680]: E0904 23:49:44.360633 2680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ae762a34da0942730066cc4cc54dce0264279b91895862d985a77548fc87f1e3\": not found" containerID="ae762a34da0942730066cc4cc54dce0264279b91895862d985a77548fc87f1e3" Sep 4 23:49:44.360845 kubelet[2680]: I0904 23:49:44.360653 2680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ae762a34da0942730066cc4cc54dce0264279b91895862d985a77548fc87f1e3"} err="failed to get container status \"ae762a34da0942730066cc4cc54dce0264279b91895862d985a77548fc87f1e3\": rpc error: code = NotFound desc = an error occurred when try to find container \"ae762a34da0942730066cc4cc54dce0264279b91895862d985a77548fc87f1e3\": not found" Sep 4 23:49:44.360845 kubelet[2680]: I0904 23:49:44.360669 2680 scope.go:117] "RemoveContainer" containerID="b19fff5d28e01eb94cca5217013ff184a2933ac0c4d8e3966aae0ab4d15cd87f" Sep 4 23:49:44.361597 
containerd[1509]: time="2025-09-04T23:49:44.361346927Z" level=error msg="ContainerStatus for \"b19fff5d28e01eb94cca5217013ff184a2933ac0c4d8e3966aae0ab4d15cd87f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b19fff5d28e01eb94cca5217013ff184a2933ac0c4d8e3966aae0ab4d15cd87f\": not found" Sep 4 23:49:44.361661 kubelet[2680]: E0904 23:49:44.361541 2680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b19fff5d28e01eb94cca5217013ff184a2933ac0c4d8e3966aae0ab4d15cd87f\": not found" containerID="b19fff5d28e01eb94cca5217013ff184a2933ac0c4d8e3966aae0ab4d15cd87f" Sep 4 23:49:44.361661 kubelet[2680]: I0904 23:49:44.361562 2680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b19fff5d28e01eb94cca5217013ff184a2933ac0c4d8e3966aae0ab4d15cd87f"} err="failed to get container status \"b19fff5d28e01eb94cca5217013ff184a2933ac0c4d8e3966aae0ab4d15cd87f\": rpc error: code = NotFound desc = an error occurred when try to find container \"b19fff5d28e01eb94cca5217013ff184a2933ac0c4d8e3966aae0ab4d15cd87f\": not found" Sep 4 23:49:44.610145 kubelet[2680]: I0904 23:49:44.609569 2680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="564d0859-eeb3-48bc-8778-48c331745ed3" path="/var/lib/kubelet/pods/564d0859-eeb3-48bc-8778-48c331745ed3/volumes" Sep 4 23:49:44.615210 kubelet[2680]: I0904 23:49:44.613279 2680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb739dfb-81e5-45f9-8050-43caa1416ac8" path="/var/lib/kubelet/pods/cb739dfb-81e5-45f9-8050-43caa1416ac8/volumes" Sep 4 23:49:44.786607 kubelet[2680]: E0904 23:49:44.786143 2680 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 4 23:49:45.421470 sshd[4274]: Connection closed by 
139.178.68.195 port 51498 Sep 4 23:49:45.422684 sshd-session[4272]: pam_unix(sshd:session): session closed for user core Sep 4 23:49:45.429986 systemd[1]: sshd@19-88.198.151.158:22-139.178.68.195:51498.service: Deactivated successfully. Sep 4 23:49:45.434175 systemd[1]: session-20.scope: Deactivated successfully. Sep 4 23:49:45.434612 systemd[1]: session-20.scope: Consumed 1.529s CPU time, 23.5M memory peak. Sep 4 23:49:45.438419 systemd-logind[1483]: Session 20 logged out. Waiting for processes to exit. Sep 4 23:49:45.440250 systemd-logind[1483]: Removed session 20. Sep 4 23:49:45.602873 systemd[1]: Started sshd@20-88.198.151.158:22-139.178.68.195:38144.service - OpenSSH per-connection server daemon (139.178.68.195:38144). Sep 4 23:49:46.612776 sshd[4434]: Accepted publickey for core from 139.178.68.195 port 38144 ssh2: RSA SHA256:mO6YHl7qCkvXk9I2QzSjJf9VN7vVqy+ZQWo85qMF4pQ Sep 4 23:49:46.616427 sshd-session[4434]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:49:46.627256 systemd-logind[1483]: New session 21 of user core. Sep 4 23:49:46.636926 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 4 23:49:48.304646 kubelet[2680]: I0904 23:49:48.303547 2680 memory_manager.go:355] "RemoveStaleState removing state" podUID="cb739dfb-81e5-45f9-8050-43caa1416ac8" containerName="cilium-operator" Sep 4 23:49:48.304646 kubelet[2680]: I0904 23:49:48.303616 2680 memory_manager.go:355] "RemoveStaleState removing state" podUID="564d0859-eeb3-48bc-8778-48c331745ed3" containerName="cilium-agent" Sep 4 23:49:48.315798 systemd[1]: Created slice kubepods-burstable-pod6585d7c9_4071_4841_bc1c_c6f35f377afb.slice - libcontainer container kubepods-burstable-pod6585d7c9_4071_4841_bc1c_c6f35f377afb.slice. 
Sep 4 23:49:48.416679 kubelet[2680]: I0904 23:49:48.416392 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6585d7c9-4071-4841-bc1c-c6f35f377afb-bpf-maps\") pod \"cilium-xdqnm\" (UID: \"6585d7c9-4071-4841-bc1c-c6f35f377afb\") " pod="kube-system/cilium-xdqnm" Sep 4 23:49:48.416679 kubelet[2680]: I0904 23:49:48.416453 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6585d7c9-4071-4841-bc1c-c6f35f377afb-cni-path\") pod \"cilium-xdqnm\" (UID: \"6585d7c9-4071-4841-bc1c-c6f35f377afb\") " pod="kube-system/cilium-xdqnm" Sep 4 23:49:48.416679 kubelet[2680]: I0904 23:49:48.416478 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6585d7c9-4071-4841-bc1c-c6f35f377afb-lib-modules\") pod \"cilium-xdqnm\" (UID: \"6585d7c9-4071-4841-bc1c-c6f35f377afb\") " pod="kube-system/cilium-xdqnm" Sep 4 23:49:48.416679 kubelet[2680]: I0904 23:49:48.416517 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9crpb\" (UniqueName: \"kubernetes.io/projected/6585d7c9-4071-4841-bc1c-c6f35f377afb-kube-api-access-9crpb\") pod \"cilium-xdqnm\" (UID: \"6585d7c9-4071-4841-bc1c-c6f35f377afb\") " pod="kube-system/cilium-xdqnm" Sep 4 23:49:48.416679 kubelet[2680]: I0904 23:49:48.416547 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6585d7c9-4071-4841-bc1c-c6f35f377afb-xtables-lock\") pod \"cilium-xdqnm\" (UID: \"6585d7c9-4071-4841-bc1c-c6f35f377afb\") " pod="kube-system/cilium-xdqnm" Sep 4 23:49:48.416679 kubelet[2680]: I0904 23:49:48.416572 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6585d7c9-4071-4841-bc1c-c6f35f377afb-hostproc\") pod \"cilium-xdqnm\" (UID: \"6585d7c9-4071-4841-bc1c-c6f35f377afb\") " pod="kube-system/cilium-xdqnm" Sep 4 23:49:48.417223 kubelet[2680]: I0904 23:49:48.416639 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6585d7c9-4071-4841-bc1c-c6f35f377afb-cilium-config-path\") pod \"cilium-xdqnm\" (UID: \"6585d7c9-4071-4841-bc1c-c6f35f377afb\") " pod="kube-system/cilium-xdqnm" Sep 4 23:49:48.417223 kubelet[2680]: I0904 23:49:48.416666 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6585d7c9-4071-4841-bc1c-c6f35f377afb-hubble-tls\") pod \"cilium-xdqnm\" (UID: \"6585d7c9-4071-4841-bc1c-c6f35f377afb\") " pod="kube-system/cilium-xdqnm" Sep 4 23:49:48.417223 kubelet[2680]: I0904 23:49:48.416689 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6585d7c9-4071-4841-bc1c-c6f35f377afb-clustermesh-secrets\") pod \"cilium-xdqnm\" (UID: \"6585d7c9-4071-4841-bc1c-c6f35f377afb\") " pod="kube-system/cilium-xdqnm" Sep 4 23:49:48.417223 kubelet[2680]: I0904 23:49:48.416710 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6585d7c9-4071-4841-bc1c-c6f35f377afb-etc-cni-netd\") pod \"cilium-xdqnm\" (UID: \"6585d7c9-4071-4841-bc1c-c6f35f377afb\") " pod="kube-system/cilium-xdqnm" Sep 4 23:49:48.417223 kubelet[2680]: I0904 23:49:48.416736 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6585d7c9-4071-4841-bc1c-c6f35f377afb-cilium-run\") pod \"cilium-xdqnm\" 
(UID: \"6585d7c9-4071-4841-bc1c-c6f35f377afb\") " pod="kube-system/cilium-xdqnm" Sep 4 23:49:48.417223 kubelet[2680]: I0904 23:49:48.416758 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6585d7c9-4071-4841-bc1c-c6f35f377afb-cilium-cgroup\") pod \"cilium-xdqnm\" (UID: \"6585d7c9-4071-4841-bc1c-c6f35f377afb\") " pod="kube-system/cilium-xdqnm" Sep 4 23:49:48.417529 kubelet[2680]: I0904 23:49:48.416780 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/6585d7c9-4071-4841-bc1c-c6f35f377afb-cilium-ipsec-secrets\") pod \"cilium-xdqnm\" (UID: \"6585d7c9-4071-4841-bc1c-c6f35f377afb\") " pod="kube-system/cilium-xdqnm" Sep 4 23:49:48.417529 kubelet[2680]: I0904 23:49:48.416803 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6585d7c9-4071-4841-bc1c-c6f35f377afb-host-proc-sys-net\") pod \"cilium-xdqnm\" (UID: \"6585d7c9-4071-4841-bc1c-c6f35f377afb\") " pod="kube-system/cilium-xdqnm" Sep 4 23:49:48.417529 kubelet[2680]: I0904 23:49:48.416870 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6585d7c9-4071-4841-bc1c-c6f35f377afb-host-proc-sys-kernel\") pod \"cilium-xdqnm\" (UID: \"6585d7c9-4071-4841-bc1c-c6f35f377afb\") " pod="kube-system/cilium-xdqnm" Sep 4 23:49:48.505434 sshd[4436]: Connection closed by 139.178.68.195 port 38144 Sep 4 23:49:48.506072 sshd-session[4434]: pam_unix(sshd:session): session closed for user core Sep 4 23:49:48.511851 systemd-logind[1483]: Session 21 logged out. Waiting for processes to exit. Sep 4 23:49:48.512727 systemd[1]: sshd@20-88.198.151.158:22-139.178.68.195:38144.service: Deactivated successfully. 
Sep 4 23:49:48.515851 systemd[1]: session-21.scope: Deactivated successfully. Sep 4 23:49:48.516413 systemd[1]: session-21.scope: Consumed 1.079s CPU time, 24.3M memory peak. Sep 4 23:49:48.518208 systemd-logind[1483]: Removed session 21. Sep 4 23:49:48.626347 containerd[1509]: time="2025-09-04T23:49:48.626154033Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xdqnm,Uid:6585d7c9-4071-4841-bc1c-c6f35f377afb,Namespace:kube-system,Attempt:0,}" Sep 4 23:49:48.653549 containerd[1509]: time="2025-09-04T23:49:48.652773422Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 23:49:48.653549 containerd[1509]: time="2025-09-04T23:49:48.652861223Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 23:49:48.653549 containerd[1509]: time="2025-09-04T23:49:48.652897063Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:49:48.653549 containerd[1509]: time="2025-09-04T23:49:48.652999663Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:49:48.677861 systemd[1]: Started cri-containerd-893d3c45cc4b4aabae072585ed9f0967ce02cff04d07dd63e98ec0fe1e74a692.scope - libcontainer container 893d3c45cc4b4aabae072585ed9f0967ce02cff04d07dd63e98ec0fe1e74a692. Sep 4 23:49:48.708116 systemd[1]: Started sshd@21-88.198.151.158:22-139.178.68.195:38156.service - OpenSSH per-connection server daemon (139.178.68.195:38156). 
Sep 4 23:49:48.737387 containerd[1509]: time="2025-09-04T23:49:48.737347330Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xdqnm,Uid:6585d7c9-4071-4841-bc1c-c6f35f377afb,Namespace:kube-system,Attempt:0,} returns sandbox id \"893d3c45cc4b4aabae072585ed9f0967ce02cff04d07dd63e98ec0fe1e74a692\""
Sep 4 23:49:48.741017 containerd[1509]: time="2025-09-04T23:49:48.740869465Z" level=info msg="CreateContainer within sandbox \"893d3c45cc4b4aabae072585ed9f0967ce02cff04d07dd63e98ec0fe1e74a692\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Sep 4 23:49:48.752544 containerd[1509]: time="2025-09-04T23:49:48.752468632Z" level=info msg="CreateContainer within sandbox \"893d3c45cc4b4aabae072585ed9f0967ce02cff04d07dd63e98ec0fe1e74a692\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"384bd2a433989aaa5dba218dcd7a45d0752815785c3e881ce8fcda2b648fb6c8\""
Sep 4 23:49:48.753147 containerd[1509]: time="2025-09-04T23:49:48.753032075Z" level=info msg="StartContainer for \"384bd2a433989aaa5dba218dcd7a45d0752815785c3e881ce8fcda2b648fb6c8\""
Sep 4 23:49:48.790833 systemd[1]: Started cri-containerd-384bd2a433989aaa5dba218dcd7a45d0752815785c3e881ce8fcda2b648fb6c8.scope - libcontainer container 384bd2a433989aaa5dba218dcd7a45d0752815785c3e881ce8fcda2b648fb6c8.
Sep 4 23:49:48.811373 kubelet[2680]: I0904 23:49:48.810974 2680 setters.go:602] "Node became not ready" node="ci-4230-2-2-n-5840999b78" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-04T23:49:48Z","lastTransitionTime":"2025-09-04T23:49:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Sep 4 23:49:48.824837 containerd[1509]: time="2025-09-04T23:49:48.824497608Z" level=info msg="StartContainer for \"384bd2a433989aaa5dba218dcd7a45d0752815785c3e881ce8fcda2b648fb6c8\" returns successfully"
Sep 4 23:49:48.845039 systemd[1]: cri-containerd-384bd2a433989aaa5dba218dcd7a45d0752815785c3e881ce8fcda2b648fb6c8.scope: Deactivated successfully.
Sep 4 23:49:48.882033 containerd[1509]: time="2025-09-04T23:49:48.881654644Z" level=info msg="shim disconnected" id=384bd2a433989aaa5dba218dcd7a45d0752815785c3e881ce8fcda2b648fb6c8 namespace=k8s.io
Sep 4 23:49:48.882033 containerd[1509]: time="2025-09-04T23:49:48.881727484Z" level=warning msg="cleaning up after shim disconnected" id=384bd2a433989aaa5dba218dcd7a45d0752815785c3e881ce8fcda2b648fb6c8 namespace=k8s.io
Sep 4 23:49:48.882033 containerd[1509]: time="2025-09-04T23:49:48.881740164Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 23:49:49.332637 containerd[1509]: time="2025-09-04T23:49:49.332566856Z" level=info msg="CreateContainer within sandbox \"893d3c45cc4b4aabae072585ed9f0967ce02cff04d07dd63e98ec0fe1e74a692\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Sep 4 23:49:49.353269 containerd[1509]: time="2025-09-04T23:49:49.353211181Z" level=info msg="CreateContainer within sandbox \"893d3c45cc4b4aabae072585ed9f0967ce02cff04d07dd63e98ec0fe1e74a692\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"3887ecb4c77041c166e8f53530b62fbd46de5ce5b20659b3b58297153464237d\""
Sep 4 23:49:49.356013 containerd[1509]: time="2025-09-04T23:49:49.355953152Z" level=info msg="StartContainer for \"3887ecb4c77041c166e8f53530b62fbd46de5ce5b20659b3b58297153464237d\""
Sep 4 23:49:49.399974 systemd[1]: Started cri-containerd-3887ecb4c77041c166e8f53530b62fbd46de5ce5b20659b3b58297153464237d.scope - libcontainer container 3887ecb4c77041c166e8f53530b62fbd46de5ce5b20659b3b58297153464237d.
Sep 4 23:49:49.437679 containerd[1509]: time="2025-09-04T23:49:49.437566048Z" level=info msg="StartContainer for \"3887ecb4c77041c166e8f53530b62fbd46de5ce5b20659b3b58297153464237d\" returns successfully"
Sep 4 23:49:49.445819 systemd[1]: cri-containerd-3887ecb4c77041c166e8f53530b62fbd46de5ce5b20659b3b58297153464237d.scope: Deactivated successfully.
Sep 4 23:49:49.473959 containerd[1509]: time="2025-09-04T23:49:49.473570316Z" level=info msg="shim disconnected" id=3887ecb4c77041c166e8f53530b62fbd46de5ce5b20659b3b58297153464237d namespace=k8s.io
Sep 4 23:49:49.473959 containerd[1509]: time="2025-09-04T23:49:49.473759916Z" level=warning msg="cleaning up after shim disconnected" id=3887ecb4c77041c166e8f53530b62fbd46de5ce5b20659b3b58297153464237d namespace=k8s.io
Sep 4 23:49:49.473959 containerd[1509]: time="2025-09-04T23:49:49.473789996Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 23:49:49.771026 sshd[4486]: Accepted publickey for core from 139.178.68.195 port 38156 ssh2: RSA SHA256:mO6YHl7qCkvXk9I2QzSjJf9VN7vVqy+ZQWo85qMF4pQ
Sep 4 23:49:49.773570 sshd-session[4486]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 23:49:49.778683 systemd-logind[1483]: New session 22 of user core.
Sep 4 23:49:49.785822 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 4 23:49:49.788857 kubelet[2680]: E0904 23:49:49.787859 2680 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Sep 4 23:49:50.337073 containerd[1509]: time="2025-09-04T23:49:50.336858220Z" level=info msg="CreateContainer within sandbox \"893d3c45cc4b4aabae072585ed9f0967ce02cff04d07dd63e98ec0fe1e74a692\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 4 23:49:50.357731 containerd[1509]: time="2025-09-04T23:49:50.356393980Z" level=info msg="CreateContainer within sandbox \"893d3c45cc4b4aabae072585ed9f0967ce02cff04d07dd63e98ec0fe1e74a692\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b3f75bd774f23151da6625a34e8d7e9aeb89c00f8461da93185954ff6c3d4e76\""
Sep 4 23:49:50.357731 containerd[1509]: time="2025-09-04T23:49:50.357147743Z" level=info msg="StartContainer for \"b3f75bd774f23151da6625a34e8d7e9aeb89c00f8461da93185954ff6c3d4e76\""
Sep 4 23:49:50.406856 systemd[1]: Started cri-containerd-b3f75bd774f23151da6625a34e8d7e9aeb89c00f8461da93185954ff6c3d4e76.scope - libcontainer container b3f75bd774f23151da6625a34e8d7e9aeb89c00f8461da93185954ff6c3d4e76.
Sep 4 23:49:50.446610 containerd[1509]: time="2025-09-04T23:49:50.445216425Z" level=info msg="StartContainer for \"b3f75bd774f23151da6625a34e8d7e9aeb89c00f8461da93185954ff6c3d4e76\" returns successfully"
Sep 4 23:49:50.453460 systemd[1]: cri-containerd-b3f75bd774f23151da6625a34e8d7e9aeb89c00f8461da93185954ff6c3d4e76.scope: Deactivated successfully.
Sep 4 23:49:50.480607 containerd[1509]: time="2025-09-04T23:49:50.480370649Z" level=info msg="shim disconnected" id=b3f75bd774f23151da6625a34e8d7e9aeb89c00f8461da93185954ff6c3d4e76 namespace=k8s.io
Sep 4 23:49:50.480607 containerd[1509]: time="2025-09-04T23:49:50.480435489Z" level=warning msg="cleaning up after shim disconnected" id=b3f75bd774f23151da6625a34e8d7e9aeb89c00f8461da93185954ff6c3d4e76 namespace=k8s.io
Sep 4 23:49:50.480607 containerd[1509]: time="2025-09-04T23:49:50.480443529Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 23:49:50.495301 sshd[4620]: Connection closed by 139.178.68.195 port 38156
Sep 4 23:49:50.493771 sshd-session[4486]: pam_unix(sshd:session): session closed for user core
Sep 4 23:49:50.498435 systemd-logind[1483]: Session 22 logged out. Waiting for processes to exit.
Sep 4 23:49:50.499552 systemd[1]: sshd@21-88.198.151.158:22-139.178.68.195:38156.service: Deactivated successfully.
Sep 4 23:49:50.501989 systemd[1]: session-22.scope: Deactivated successfully.
Sep 4 23:49:50.504937 systemd-logind[1483]: Removed session 22.
Sep 4 23:49:50.528173 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b3f75bd774f23151da6625a34e8d7e9aeb89c00f8461da93185954ff6c3d4e76-rootfs.mount: Deactivated successfully.
Sep 4 23:49:50.665425 systemd[1]: Started sshd@22-88.198.151.158:22-139.178.68.195:33396.service - OpenSSH per-connection server daemon (139.178.68.195:33396).
Sep 4 23:49:51.341951 containerd[1509]: time="2025-09-04T23:49:51.341890382Z" level=info msg="CreateContainer within sandbox \"893d3c45cc4b4aabae072585ed9f0967ce02cff04d07dd63e98ec0fe1e74a692\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 4 23:49:51.361519 containerd[1509]: time="2025-09-04T23:49:51.361312622Z" level=info msg="CreateContainer within sandbox \"893d3c45cc4b4aabae072585ed9f0967ce02cff04d07dd63e98ec0fe1e74a692\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"09137f67808f5bf5eef8f5d3fec48c59a1ae872b9e8c4a6fc6348d326bd8cfe7\""
Sep 4 23:49:51.363599 containerd[1509]: time="2025-09-04T23:49:51.363486671Z" level=info msg="StartContainer for \"09137f67808f5bf5eef8f5d3fec48c59a1ae872b9e8c4a6fc6348d326bd8cfe7\""
Sep 4 23:49:51.406948 systemd[1]: Started cri-containerd-09137f67808f5bf5eef8f5d3fec48c59a1ae872b9e8c4a6fc6348d326bd8cfe7.scope - libcontainer container 09137f67808f5bf5eef8f5d3fec48c59a1ae872b9e8c4a6fc6348d326bd8cfe7.
Sep 4 23:49:51.434570 systemd[1]: cri-containerd-09137f67808f5bf5eef8f5d3fec48c59a1ae872b9e8c4a6fc6348d326bd8cfe7.scope: Deactivated successfully.
Sep 4 23:49:51.440127 containerd[1509]: time="2025-09-04T23:49:51.439404822Z" level=info msg="StartContainer for \"09137f67808f5bf5eef8f5d3fec48c59a1ae872b9e8c4a6fc6348d326bd8cfe7\" returns successfully"
Sep 4 23:49:51.459850 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-09137f67808f5bf5eef8f5d3fec48c59a1ae872b9e8c4a6fc6348d326bd8cfe7-rootfs.mount: Deactivated successfully.
Sep 4 23:49:51.466150 containerd[1509]: time="2025-09-04T23:49:51.466051211Z" level=info msg="shim disconnected" id=09137f67808f5bf5eef8f5d3fec48c59a1ae872b9e8c4a6fc6348d326bd8cfe7 namespace=k8s.io
Sep 4 23:49:51.466150 containerd[1509]: time="2025-09-04T23:49:51.466135771Z" level=warning msg="cleaning up after shim disconnected" id=09137f67808f5bf5eef8f5d3fec48c59a1ae872b9e8c4a6fc6348d326bd8cfe7 namespace=k8s.io
Sep 4 23:49:51.466150 containerd[1509]: time="2025-09-04T23:49:51.466147851Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 23:49:51.478115 containerd[1509]: time="2025-09-04T23:49:51.478070180Z" level=warning msg="cleanup warnings time=\"2025-09-04T23:49:51Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Sep 4 23:49:51.667235 sshd[4685]: Accepted publickey for core from 139.178.68.195 port 33396 ssh2: RSA SHA256:mO6YHl7qCkvXk9I2QzSjJf9VN7vVqy+ZQWo85qMF4pQ
Sep 4 23:49:51.669874 sshd-session[4685]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 23:49:51.682174 systemd-logind[1483]: New session 23 of user core.
Sep 4 23:49:51.689947 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 4 23:49:52.350951 containerd[1509]: time="2025-09-04T23:49:52.350183313Z" level=info msg="CreateContainer within sandbox \"893d3c45cc4b4aabae072585ed9f0967ce02cff04d07dd63e98ec0fe1e74a692\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 4 23:49:52.375570 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2743420555.mount: Deactivated successfully.
Sep 4 23:49:52.377946 containerd[1509]: time="2025-09-04T23:49:52.377517145Z" level=info msg="CreateContainer within sandbox \"893d3c45cc4b4aabae072585ed9f0967ce02cff04d07dd63e98ec0fe1e74a692\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"95e51d29a8a444782dc6973b4179a36c6266f0268586dd3b5c34b1d5e7f0fe47\""
Sep 4 23:49:52.380308 containerd[1509]: time="2025-09-04T23:49:52.378946110Z" level=info msg="StartContainer for \"95e51d29a8a444782dc6973b4179a36c6266f0268586dd3b5c34b1d5e7f0fe47\""
Sep 4 23:49:52.415866 systemd[1]: Started cri-containerd-95e51d29a8a444782dc6973b4179a36c6266f0268586dd3b5c34b1d5e7f0fe47.scope - libcontainer container 95e51d29a8a444782dc6973b4179a36c6266f0268586dd3b5c34b1d5e7f0fe47.
Sep 4 23:49:52.458219 containerd[1509]: time="2025-09-04T23:49:52.458168395Z" level=info msg="StartContainer for \"95e51d29a8a444782dc6973b4179a36c6266f0268586dd3b5c34b1d5e7f0fe47\" returns successfully"
Sep 4 23:49:52.771781 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Sep 4 23:49:55.893985 systemd-networkd[1394]: lxc_health: Link UP
Sep 4 23:49:55.896085 systemd-networkd[1394]: lxc_health: Gained carrier
Sep 4 23:49:56.574643 systemd[1]: run-containerd-runc-k8s.io-95e51d29a8a444782dc6973b4179a36c6266f0268586dd3b5c34b1d5e7f0fe47-runc.8uFJgc.mount: Deactivated successfully.
Sep 4 23:49:56.661322 kubelet[2680]: I0904 23:49:56.661250 2680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-xdqnm" podStartSLOduration=8.661229604 podStartE2EDuration="8.661229604s" podCreationTimestamp="2025-09-04 23:49:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 23:49:53.385823751 +0000 UTC m=+208.895604957" watchObservedRunningTime="2025-09-04 23:49:56.661229604 +0000 UTC m=+212.171010810"
Sep 4 23:49:57.141858 systemd-networkd[1394]: lxc_health: Gained IPv6LL
Sep 4 23:50:03.147240 systemd[1]: run-containerd-runc-k8s.io-95e51d29a8a444782dc6973b4179a36c6266f0268586dd3b5c34b1d5e7f0fe47-runc.Uz6WAr.mount: Deactivated successfully.
Sep 4 23:50:03.376904 sshd[4742]: Connection closed by 139.178.68.195 port 33396
Sep 4 23:50:03.379993 sshd-session[4685]: pam_unix(sshd:session): session closed for user core
Sep 4 23:50:03.385264 systemd[1]: sshd@22-88.198.151.158:22-139.178.68.195:33396.service: Deactivated successfully.
Sep 4 23:50:03.387487 systemd[1]: session-23.scope: Deactivated successfully.
Sep 4 23:50:03.389539 systemd-logind[1483]: Session 23 logged out. Waiting for processes to exit.
Sep 4 23:50:03.391327 systemd-logind[1483]: Removed session 23.
Sep 4 23:50:18.702240 kubelet[2680]: E0904 23:50:18.701828 2680 controller.go:195] "Failed to update lease" err="Put \"https://88.198.151.158:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-2-2-n-5840999b78?timeout=10s\": context deadline exceeded"
Sep 4 23:50:18.778318 kubelet[2680]: E0904 23:50:18.778087 2680 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:56588->10.0.0.2:2379: read: connection timed out"
Sep 4 23:50:18.787124 systemd[1]: cri-containerd-8ad9882d0c4e983b2cf2b7c3f6f8e5e28467cf0aa7f4f67ec31b502d6884caac.scope: Deactivated successfully.
Sep 4 23:50:18.787465 systemd[1]: cri-containerd-8ad9882d0c4e983b2cf2b7c3f6f8e5e28467cf0aa7f4f67ec31b502d6884caac.scope: Consumed 4.426s CPU time, 20.3M memory peak.
Sep 4 23:50:18.810363 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8ad9882d0c4e983b2cf2b7c3f6f8e5e28467cf0aa7f4f67ec31b502d6884caac-rootfs.mount: Deactivated successfully.
Sep 4 23:50:18.820756 containerd[1509]: time="2025-09-04T23:50:18.820682157Z" level=info msg="shim disconnected" id=8ad9882d0c4e983b2cf2b7c3f6f8e5e28467cf0aa7f4f67ec31b502d6884caac namespace=k8s.io
Sep 4 23:50:18.820756 containerd[1509]: time="2025-09-04T23:50:18.820750478Z" level=warning msg="cleaning up after shim disconnected" id=8ad9882d0c4e983b2cf2b7c3f6f8e5e28467cf0aa7f4f67ec31b502d6884caac namespace=k8s.io
Sep 4 23:50:18.820756 containerd[1509]: time="2025-09-04T23:50:18.820762638Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 23:50:18.835094 containerd[1509]: time="2025-09-04T23:50:18.835039575Z" level=warning msg="cleanup warnings time=\"2025-09-04T23:50:18Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Sep 4 23:50:19.126723 systemd[1]: cri-containerd-85d0bc11df64fe49968372a55c86726b7dce201cb0d610a87a9728466c58bb04.scope: Deactivated successfully.
Sep 4 23:50:19.128624 systemd[1]: cri-containerd-85d0bc11df64fe49968372a55c86726b7dce201cb0d610a87a9728466c58bb04.scope: Consumed 5.160s CPU time, 60M memory peak.
Sep 4 23:50:19.151609 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-85d0bc11df64fe49968372a55c86726b7dce201cb0d610a87a9728466c58bb04-rootfs.mount: Deactivated successfully.
Sep 4 23:50:19.162103 containerd[1509]: time="2025-09-04T23:50:19.161971887Z" level=info msg="shim disconnected" id=85d0bc11df64fe49968372a55c86726b7dce201cb0d610a87a9728466c58bb04 namespace=k8s.io
Sep 4 23:50:19.162779 containerd[1509]: time="2025-09-04T23:50:19.162453689Z" level=warning msg="cleaning up after shim disconnected" id=85d0bc11df64fe49968372a55c86726b7dce201cb0d610a87a9728466c58bb04 namespace=k8s.io
Sep 4 23:50:19.162779 containerd[1509]: time="2025-09-04T23:50:19.162487569Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 23:50:19.423823 kubelet[2680]: I0904 23:50:19.423278 2680 scope.go:117] "RemoveContainer" containerID="8ad9882d0c4e983b2cf2b7c3f6f8e5e28467cf0aa7f4f67ec31b502d6884caac"
Sep 4 23:50:19.427722 kubelet[2680]: I0904 23:50:19.427686 2680 scope.go:117] "RemoveContainer" containerID="85d0bc11df64fe49968372a55c86726b7dce201cb0d610a87a9728466c58bb04"
Sep 4 23:50:19.428329 containerd[1509]: time="2025-09-04T23:50:19.428199475Z" level=info msg="CreateContainer within sandbox \"448a2a55c181f84bd1e2cff608383a227d7e96381af91ab388d28f1879188256\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Sep 4 23:50:19.431108 containerd[1509]: time="2025-09-04T23:50:19.431051527Z" level=info msg="CreateContainer within sandbox \"0fb450b13a2004d08d3a30be264ba05466ac0e2a43b005d465b21ce0b177c2ab\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Sep 4 23:50:19.448878 containerd[1509]: time="2025-09-04T23:50:19.448743158Z" level=info msg="CreateContainer within sandbox \"448a2a55c181f84bd1e2cff608383a227d7e96381af91ab388d28f1879188256\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"86ca6f34842ec249428b1eea26718ff6e77713c47e7708e331218c8c9916edd2\""
Sep 4 23:50:19.449656 containerd[1509]: time="2025-09-04T23:50:19.449485001Z" level=info msg="StartContainer for \"86ca6f34842ec249428b1eea26718ff6e77713c47e7708e331218c8c9916edd2\""
Sep 4 23:50:19.452822 containerd[1509]: time="2025-09-04T23:50:19.452364812Z" level=info msg="CreateContainer within sandbox \"0fb450b13a2004d08d3a30be264ba05466ac0e2a43b005d465b21ce0b177c2ab\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"2f30e0dbe24dfbf599dd2422653036564a09cb9b8726021b3aef9b35b568188a\""
Sep 4 23:50:19.454686 containerd[1509]: time="2025-09-04T23:50:19.453327936Z" level=info msg="StartContainer for \"2f30e0dbe24dfbf599dd2422653036564a09cb9b8726021b3aef9b35b568188a\""
Sep 4 23:50:19.482996 systemd[1]: Started cri-containerd-2f30e0dbe24dfbf599dd2422653036564a09cb9b8726021b3aef9b35b568188a.scope - libcontainer container 2f30e0dbe24dfbf599dd2422653036564a09cb9b8726021b3aef9b35b568188a.
Sep 4 23:50:19.490921 systemd[1]: Started cri-containerd-86ca6f34842ec249428b1eea26718ff6e77713c47e7708e331218c8c9916edd2.scope - libcontainer container 86ca6f34842ec249428b1eea26718ff6e77713c47e7708e331218c8c9916edd2.
Sep 4 23:50:19.550538 containerd[1509]: time="2025-09-04T23:50:19.550483846Z" level=info msg="StartContainer for \"2f30e0dbe24dfbf599dd2422653036564a09cb9b8726021b3aef9b35b568188a\" returns successfully"
Sep 4 23:50:19.550829 containerd[1509]: time="2025-09-04T23:50:19.550621046Z" level=info msg="StartContainer for \"86ca6f34842ec249428b1eea26718ff6e77713c47e7708e331218c8c9916edd2\" returns successfully"
Sep 4 23:50:22.808894 kubelet[2680]: E0904 23:50:22.808297 2680 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:56396->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4230-2-2-n-5840999b78.1862394e2308d04f kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4230-2-2-n-5840999b78,UID:5b89976acd755330372f010fae06f648,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4230-2-2-n-5840999b78,},FirstTimestamp:2025-09-04 23:50:12.385927247 +0000 UTC m=+227.895708493,LastTimestamp:2025-09-04 23:50:12.385927247 +0000 UTC m=+227.895708493,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230-2-2-n-5840999b78,}"