Sep 5 23:50:42.915003 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Sep 5 23:50:42.915041 kernel: Linux version 6.6.103-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Sep 5 22:30:47 -00 2025 Sep 5 23:50:42.915052 kernel: KASLR enabled Sep 5 23:50:42.915058 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II Sep 5 23:50:42.915065 kernel: efi: SMBIOS 3.0=0x139ed0000 MEMATTR=0x1390c1018 ACPI 2.0=0x136760018 RNG=0x13676e918 MEMRESERVE=0x136b43d18 Sep 5 23:50:42.915071 kernel: random: crng init done Sep 5 23:50:42.915078 kernel: ACPI: Early table checksum verification disabled Sep 5 23:50:42.915084 kernel: ACPI: RSDP 0x0000000136760018 000024 (v02 BOCHS ) Sep 5 23:50:42.915091 kernel: ACPI: XSDT 0x000000013676FE98 00006C (v01 BOCHS BXPC 00000001 01000013) Sep 5 23:50:42.915099 kernel: ACPI: FACP 0x000000013676FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Sep 5 23:50:42.915106 kernel: ACPI: DSDT 0x0000000136767518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 5 23:50:42.915113 kernel: ACPI: APIC 0x000000013676FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001) Sep 5 23:50:42.915119 kernel: ACPI: PPTT 0x000000013676FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 5 23:50:42.915126 kernel: ACPI: GTDT 0x000000013676D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 5 23:50:42.915134 kernel: ACPI: MCFG 0x000000013676FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 5 23:50:42.915142 kernel: ACPI: SPCR 0x000000013676E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 5 23:50:42.915149 kernel: ACPI: DBG2 0x000000013676E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Sep 5 23:50:42.915155 kernel: ACPI: IORT 0x000000013676E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Sep 5 23:50:42.915162 kernel: ACPI: BGRT 0x000000013676E798 000038 (v01 INTEL EDK2 00000002 01000013) Sep 5 23:50:42.915169 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600 Sep 5 23:50:42.915176 kernel: NUMA: Failed to initialise from firmware Sep 5 23:50:42.915182 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff] Sep 5 23:50:42.915204 kernel: NUMA: NODE_DATA [mem 0x13966f800-0x139674fff] Sep 5 23:50:42.915310 kernel: Zone ranges: Sep 5 23:50:42.915319 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff] Sep 5 23:50:42.915330 kernel: DMA32 empty Sep 5 23:50:42.915337 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff] Sep 5 23:50:42.915344 kernel: Movable zone start for each node Sep 5 23:50:42.915350 kernel: Early memory node ranges Sep 5 23:50:42.915357 kernel: node 0: [mem 0x0000000040000000-0x000000013676ffff] Sep 5 23:50:42.915364 kernel: node 0: [mem 0x0000000136770000-0x0000000136b3ffff] Sep 5 23:50:42.915371 kernel: node 0: [mem 0x0000000136b40000-0x0000000139e1ffff] Sep 5 23:50:42.915377 kernel: node 0: [mem 0x0000000139e20000-0x0000000139eaffff] Sep 5 23:50:42.915384 kernel: node 0: [mem 0x0000000139eb0000-0x0000000139ebffff] Sep 5 23:50:42.915390 kernel: node 0: [mem 0x0000000139ec0000-0x0000000139fdffff] Sep 5 23:50:42.915397 kernel: node 0: [mem 0x0000000139fe0000-0x0000000139ffffff] Sep 5 23:50:42.915404 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff] Sep 5 23:50:42.915412 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges Sep 5 23:50:42.915419 kernel: psci: probing for conduit method from ACPI. 
Sep 5 23:50:42.915426 kernel: psci: PSCIv1.1 detected in firmware. Sep 5 23:50:42.915436 kernel: psci: Using standard PSCI v0.2 function IDs Sep 5 23:50:42.915443 kernel: psci: Trusted OS migration not required Sep 5 23:50:42.915451 kernel: psci: SMC Calling Convention v1.1 Sep 5 23:50:42.915459 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Sep 5 23:50:42.915467 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976 Sep 5 23:50:42.915474 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096 Sep 5 23:50:42.915481 kernel: pcpu-alloc: [0] 0 [0] 1 Sep 5 23:50:42.915488 kernel: Detected PIPT I-cache on CPU0 Sep 5 23:50:42.915495 kernel: CPU features: detected: GIC system register CPU interface Sep 5 23:50:42.915503 kernel: CPU features: detected: Hardware dirty bit management Sep 5 23:50:42.915510 kernel: CPU features: detected: Spectre-v4 Sep 5 23:50:42.915517 kernel: CPU features: detected: Spectre-BHB Sep 5 23:50:42.915524 kernel: CPU features: kernel page table isolation forced ON by KASLR Sep 5 23:50:42.915533 kernel: CPU features: detected: Kernel page table isolation (KPTI) Sep 5 23:50:42.915541 kernel: CPU features: detected: ARM erratum 1418040 Sep 5 23:50:42.915549 kernel: CPU features: detected: SSBS not fully self-synchronizing Sep 5 23:50:42.915556 kernel: alternatives: applying boot alternatives Sep 5 23:50:42.915564 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=ac831c89fe9ee7829b7371dadfb138f8d0e2b31ae3a5a920e0eba13bbab016c3 Sep 5 23:50:42.915572 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Sep 5 23:50:42.915579 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Sep 5 23:50:42.915586 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 5 23:50:42.915594 kernel: Fallback order for Node 0: 0 Sep 5 23:50:42.915601 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1008000 Sep 5 23:50:42.915608 kernel: Policy zone: Normal Sep 5 23:50:42.915617 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 5 23:50:42.915624 kernel: software IO TLB: area num 2. Sep 5 23:50:42.915631 kernel: software IO TLB: mapped [mem 0x00000000fbfff000-0x00000000fffff000] (64MB) Sep 5 23:50:42.915639 kernel: Memory: 3882808K/4096000K available (10304K kernel code, 2186K rwdata, 8108K rodata, 39424K init, 897K bss, 213192K reserved, 0K cma-reserved) Sep 5 23:50:42.915647 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Sep 5 23:50:42.915654 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 5 23:50:42.915662 kernel: rcu: RCU event tracing is enabled. Sep 5 23:50:42.915669 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Sep 5 23:50:42.915677 kernel: Trampoline variant of Tasks RCU enabled. Sep 5 23:50:42.915684 kernel: Tracing variant of Tasks RCU enabled. Sep 5 23:50:42.915691 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Sep 5 23:50:42.915700 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Sep 5 23:50:42.915707 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Sep 5 23:50:42.915714 kernel: GICv3: 256 SPIs implemented Sep 5 23:50:42.915721 kernel: GICv3: 0 Extended SPIs implemented Sep 5 23:50:42.915729 kernel: Root IRQ handler: gic_handle_irq Sep 5 23:50:42.915736 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Sep 5 23:50:42.915743 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Sep 5 23:50:42.915750 kernel: ITS [mem 0x08080000-0x0809ffff] Sep 5 23:50:42.915758 kernel: ITS@0x0000000008080000: allocated 8192 Devices @1000c0000 (indirect, esz 8, psz 64K, shr 1) Sep 5 23:50:42.915765 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @1000d0000 (flat, esz 8, psz 64K, shr 1) Sep 5 23:50:42.915772 kernel: GICv3: using LPI property table @0x00000001000e0000 Sep 5 23:50:42.915780 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000001000f0000 Sep 5 23:50:42.915788 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Sep 5 23:50:42.915795 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 5 23:50:42.915803 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Sep 5 23:50:42.915810 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Sep 5 23:50:42.915817 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Sep 5 23:50:42.915825 kernel: Console: colour dummy device 80x25 Sep 5 23:50:42.915832 kernel: ACPI: Core revision 20230628 Sep 5 23:50:42.915840 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Sep 5 23:50:42.915847 kernel: pid_max: default: 32768 minimum: 301 Sep 5 23:50:42.915855 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Sep 5 23:50:42.915864 kernel: landlock: Up and running. Sep 5 23:50:42.915872 kernel: SELinux: Initializing. Sep 5 23:50:42.915879 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 5 23:50:42.915886 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 5 23:50:42.915894 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Sep 5 23:50:42.915901 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Sep 5 23:50:42.915909 kernel: rcu: Hierarchical SRCU implementation. Sep 5 23:50:42.915916 kernel: rcu: Max phase no-delay instances is 400. Sep 5 23:50:42.915924 kernel: Platform MSI: ITS@0x8080000 domain created Sep 5 23:50:42.915933 kernel: PCI/MSI: ITS@0x8080000 domain created Sep 5 23:50:42.915940 kernel: Remapping and enabling EFI services. Sep 5 23:50:42.915948 kernel: smp: Bringing up secondary CPUs ... Sep 5 23:50:42.915955 kernel: Detected PIPT I-cache on CPU1 Sep 5 23:50:42.915963 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Sep 5 23:50:42.915970 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100100000 Sep 5 23:50:42.915978 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 5 23:50:42.915985 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Sep 5 23:50:42.915993 kernel: smp: Brought up 1 node, 2 CPUs Sep 5 23:50:42.916000 kernel: SMP: Total of 2 processors activated. 
Sep 5 23:50:42.916009 kernel: CPU features: detected: 32-bit EL0 Support Sep 5 23:50:42.916036 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Sep 5 23:50:42.916052 kernel: CPU features: detected: Common not Private translations Sep 5 23:50:42.916061 kernel: CPU features: detected: CRC32 instructions Sep 5 23:50:42.916069 kernel: CPU features: detected: Enhanced Virtualization Traps Sep 5 23:50:42.916077 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Sep 5 23:50:42.916084 kernel: CPU features: detected: LSE atomic instructions Sep 5 23:50:42.916092 kernel: CPU features: detected: Privileged Access Never Sep 5 23:50:42.916100 kernel: CPU features: detected: RAS Extension Support Sep 5 23:50:42.916110 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Sep 5 23:50:42.916118 kernel: CPU: All CPU(s) started at EL1 Sep 5 23:50:42.916126 kernel: alternatives: applying system-wide alternatives Sep 5 23:50:42.916133 kernel: devtmpfs: initialized Sep 5 23:50:42.916141 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 5 23:50:42.916149 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Sep 5 23:50:42.916157 kernel: pinctrl core: initialized pinctrl subsystem Sep 5 23:50:42.916166 kernel: SMBIOS 3.0.0 present. Sep 5 23:50:42.916174 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017 Sep 5 23:50:42.916182 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 5 23:50:42.916208 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Sep 5 23:50:42.916216 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Sep 5 23:50:42.916224 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Sep 5 23:50:42.916232 kernel: audit: initializing netlink subsys (disabled) Sep 5 23:50:42.916240 kernel: audit: type=2000 audit(0.016:1): state=initialized audit_enabled=0 res=1 Sep 5 23:50:42.916247 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 5 23:50:42.916258 kernel: cpuidle: using governor menu Sep 5 23:50:42.916266 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Sep 5 23:50:42.916274 kernel: ASID allocator initialised with 32768 entries Sep 5 23:50:42.916282 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 5 23:50:42.916289 kernel: Serial: AMBA PL011 UART driver Sep 5 23:50:42.916297 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Sep 5 23:50:42.916305 kernel: Modules: 0 pages in range for non-PLT usage Sep 5 23:50:42.916313 kernel: Modules: 509008 pages in range for PLT usage Sep 5 23:50:42.916321 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Sep 5 23:50:42.916330 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Sep 5 23:50:42.916338 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Sep 5 23:50:42.916346 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Sep 5 23:50:42.916354 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Sep 5 23:50:42.916361 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Sep 5 23:50:42.916369 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Sep 5 23:50:42.916377 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Sep 5 23:50:42.916385 kernel: ACPI: Added _OSI(Module Device) Sep 5 23:50:42.916392 kernel: ACPI: Added _OSI(Processor Device) Sep 5 23:50:42.916402 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 5 23:50:42.916409 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Sep 5 23:50:42.916417 kernel: ACPI: Interpreter enabled Sep 5 23:50:42.916425 kernel: ACPI: Using GIC for interrupt routing Sep 5 23:50:42.916433 kernel: ACPI: MCFG table detected, 1 entries Sep 5 23:50:42.916440 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Sep 5 23:50:42.916448 kernel: printk: console [ttyAMA0] enabled Sep 5 23:50:42.916456 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Sep 5 23:50:42.916656 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Sep 5 23:50:42.916743 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Sep 5 23:50:42.916818 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Sep 5 23:50:42.916887 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Sep 5 23:50:42.916954 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Sep 5 23:50:42.916965 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Sep 5 23:50:42.916973 kernel: PCI host bridge to bus 0000:00 Sep 5 23:50:42.917101 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Sep 5 23:50:42.917177 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Sep 5 23:50:42.917280 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Sep 5 23:50:42.917352 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Sep 5 23:50:42.917440 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 Sep 5 23:50:42.917523 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000 Sep 5 23:50:42.917595 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x11289000-0x11289fff] Sep 5 23:50:42.917669 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000600000-0x8000603fff 64bit pref] Sep 5 23:50:42.917753 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 Sep 5 23:50:42.917823 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x11288000-0x11288fff] Sep 5 23:50:42.917911 kernel: pci 0000:00:02.1: [1b36:000c] 
type 01 class 0x060400 Sep 5 23:50:42.917982 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x11287000-0x11287fff] Sep 5 23:50:42.918080 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 Sep 5 23:50:42.918159 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x11286000-0x11286fff] Sep 5 23:50:42.918277 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 Sep 5 23:50:42.918353 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x11285000-0x11285fff] Sep 5 23:50:42.918434 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 Sep 5 23:50:42.918526 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x11284000-0x11284fff] Sep 5 23:50:42.918609 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 Sep 5 23:50:42.918684 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x11283000-0x11283fff] Sep 5 23:50:42.918763 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 Sep 5 23:50:42.918833 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x11282000-0x11282fff] Sep 5 23:50:42.918909 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 Sep 5 23:50:42.918978 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x11281000-0x11281fff] Sep 5 23:50:42.919076 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400 Sep 5 23:50:42.919149 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x11280000-0x11280fff] Sep 5 23:50:42.919249 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002 Sep 5 23:50:42.919323 kernel: pci 0000:00:04.0: reg 0x10: [io 0x0000-0x0007] Sep 5 23:50:42.919405 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000 Sep 5 23:50:42.919477 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x11000000-0x11000fff] Sep 5 23:50:42.919547 kernel: pci 0000:01:00.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] Sep 5 23:50:42.919619 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref] Sep 5 23:50:42.919703 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330 Sep 5 23:50:42.919778 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x10e00000-0x10e03fff 64bit] Sep 5 23:50:42.919857 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000 Sep 5 23:50:42.919930 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x10c00000-0x10c00fff] Sep 5 23:50:42.920001 kernel: pci 0000:03:00.0: reg 0x20: [mem 0x8000100000-0x8000103fff 64bit pref] Sep 5 23:50:42.920130 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00 Sep 5 23:50:42.920233 kernel: pci 0000:04:00.0: reg 0x20: [mem 0x8000200000-0x8000203fff 64bit pref] Sep 5 23:50:42.920329 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00 Sep 5 23:50:42.920402 kernel: pci 0000:05:00.0: reg 0x20: [mem 0x8000300000-0x8000303fff 64bit pref] Sep 5 23:50:42.920481 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000 Sep 5 23:50:42.920552 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x10600000-0x10600fff] Sep 5 23:50:42.920624 kernel: pci 0000:06:00.0: reg 0x20: [mem 0x8000400000-0x8000403fff 64bit pref] Sep 5 23:50:42.920704 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000 Sep 5 23:50:42.920780 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x10400000-0x10400fff] Sep 5 23:50:42.920851 kernel: pci 0000:07:00.0: reg 0x20: [mem 0x8000500000-0x8000503fff 64bit pref] Sep 5 23:50:42.920924 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref] Sep 5 23:50:42.921001 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 Sep 5 23:50:42.921085 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000 Sep 5 23:50:42.921157 kernel: pci 
0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000 Sep 5 23:50:42.921600 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 Sep 5 23:50:42.921682 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 Sep 5 23:50:42.921750 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000 Sep 5 23:50:42.921820 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Sep 5 23:50:42.921888 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000 Sep 5 23:50:42.921956 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000 Sep 5 23:50:42.922074 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Sep 5 23:50:42.922153 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000 Sep 5 23:50:42.922246 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 Sep 5 23:50:42.922323 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000 Sep 5 23:50:42.922402 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000 Sep 5 23:50:42.922471 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x000fffff] to [bus 05] add_size 200000 add_align 100000 Sep 5 23:50:42.922547 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Sep 5 23:50:42.922617 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000 Sep 5 23:50:42.922686 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000 Sep 5 23:50:42.922764 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Sep 5 23:50:42.922835 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000 Sep 5 23:50:42.922902 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000 Sep 5 23:50:42.922974 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Sep 5 23:50:42.923059 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000 Sep 5 23:50:42.923131 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000 Sep 5 23:50:42.923691 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Sep 5 23:50:42.923805 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000 Sep 5 23:50:42.923883 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000 Sep 5 23:50:42.923956 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x10000000-0x101fffff] Sep 5 23:50:42.924043 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x8000000000-0x80001fffff 64bit pref] Sep 5 23:50:42.924122 kernel: pci 0000:00:02.1: BAR 14: assigned [mem 0x10200000-0x103fffff] Sep 5 23:50:42.924234 kernel: pci 0000:00:02.1: BAR 15: 
assigned [mem 0x8000200000-0x80003fffff 64bit pref] Sep 5 23:50:42.924316 kernel: pci 0000:00:02.2: BAR 14: assigned [mem 0x10400000-0x105fffff] Sep 5 23:50:42.924385 kernel: pci 0000:00:02.2: BAR 15: assigned [mem 0x8000400000-0x80005fffff 64bit pref] Sep 5 23:50:42.924471 kernel: pci 0000:00:02.3: BAR 14: assigned [mem 0x10600000-0x107fffff] Sep 5 23:50:42.924540 kernel: pci 0000:00:02.3: BAR 15: assigned [mem 0x8000600000-0x80007fffff 64bit pref] Sep 5 23:50:42.924612 kernel: pci 0000:00:02.4: BAR 14: assigned [mem 0x10800000-0x109fffff] Sep 5 23:50:42.924681 kernel: pci 0000:00:02.4: BAR 15: assigned [mem 0x8000800000-0x80009fffff 64bit pref] Sep 5 23:50:42.924753 kernel: pci 0000:00:02.5: BAR 14: assigned [mem 0x10a00000-0x10bfffff] Sep 5 23:50:42.924821 kernel: pci 0000:00:02.5: BAR 15: assigned [mem 0x8000a00000-0x8000bfffff 64bit pref] Sep 5 23:50:42.924897 kernel: pci 0000:00:02.6: BAR 14: assigned [mem 0x10c00000-0x10dfffff] Sep 5 23:50:42.924968 kernel: pci 0000:00:02.6: BAR 15: assigned [mem 0x8000c00000-0x8000dfffff 64bit pref] Sep 5 23:50:42.925091 kernel: pci 0000:00:02.7: BAR 14: assigned [mem 0x10e00000-0x10ffffff] Sep 5 23:50:42.925168 kernel: pci 0000:00:02.7: BAR 15: assigned [mem 0x8000e00000-0x8000ffffff 64bit pref] Sep 5 23:50:42.925341 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x11000000-0x111fffff] Sep 5 23:50:42.925413 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x8001000000-0x80011fffff 64bit pref] Sep 5 23:50:42.925488 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8001200000-0x8001203fff 64bit pref] Sep 5 23:50:42.925562 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x11200000-0x11200fff] Sep 5 23:50:42.925631 kernel: pci 0000:00:02.0: BAR 0: assigned [mem 0x11201000-0x11201fff] Sep 5 23:50:42.925698 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff] Sep 5 23:50:42.925766 kernel: pci 0000:00:02.1: BAR 0: assigned [mem 0x11202000-0x11202fff] Sep 5 23:50:42.925833 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff] Sep 5 23:50:42.925901 kernel: pci 0000:00:02.2: BAR 0: assigned [mem 0x11203000-0x11203fff] Sep 5 23:50:42.925967 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff] Sep 5 23:50:42.926050 kernel: pci 0000:00:02.3: BAR 0: assigned [mem 0x11204000-0x11204fff] Sep 5 23:50:42.926126 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff] Sep 5 23:50:42.926248 kernel: pci 0000:00:02.4: BAR 0: assigned [mem 0x11205000-0x11205fff] Sep 5 23:50:42.926322 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff] Sep 5 23:50:42.926392 kernel: pci 0000:00:02.5: BAR 0: assigned [mem 0x11206000-0x11206fff] Sep 5 23:50:42.926460 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff] Sep 5 23:50:42.926528 kernel: pci 0000:00:02.6: BAR 0: assigned [mem 0x11207000-0x11207fff] Sep 5 23:50:42.926596 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff] Sep 5 23:50:42.926665 kernel: pci 0000:00:02.7: BAR 0: assigned [mem 0x11208000-0x11208fff] Sep 5 23:50:42.926736 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff] Sep 5 23:50:42.926804 kernel: pci 0000:00:03.0: BAR 0: assigned [mem 0x11209000-0x11209fff] Sep 5 23:50:42.926871 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x9000-0x9fff] Sep 5 23:50:42.926943 kernel: pci 0000:00:04.0: BAR 0: assigned [io 0xa000-0xa007] Sep 5 23:50:42.927062 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x10000000-0x1007ffff pref] Sep 5 23:50:42.927144 kernel: pci 0000:01:00.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] Sep 5 23:50:42.927252 
kernel: pci 0000:01:00.0: BAR 1: assigned [mem 0x10080000-0x10080fff] Sep 5 23:50:42.927323 kernel: pci 0000:00:02.0: PCI bridge to [bus 01] Sep 5 23:50:42.927400 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff] Sep 5 23:50:42.927467 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff] Sep 5 23:50:42.927536 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref] Sep 5 23:50:42.929393 kernel: pci 0000:02:00.0: BAR 0: assigned [mem 0x10200000-0x10203fff 64bit] Sep 5 23:50:42.929494 kernel: pci 0000:00:02.1: PCI bridge to [bus 02] Sep 5 23:50:42.929571 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff] Sep 5 23:50:42.929640 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff] Sep 5 23:50:42.929708 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref] Sep 5 23:50:42.929788 kernel: pci 0000:03:00.0: BAR 4: assigned [mem 0x8000400000-0x8000403fff 64bit pref] Sep 5 23:50:42.929864 kernel: pci 0000:03:00.0: BAR 1: assigned [mem 0x10400000-0x10400fff] Sep 5 23:50:42.929944 kernel: pci 0000:00:02.2: PCI bridge to [bus 03] Sep 5 23:50:42.930062 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff] Sep 5 23:50:42.930152 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff] Sep 5 23:50:42.930259 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref] Sep 5 23:50:42.930355 kernel: pci 0000:04:00.0: BAR 4: assigned [mem 0x8000600000-0x8000603fff 64bit pref] Sep 5 23:50:42.930429 kernel: pci 0000:00:02.3: PCI bridge to [bus 04] Sep 5 23:50:42.930498 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff] Sep 5 23:50:42.930565 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff] Sep 5 23:50:42.930635 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref] Sep 5 23:50:42.930714 kernel: pci 0000:05:00.0: BAR 4: assigned [mem 0x8000800000-0x8000803fff 64bit pref] Sep 5 23:50:42.930794 kernel: pci 0000:00:02.4: PCI bridge to [bus 05] Sep 5 23:50:42.930863 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff] Sep 5 23:50:42.930932 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff] Sep 5 23:50:42.931002 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref] Sep 5 23:50:42.931106 kernel: pci 0000:06:00.0: BAR 4: assigned [mem 0x8000a00000-0x8000a03fff 64bit pref] Sep 5 23:50:42.931181 kernel: pci 0000:06:00.0: BAR 1: assigned [mem 0x10a00000-0x10a00fff] Sep 5 23:50:42.933628 kernel: pci 0000:00:02.5: PCI bridge to [bus 06] Sep 5 23:50:42.933705 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff] Sep 5 23:50:42.933781 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff] Sep 5 23:50:42.933849 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref] Sep 5 23:50:42.933928 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x10c00000-0x10c7ffff pref] Sep 5 23:50:42.933998 kernel: pci 0000:07:00.0: BAR 4: assigned [mem 0x8000c00000-0x8000c03fff 64bit pref] Sep 5 23:50:42.934097 kernel: pci 0000:07:00.0: BAR 1: assigned [mem 0x10c80000-0x10c80fff] Sep 5 23:50:42.934172 kernel: pci 0000:00:02.6: PCI bridge to [bus 07] Sep 5 23:50:42.934269 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff] Sep 5 23:50:42.934341 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff] Sep 5 23:50:42.934416 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref] Sep 5 23:50:42.934494 kernel: pci 0000:00:02.7: 
PCI bridge to [bus 08] Sep 5 23:50:42.934563 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff] Sep 5 23:50:42.934632 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff] Sep 5 23:50:42.934701 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref] Sep 5 23:50:42.934774 kernel: pci 0000:00:03.0: PCI bridge to [bus 09] Sep 5 23:50:42.934843 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff] Sep 5 23:50:42.934909 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff] Sep 5 23:50:42.934980 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref] Sep 5 23:50:42.935117 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Sep 5 23:50:42.936229 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Sep 5 23:50:42.936340 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Sep 5 23:50:42.936421 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff] Sep 5 23:50:42.936488 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff] Sep 5 23:50:42.936551 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref] Sep 5 23:50:42.936641 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff] Sep 5 23:50:42.936710 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff] Sep 5 23:50:42.936774 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref] Sep 5 23:50:42.936847 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff] Sep 5 23:50:42.936912 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff] Sep 5 23:50:42.936975 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref] Sep 5 23:50:42.937071 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff] Sep 5 23:50:42.937142 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff] Sep 5 23:50:42.937229 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref] Sep 5 23:50:42.937326 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff] Sep 5 23:50:42.937403 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff] Sep 5 23:50:42.937476 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref] Sep 5 23:50:42.937553 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff] Sep 5 23:50:42.937621 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff] Sep 5 23:50:42.937685 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref] Sep 5 23:50:42.937757 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff] Sep 5 23:50:42.937822 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff] Sep 5 23:50:42.937904 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref] Sep 5 23:50:42.937982 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff] Sep 5 23:50:42.938100 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff] Sep 5 23:50:42.938176 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref] Sep 5 23:50:42.938292 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff] Sep 5 23:50:42.938362 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff] Sep 5 23:50:42.938427 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref] Sep 5 23:50:42.938442 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Sep 5 23:50:42.938451 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Sep 5 23:50:42.938460 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Sep 5 
23:50:42.938468 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Sep 5 23:50:42.938477 kernel: iommu: Default domain type: Translated Sep 5 23:50:42.938485 kernel: iommu: DMA domain TLB invalidation policy: strict mode Sep 5 23:50:42.938494 kernel: efivars: Registered efivars operations Sep 5 23:50:42.938503 kernel: vgaarb: loaded Sep 5 23:50:42.938511 kernel: clocksource: Switched to clocksource arch_sys_counter Sep 5 23:50:42.938521 kernel: VFS: Disk quotas dquot_6.6.0 Sep 5 23:50:42.938530 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 5 23:50:42.938539 kernel: pnp: PnP ACPI init Sep 5 23:50:42.938620 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Sep 5 23:50:42.938632 kernel: pnp: PnP ACPI: found 1 devices Sep 5 23:50:42.938641 kernel: NET: Registered PF_INET protocol family Sep 5 23:50:42.938649 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Sep 5 23:50:42.938659 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Sep 5 23:50:42.938669 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 5 23:50:42.938679 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Sep 5 23:50:42.938687 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Sep 5 23:50:42.938696 kernel: TCP: Hash tables configured (established 32768 bind 32768) Sep 5 23:50:42.938704 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 5 23:50:42.938712 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 5 23:50:42.938721 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 5 23:50:42.938804 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002) Sep 5 23:50:42.938816 kernel: PCI: CLS 0 bytes, default 64 Sep 5 23:50:42.938827 kernel: kvm [1]: HYP mode not available Sep 5 23:50:42.938836 kernel: Initialise system trusted keyrings Sep 5 23:50:42.938844 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Sep 5 23:50:42.938852 kernel: Key type asymmetric registered Sep 5 23:50:42.938861 kernel: Asymmetric key parser 'x509' registered Sep 5 23:50:42.938869 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Sep 5 23:50:42.938877 kernel: io scheduler mq-deadline registered Sep 5 23:50:42.938886 kernel: io scheduler kyber registered Sep 5 23:50:42.938894 kernel: io scheduler bfq registered Sep 5 23:50:42.938905 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Sep 5 23:50:42.938979 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50 Sep 5 23:50:42.939067 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50 Sep 5 23:50:42.939140 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 5 23:50:42.940384 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51 Sep 5 23:50:42.940479 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51 Sep 5 23:50:42.940556 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 5 23:50:42.940628 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 52 Sep 5 23:50:42.940698 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52 Sep 5 23:50:42.940765 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- 
LLActRep+ Sep 5 23:50:42.940840 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53 Sep 5 23:50:42.940908 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53 Sep 5 23:50:42.940991 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 5 23:50:42.941088 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54 Sep 5 23:50:42.941161 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54 Sep 5 23:50:42.942328 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 5 23:50:42.942419 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55 Sep 5 23:50:42.942492 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55 Sep 5 23:50:42.942568 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 5 23:50:42.942642 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56 Sep 5 23:50:42.942713 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56 Sep 5 23:50:42.942782 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 5 23:50:42.942858 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57 Sep 5 23:50:42.942928 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57 Sep 5 23:50:42.943001 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 5 23:50:42.943058 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38 Sep 5 23:50:42.943149 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58 Sep 5 23:50:42.943246 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58 Sep 5 23:50:42.943328 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 5 23:50:42.943340 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Sep 5 23:50:42.943349 kernel: ACPI: button: Power Button [PWRB] Sep 5 23:50:42.943359 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Sep 5 23:50:42.943445 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002) Sep 5 23:50:42.943526 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002) Sep 5 23:50:42.943539 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 5 23:50:42.943548 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Sep 5 23:50:42.943633 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001) Sep 5 23:50:42.943645 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A Sep 5 23:50:42.943654 kernel: thunder_xcv, ver 1.0 Sep 5 23:50:42.943662 kernel: thunder_bgx, ver 1.0 Sep 5 23:50:42.943674 kernel: nicpf, ver 1.0 Sep 5 23:50:42.943682 kernel: nicvf, ver 1.0 Sep 5 23:50:42.943771 kernel: rtc-efi rtc-efi.0: registered as rtc0 Sep 5 23:50:42.943838 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-05T23:50:42 UTC (1757116242) Sep 5 23:50:42.943850 kernel: hid: raw HID events driver (C) Jiri Kosina Sep 5 23:50:42.943859 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Sep 5 23:50:42.943867 kernel: watchdog: Delayed init of the lockup detector failed: -19 Sep 5 23:50:42.943876 kernel: watchdog: Hard watchdog permanently disabled Sep 5 23:50:42.943887 kernel: NET: 
Registered PF_INET6 protocol family Sep 5 23:50:42.943895 kernel: Segment Routing with IPv6 Sep 5 23:50:42.943904 kernel: In-situ OAM (IOAM) with IPv6 Sep 5 23:50:42.943913 kernel: NET: Registered PF_PACKET protocol family Sep 5 23:50:42.943921 kernel: Key type dns_resolver registered Sep 5 23:50:42.943930 kernel: registered taskstats version 1 Sep 5 23:50:42.943938 kernel: Loading compiled-in X.509 certificates Sep 5 23:50:42.943946 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.103-flatcar: 5b16e1dfa86dac534548885fd675b87757ff9e20' Sep 5 23:50:42.943954 kernel: Key type .fscrypt registered Sep 5 23:50:42.943965 kernel: Key type fscrypt-provisioning registered Sep 5 23:50:42.943973 kernel: ima: No TPM chip found, activating TPM-bypass! Sep 5 23:50:42.943981 kernel: ima: Allocated hash algorithm: sha1 Sep 5 23:50:42.943989 kernel: ima: No architecture policies found Sep 5 23:50:42.943998 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Sep 5 23:50:42.944006 kernel: clk: Disabling unused clocks Sep 5 23:50:42.944024 kernel: Freeing unused kernel memory: 39424K Sep 5 23:50:42.944033 kernel: Run /init as init process Sep 5 23:50:42.944041 kernel: with arguments: Sep 5 23:50:42.944052 kernel: /init Sep 5 23:50:42.944060 kernel: with environment: Sep 5 23:50:42.944068 kernel: HOME=/ Sep 5 23:50:42.944077 kernel: TERM=linux Sep 5 23:50:42.944084 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 5 23:50:42.944095 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 5 23:50:42.944106 systemd[1]: Detected virtualization kvm. Sep 5 23:50:42.944115 systemd[1]: Detected architecture arm64. Sep 5 23:50:42.944126 systemd[1]: Running in initrd. Sep 5 23:50:42.944134 systemd[1]: No hostname configured, using default hostname. Sep 5 23:50:42.944142 systemd[1]: Hostname set to . Sep 5 23:50:42.944151 systemd[1]: Initializing machine ID from VM UUID. Sep 5 23:50:42.944161 systemd[1]: Queued start job for default target initrd.target. Sep 5 23:50:42.944170 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 5 23:50:42.944179 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 5 23:50:42.945667 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 5 23:50:42.945688 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 5 23:50:42.945697 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 5 23:50:42.945706 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 5 23:50:42.945717 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 5 23:50:42.945726 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 5 23:50:42.945737 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 5 23:50:42.945746 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. 
Sep 5 23:50:42.945757 systemd[1]: Reached target paths.target - Path Units. Sep 5 23:50:42.945765 systemd[1]: Reached target slices.target - Slice Units. Sep 5 23:50:42.945774 systemd[1]: Reached target swap.target - Swaps. Sep 5 23:50:42.945783 systemd[1]: Reached target timers.target - Timer Units. Sep 5 23:50:42.945792 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 5 23:50:42.945805 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 5 23:50:42.945816 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 5 23:50:42.945826 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Sep 5 23:50:42.945837 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 5 23:50:42.945846 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 5 23:50:42.945855 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 5 23:50:42.945864 systemd[1]: Reached target sockets.target - Socket Units. Sep 5 23:50:42.945873 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 5 23:50:42.945882 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 5 23:50:42.945891 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 5 23:50:42.945900 systemd[1]: Starting systemd-fsck-usr.service... Sep 5 23:50:42.945908 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 5 23:50:42.945919 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 5 23:50:42.945961 systemd-journald[237]: Collecting audit messages is disabled. Sep 5 23:50:42.945984 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 5 23:50:42.945993 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 5 23:50:42.946004 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 5 23:50:42.946055 systemd[1]: Finished systemd-fsck-usr.service. Sep 5 23:50:42.946066 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 5 23:50:42.946077 systemd-journald[237]: Journal started Sep 5 23:50:42.946100 systemd-journald[237]: Runtime Journal (/run/log/journal/c7bc1e56bd654bf8857284669a22eeac) is 8.0M, max 76.6M, 68.6M free. Sep 5 23:50:42.931918 systemd-modules-load[238]: Inserted module 'overlay' Sep 5 23:50:42.949224 systemd[1]: Started systemd-journald.service - Journal Service. Sep 5 23:50:42.951763 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 5 23:50:42.955253 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 5 23:50:42.961343 kernel: Bridge firewalling registered Sep 5 23:50:42.961458 systemd-modules-load[238]: Inserted module 'br_netfilter' Sep 5 23:50:42.965493 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 5 23:50:42.967924 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 5 23:50:42.968992 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 5 23:50:42.969966 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 5 23:50:42.980477 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Sep 5 23:50:42.987287 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 5 23:50:43.003316 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 5 23:50:43.005568 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 5 23:50:43.008342 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 5 23:50:43.015440 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 5 23:50:43.021539 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 5 23:50:43.024781 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 5 23:50:43.039844 dracut-cmdline[270]: dracut-dracut-053 Sep 5 23:50:43.044459 dracut-cmdline[270]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=ac831c89fe9ee7829b7371dadfb138f8d0e2b31ae3a5a920e0eba13bbab016c3 Sep 5 23:50:43.063557 systemd-resolved[272]: Positive Trust Anchors: Sep 5 23:50:43.063573 systemd-resolved[272]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 5 23:50:43.063605 systemd-resolved[272]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 5 23:50:43.070126 systemd-resolved[272]: Defaulting to hostname 'linux'. Sep 5 23:50:43.072521 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 5 23:50:43.073270 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 5 23:50:43.151239 kernel: SCSI subsystem initialized Sep 5 23:50:43.156219 kernel: Loading iSCSI transport class v2.0-870. Sep 5 23:50:43.164563 kernel: iscsi: registered transport (tcp) Sep 5 23:50:43.178243 kernel: iscsi: registered transport (qla4xxx) Sep 5 23:50:43.178314 kernel: QLogic iSCSI HBA Driver Sep 5 23:50:43.222148 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 5 23:50:43.228396 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 5 23:50:43.251857 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Sep 5 23:50:43.251986 kernel: device-mapper: uevent: version 1.0.3 Sep 5 23:50:43.252034 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Sep 5 23:50:43.302294 kernel: raid6: neonx8 gen() 15635 MB/s Sep 5 23:50:43.319232 kernel: raid6: neonx4 gen() 15568 MB/s Sep 5 23:50:43.336248 kernel: raid6: neonx2 gen() 13120 MB/s Sep 5 23:50:43.353233 kernel: raid6: neonx1 gen() 10444 MB/s Sep 5 23:50:43.370277 kernel: raid6: int64x8 gen() 6924 MB/s Sep 5 23:50:43.387225 kernel: raid6: int64x4 gen() 7248 MB/s Sep 5 23:50:43.404240 kernel: raid6: int64x2 gen() 6092 MB/s Sep 5 23:50:43.421242 kernel: raid6: int64x1 gen() 5031 MB/s Sep 5 23:50:43.421297 kernel: raid6: using algorithm neonx8 gen() 15635 MB/s Sep 5 23:50:43.438230 kernel: raid6: .... xor() 11953 MB/s, rmw enabled Sep 5 23:50:43.438277 kernel: raid6: using neon recovery algorithm Sep 5 23:50:43.443217 kernel: xor: measuring software checksum speed Sep 5 23:50:43.443272 kernel: 8regs : 19793 MB/sec Sep 5 23:50:43.444433 kernel: 32regs : 17470 MB/sec Sep 5 23:50:43.444473 kernel: arm64_neon : 26874 MB/sec Sep 5 23:50:43.444496 kernel: xor: using function: arm64_neon (26874 MB/sec) Sep 5 23:50:43.497237 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 5 23:50:43.511532 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 5 23:50:43.518886 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 5 23:50:43.534531 systemd-udevd[455]: Using default interface naming scheme 'v255'. Sep 5 23:50:43.538020 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 5 23:50:43.544551 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 5 23:50:43.562615 dracut-pre-trigger[462]: rd.md=0: removing MD RAID activation Sep 5 23:50:43.599509 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 5 23:50:43.605418 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 5 23:50:43.662494 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 5 23:50:43.671425 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 5 23:50:43.692785 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 5 23:50:43.695826 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 5 23:50:43.698404 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 5 23:50:43.700529 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 5 23:50:43.706427 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 5 23:50:43.730093 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 5 23:50:43.775269 kernel: scsi host0: Virtio SCSI HBA Sep 5 23:50:43.785915 kernel: ACPI: bus type USB registered Sep 5 23:50:43.785988 kernel: usbcore: registered new interface driver usbfs Sep 5 23:50:43.787217 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5 Sep 5 23:50:43.788213 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Sep 5 23:50:43.796205 kernel: usbcore: registered new interface driver hub Sep 5 23:50:43.796366 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 5 23:50:43.796477 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Sep 5 23:50:43.798459 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 5 23:50:43.802924 kernel: usbcore: registered new device driver usb Sep 5 23:50:43.799072 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 5 23:50:43.799143 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 5 23:50:43.801921 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 5 23:50:43.811724 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 5 23:50:43.829492 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 5 23:50:43.839628 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 5 23:50:43.850618 kernel: sr 0:0:0:0: Power-on or device reset occurred Sep 5 23:50:43.857208 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Sep 5 23:50:43.857439 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Sep 5 23:50:43.857532 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray Sep 5 23:50:43.859232 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Sep 5 23:50:43.859294 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Sep 5 23:50:43.861618 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Sep 5 23:50:43.861810 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Sep 5 23:50:43.861903 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0 Sep 5 23:50:43.862841 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Sep 5 23:50:43.864806 kernel: hub 1-0:1.0: USB hub found Sep 5 23:50:43.865051 kernel: hub 1-0:1.0: 4 ports detected Sep 5 23:50:43.865166 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Sep 5 23:50:43.866567 kernel: hub 2-0:1.0: USB hub found Sep 5 23:50:43.866795 kernel: sd 0:0:0:1: Power-on or device reset occurred Sep 5 23:50:43.868243 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB) Sep 5 23:50:43.869105 kernel: sd 0:0:0:1: [sda] Write Protect is off Sep 5 23:50:43.869232 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08 Sep 5 23:50:43.869324 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Sep 5 23:50:43.871252 kernel: hub 2-0:1.0: 4 ports detected Sep 5 23:50:43.873384 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 5 23:50:43.873430 kernel: GPT:17805311 != 80003071 Sep 5 23:50:43.873529 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 5 23:50:43.879284 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 5 23:50:43.879308 kernel: GPT:17805311 != 80003071 Sep 5 23:50:43.879319 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 5 23:50:43.879338 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 5 23:50:43.879348 kernel: sd 0:0:0:1: [sda] Attached SCSI disk Sep 5 23:50:43.922212 kernel: BTRFS: device fsid 045c118e-b098-46f0-884a-43665575c70e devid 1 transid 37 /dev/sda3 scanned by (udev-worker) (517) Sep 5 23:50:43.929211 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (523) Sep 5 23:50:43.936259 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Sep 5 23:50:43.948791 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. 
Sep 5 23:50:43.956262 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Sep 5 23:50:43.963567 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Sep 5 23:50:43.964622 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Sep 5 23:50:43.971454 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 5 23:50:43.981093 disk-uuid[571]: Primary Header is updated. Sep 5 23:50:43.981093 disk-uuid[571]: Secondary Entries is updated. Sep 5 23:50:43.981093 disk-uuid[571]: Secondary Header is updated. Sep 5 23:50:43.996863 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 5 23:50:43.999223 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 5 23:50:44.103085 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Sep 5 23:50:44.236056 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1 Sep 5 23:50:44.236156 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Sep 5 23:50:44.236539 kernel: usbcore: registered new interface driver usbhid Sep 5 23:50:44.236569 kernel: usbhid: USB HID core driver Sep 5 23:50:44.344273 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd Sep 5 23:50:44.474231 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2 Sep 5 23:50:44.527292 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0 Sep 5 23:50:45.010019 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 5 23:50:45.010083 disk-uuid[572]: The operation has completed successfully. Sep 5 23:50:45.083982 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 5 23:50:45.084108 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 5 23:50:45.102496 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 5 23:50:45.109744 sh[587]: Success Sep 5 23:50:45.125448 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Sep 5 23:50:45.193790 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 5 23:50:45.208364 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 5 23:50:45.214382 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 5 23:50:45.237958 kernel: BTRFS info (device dm-0): first mount of filesystem 045c118e-b098-46f0-884a-43665575c70e Sep 5 23:50:45.238056 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Sep 5 23:50:45.238080 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Sep 5 23:50:45.238905 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 5 23:50:45.239585 kernel: BTRFS info (device dm-0): using free space tree Sep 5 23:50:45.247221 kernel: BTRFS info (device dm-0): enabling ssd optimizations Sep 5 23:50:45.250011 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 5 23:50:45.251720 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 5 23:50:45.257416 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
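
The GPT warnings above mean the backup (alternate) header sits at the LBA where a smaller source image ended (17805311) instead of at the last LBA of the 80003072-sector disk; disk-uuid.service then rewrites the primary and secondary headers, and the later partition rescans no longer complain. A small Python sketch of the arithmetic behind that check, assuming the backup header is expected at the disk's last addressable 512-byte sector:

    # Illustrative check mirroring the kernel's GPT complaint in the log above.
    disk_sectors = 80003072          # from "sd 0:0:0:1: [sda] 80003072 512-byte logical blocks"
    alt_header_lba = 17805311        # where the image's GPT places the backup header
    expected_lba = disk_sectors - 1  # last addressable sector of the grown disk

    if alt_header_lba != expected_lba:
        print(f"GPT: alternate header at LBA {alt_header_lba}, expected {expected_lba}; "
              f"the disk was enlarged after the image was built")
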
Sep 5 23:50:45.260491 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 5 23:50:45.280229 kernel: BTRFS info (device sda6): first mount of filesystem 7395d4d5-ecb1-4acb-b5a4-3e846eddb858 Sep 5 23:50:45.280295 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Sep 5 23:50:45.280307 kernel: BTRFS info (device sda6): using free space tree Sep 5 23:50:45.288434 kernel: BTRFS info (device sda6): enabling ssd optimizations Sep 5 23:50:45.288501 kernel: BTRFS info (device sda6): auto enabling async discard Sep 5 23:50:45.300964 systemd[1]: mnt-oem.mount: Deactivated successfully. Sep 5 23:50:45.302543 kernel: BTRFS info (device sda6): last unmount of filesystem 7395d4d5-ecb1-4acb-b5a4-3e846eddb858 Sep 5 23:50:45.315378 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 5 23:50:45.325227 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 5 23:50:45.391410 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 5 23:50:45.399536 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 5 23:50:45.433581 ignition[692]: Ignition 2.19.0 Sep 5 23:50:45.433597 ignition[692]: Stage: fetch-offline Sep 5 23:50:45.433634 ignition[692]: no configs at "/usr/lib/ignition/base.d" Sep 5 23:50:45.435928 systemd-networkd[769]: lo: Link UP Sep 5 23:50:45.433642 ignition[692]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Sep 5 23:50:45.435932 systemd-networkd[769]: lo: Gained carrier Sep 5 23:50:45.433809 ignition[692]: parsed url from cmdline: "" Sep 5 23:50:45.436265 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 5 23:50:45.433812 ignition[692]: no config URL provided Sep 5 23:50:45.438034 systemd-networkd[769]: Enumeration completed Sep 5 23:50:45.433818 ignition[692]: reading system config file "/usr/lib/ignition/user.ign" Sep 5 23:50:45.439104 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 5 23:50:45.433824 ignition[692]: no config at "/usr/lib/ignition/user.ign" Sep 5 23:50:45.439808 systemd-networkd[769]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 5 23:50:45.433829 ignition[692]: failed to fetch config: resource requires networking Sep 5 23:50:45.439811 systemd-networkd[769]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 5 23:50:45.434073 ignition[692]: Ignition finished successfully Sep 5 23:50:45.440463 systemd[1]: Reached target network.target - Network. Sep 5 23:50:45.441325 systemd-networkd[769]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 5 23:50:45.441329 systemd-networkd[769]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 5 23:50:45.441909 systemd-networkd[769]: eth0: Link UP Sep 5 23:50:45.441913 systemd-networkd[769]: eth0: Gained carrier Sep 5 23:50:45.441921 systemd-networkd[769]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 5 23:50:45.446612 systemd-networkd[769]: eth1: Link UP Sep 5 23:50:45.446616 systemd-networkd[769]: eth1: Gained carrier Sep 5 23:50:45.446630 systemd-networkd[769]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Sep 5 23:50:45.449435 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Sep 5 23:50:45.466693 ignition[779]: Ignition 2.19.0 Sep 5 23:50:45.466703 ignition[779]: Stage: fetch Sep 5 23:50:45.466898 ignition[779]: no configs at "/usr/lib/ignition/base.d" Sep 5 23:50:45.466908 ignition[779]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Sep 5 23:50:45.467017 ignition[779]: parsed url from cmdline: "" Sep 5 23:50:45.467020 ignition[779]: no config URL provided Sep 5 23:50:45.467025 ignition[779]: reading system config file "/usr/lib/ignition/user.ign" Sep 5 23:50:45.467033 ignition[779]: no config at "/usr/lib/ignition/user.ign" Sep 5 23:50:45.467055 ignition[779]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1 Sep 5 23:50:45.467762 ignition[779]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable Sep 5 23:50:45.481312 systemd-networkd[769]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1 Sep 5 23:50:45.494298 systemd-networkd[769]: eth0: DHCPv4 address 91.99.146.49/32, gateway 172.31.1.1 acquired from 172.31.1.1 Sep 5 23:50:45.668279 ignition[779]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2 Sep 5 23:50:45.673251 ignition[779]: GET result: OK Sep 5 23:50:45.673361 ignition[779]: parsing config with SHA512: b1fcf85ec94a64ac8dd1928d26fc061c851d67dc4bad732b0d7e3335a4a92207c83ab69d22200e0b1a5f7a79ea86738822adc71d601123fc8aa282439d1a21cf Sep 5 23:50:45.677953 unknown[779]: fetched base config from "system" Sep 5 23:50:45.677964 unknown[779]: fetched base config from "system" Sep 5 23:50:45.678539 ignition[779]: fetch: fetch complete Sep 5 23:50:45.677969 unknown[779]: fetched user config from "hetzner" Sep 5 23:50:45.678544 ignition[779]: fetch: fetch passed Sep 5 23:50:45.678599 ignition[779]: Ignition finished successfully Sep 5 23:50:45.685313 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Sep 5 23:50:45.695595 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 5 23:50:45.710222 ignition[786]: Ignition 2.19.0 Sep 5 23:50:45.710232 ignition[786]: Stage: kargs Sep 5 23:50:45.710477 ignition[786]: no configs at "/usr/lib/ignition/base.d" Sep 5 23:50:45.710487 ignition[786]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Sep 5 23:50:45.712956 ignition[786]: kargs: kargs passed Sep 5 23:50:45.713119 ignition[786]: Ignition finished successfully Sep 5 23:50:45.717268 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 5 23:50:45.732512 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Sep 5 23:50:45.747029 ignition[792]: Ignition 2.19.0 Sep 5 23:50:45.747041 ignition[792]: Stage: disks Sep 5 23:50:45.748023 ignition[792]: no configs at "/usr/lib/ignition/base.d" Sep 5 23:50:45.748040 ignition[792]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Sep 5 23:50:45.751718 ignition[792]: disks: disks passed Sep 5 23:50:45.751814 ignition[792]: Ignition finished successfully Sep 5 23:50:45.754337 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 5 23:50:45.755259 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 5 23:50:45.756135 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 5 23:50:45.757405 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 5 23:50:45.757923 systemd[1]: Reached target sysinit.target - System Initialization. 
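
The fetch stage above fails while the network is still unconfigured, retries after DHCP brings up eth0/eth1, and then logs a SHA512 of the retrieved config. A hypothetical Python sketch of that fetch-retry-hash pattern, not Ignition's actual implementation; only the endpoint URL is taken from the log, while the attempt count, delay, and function name are made up:

    # Hypothetical sketch of "GET userdata, retry until the network is up, hash the payload".
    import hashlib
    import time
    import urllib.error
    import urllib.request

    URL = "http://169.254.169.254/hetzner/v1/userdata"  # endpoint shown in the log

    def fetch_userdata(attempts=5, delay=2.0):
        for attempt in range(1, attempts + 1):
            try:
                with urllib.request.urlopen(URL, timeout=10) as resp:
                    data = resp.read()
                print(f"GET result: OK (attempt #{attempt})")
                print("parsing config with SHA512:", hashlib.sha512(data).hexdigest())
                return data
            except (urllib.error.URLError, OSError) as err:
                print(f"GET error on attempt #{attempt}: {err}")
                time.sleep(delay)  # e.g. wait for DHCP to finish configuring the NIC
        raise RuntimeError("userdata not reachable")

    # fetch_userdata()  # only succeeds on a host that can reach this metadata service
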
Sep 5 23:50:45.759316 systemd[1]: Reached target basic.target - Basic System. Sep 5 23:50:45.766793 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 5 23:50:45.786958 systemd-fsck[801]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Sep 5 23:50:45.791520 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 5 23:50:45.798409 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 5 23:50:45.845219 kernel: EXT4-fs (sda9): mounted filesystem 72e55cb0-8368-4871-a3a0-8637412e72e8 r/w with ordered data mode. Quota mode: none. Sep 5 23:50:45.846181 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 5 23:50:45.847478 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 5 23:50:45.855335 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 5 23:50:45.859637 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 5 23:50:45.861356 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Sep 5 23:50:45.864559 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 5 23:50:45.864605 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 5 23:50:45.876394 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by mount (809) Sep 5 23:50:45.878777 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 5 23:50:45.884011 kernel: BTRFS info (device sda6): first mount of filesystem 7395d4d5-ecb1-4acb-b5a4-3e846eddb858 Sep 5 23:50:45.884036 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Sep 5 23:50:45.884048 kernel: BTRFS info (device sda6): using free space tree Sep 5 23:50:45.893407 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 5 23:50:45.899591 kernel: BTRFS info (device sda6): enabling ssd optimizations Sep 5 23:50:45.899657 kernel: BTRFS info (device sda6): auto enabling async discard Sep 5 23:50:45.906958 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 5 23:50:45.946113 initrd-setup-root[836]: cut: /sysroot/etc/passwd: No such file or directory Sep 5 23:50:45.950910 coreos-metadata[811]: Sep 05 23:50:45.950 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1 Sep 5 23:50:45.952934 coreos-metadata[811]: Sep 05 23:50:45.952 INFO Fetch successful Sep 5 23:50:45.952934 coreos-metadata[811]: Sep 05 23:50:45.952 INFO wrote hostname ci-4081-3-5-n-c970465010 to /sysroot/etc/hostname Sep 5 23:50:45.956147 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Sep 5 23:50:45.959030 initrd-setup-root[843]: cut: /sysroot/etc/group: No such file or directory Sep 5 23:50:45.965564 initrd-setup-root[851]: cut: /sysroot/etc/shadow: No such file or directory Sep 5 23:50:45.971028 initrd-setup-root[858]: cut: /sysroot/etc/gshadow: No such file or directory Sep 5 23:50:46.085879 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 5 23:50:46.092688 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 5 23:50:46.094541 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 5 23:50:46.103231 kernel: BTRFS info (device sda6): last unmount of filesystem 7395d4d5-ecb1-4acb-b5a4-3e846eddb858 Sep 5 23:50:46.139469 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
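
flatcar-metadata-hostname.service above fetches the hostname from the Hetzner metadata endpoint and writes it to /sysroot/etc/hostname before the root switch. A hedged Python sketch of those two steps, where only the URL and destination path come from the log and the helper name is hypothetical:

    # Hypothetical sketch of the hostname step logged by coreos-metadata above.
    import urllib.request

    METADATA_URL = "http://169.254.169.254/hetzner/v1/metadata/hostname"  # from the log

    def write_hostname(sysroot="/sysroot"):
        with urllib.request.urlopen(METADATA_URL, timeout=10) as resp:
            hostname = resp.read().decode().strip()
        # Write into the not-yet-switched-to root so the real system boots with the name.
        with open(f"{sysroot}/etc/hostname", "w") as f:
            f.write(hostname + "\n")
        print(f"wrote hostname {hostname} to {sysroot}/etc/hostname")

    # write_hostname()  # requires the Hetzner metadata service and a mounted /sysroot
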
Sep 5 23:50:46.140540 ignition[928]: INFO : Ignition 2.19.0 Sep 5 23:50:46.142374 ignition[928]: INFO : Stage: mount Sep 5 23:50:46.142374 ignition[928]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 5 23:50:46.142374 ignition[928]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Sep 5 23:50:46.146558 ignition[928]: INFO : mount: mount passed Sep 5 23:50:46.146558 ignition[928]: INFO : Ignition finished successfully Sep 5 23:50:46.145890 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 5 23:50:46.152352 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 5 23:50:46.236570 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 5 23:50:46.244608 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 5 23:50:46.256224 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (939) Sep 5 23:50:46.258518 kernel: BTRFS info (device sda6): first mount of filesystem 7395d4d5-ecb1-4acb-b5a4-3e846eddb858 Sep 5 23:50:46.258586 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Sep 5 23:50:46.258610 kernel: BTRFS info (device sda6): using free space tree Sep 5 23:50:46.262471 kernel: BTRFS info (device sda6): enabling ssd optimizations Sep 5 23:50:46.262542 kernel: BTRFS info (device sda6): auto enabling async discard Sep 5 23:50:46.266432 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 5 23:50:46.292459 ignition[956]: INFO : Ignition 2.19.0 Sep 5 23:50:46.292459 ignition[956]: INFO : Stage: files Sep 5 23:50:46.294118 ignition[956]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 5 23:50:46.294118 ignition[956]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Sep 5 23:50:46.294118 ignition[956]: DEBUG : files: compiled without relabeling support, skipping Sep 5 23:50:46.298482 ignition[956]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 5 23:50:46.298482 ignition[956]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 5 23:50:46.300777 ignition[956]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 5 23:50:46.301831 ignition[956]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 5 23:50:46.303368 unknown[956]: wrote ssh authorized keys file for user: core Sep 5 23:50:46.304804 ignition[956]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 5 23:50:46.306010 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Sep 5 23:50:46.307405 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Sep 5 23:50:46.412413 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 5 23:50:46.547743 systemd-networkd[769]: eth0: Gained IPv6LL Sep 5 23:50:46.548218 systemd-networkd[769]: eth1: Gained IPv6LL Sep 5 23:50:46.783568 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Sep 5 23:50:46.783568 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 5 23:50:46.783568 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET 
https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Sep 5 23:50:46.966973 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Sep 5 23:50:47.040548 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 5 23:50:47.042616 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Sep 5 23:50:47.042616 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Sep 5 23:50:47.042616 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 5 23:50:47.042616 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 5 23:50:47.042616 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 5 23:50:47.042616 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 5 23:50:47.042616 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 5 23:50:47.042616 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 5 23:50:47.042616 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 5 23:50:47.042616 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 5 23:50:47.042616 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Sep 5 23:50:47.042616 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Sep 5 23:50:47.042616 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Sep 5 23:50:47.042616 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1 Sep 5 23:50:47.288428 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Sep 5 23:50:47.476030 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Sep 5 23:50:47.476030 ignition[956]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Sep 5 23:50:47.479816 ignition[956]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 5 23:50:47.479816 ignition[956]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 5 23:50:47.479816 ignition[956]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Sep 5 23:50:47.479816 
ignition[956]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Sep 5 23:50:47.479816 ignition[956]: INFO : files: op(e): op(f): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Sep 5 23:50:47.479816 ignition[956]: INFO : files: op(e): op(f): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Sep 5 23:50:47.479816 ignition[956]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Sep 5 23:50:47.479816 ignition[956]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Sep 5 23:50:47.479816 ignition[956]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Sep 5 23:50:47.479816 ignition[956]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 5 23:50:47.479816 ignition[956]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 5 23:50:47.479816 ignition[956]: INFO : files: files passed Sep 5 23:50:47.479816 ignition[956]: INFO : Ignition finished successfully Sep 5 23:50:47.482229 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 5 23:50:47.495307 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 5 23:50:47.498391 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 5 23:50:47.500552 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 5 23:50:47.500698 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 5 23:50:47.522143 initrd-setup-root-after-ignition[986]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 5 23:50:47.522143 initrd-setup-root-after-ignition[986]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 5 23:50:47.525301 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 5 23:50:47.527062 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 5 23:50:47.529404 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 5 23:50:47.534406 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 5 23:50:47.576951 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 5 23:50:47.577159 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 5 23:50:47.579644 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 5 23:50:47.580512 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 5 23:50:47.581561 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 5 23:50:47.585419 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 5 23:50:47.601453 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 5 23:50:47.611560 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 5 23:50:47.625558 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 5 23:50:47.627231 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. 
Sep 5 23:50:47.628866 systemd[1]: Stopped target timers.target - Timer Units. Sep 5 23:50:47.630557 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 5 23:50:47.630917 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 5 23:50:47.633219 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 5 23:50:47.633963 systemd[1]: Stopped target basic.target - Basic System. Sep 5 23:50:47.634918 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 5 23:50:47.635992 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 5 23:50:47.637120 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 5 23:50:47.638211 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 5 23:50:47.639325 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 5 23:50:47.640543 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 5 23:50:47.641662 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 5 23:50:47.642680 systemd[1]: Stopped target swap.target - Swaps. Sep 5 23:50:47.643540 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 5 23:50:47.643714 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 5 23:50:47.644968 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 5 23:50:47.646131 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 5 23:50:47.647255 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 5 23:50:47.647809 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 5 23:50:47.648722 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 5 23:50:47.648896 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 5 23:50:47.650516 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 5 23:50:47.650703 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 5 23:50:47.651689 systemd[1]: ignition-files.service: Deactivated successfully. Sep 5 23:50:47.651847 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 5 23:50:47.652725 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Sep 5 23:50:47.652878 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Sep 5 23:50:47.659467 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 5 23:50:47.663433 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 5 23:50:47.663944 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 5 23:50:47.664083 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 5 23:50:47.665112 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 5 23:50:47.667404 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 5 23:50:47.677352 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 5 23:50:47.677463 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 5 23:50:47.685899 systemd[1]: sysroot-boot.mount: Deactivated successfully. 
Sep 5 23:50:47.687135 ignition[1010]: INFO : Ignition 2.19.0 Sep 5 23:50:47.687135 ignition[1010]: INFO : Stage: umount Sep 5 23:50:47.687135 ignition[1010]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 5 23:50:47.687135 ignition[1010]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Sep 5 23:50:47.690292 ignition[1010]: INFO : umount: umount passed Sep 5 23:50:47.690292 ignition[1010]: INFO : Ignition finished successfully Sep 5 23:50:47.692963 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 5 23:50:47.693103 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 5 23:50:47.696593 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 5 23:50:47.696653 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 5 23:50:47.699042 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 5 23:50:47.699134 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 5 23:50:47.700755 systemd[1]: ignition-fetch.service: Deactivated successfully. Sep 5 23:50:47.700820 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Sep 5 23:50:47.702763 systemd[1]: Stopped target network.target - Network. Sep 5 23:50:47.707311 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 5 23:50:47.707402 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 5 23:50:47.709581 systemd[1]: Stopped target paths.target - Path Units. Sep 5 23:50:47.713596 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 5 23:50:47.717272 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 5 23:50:47.718290 systemd[1]: Stopped target slices.target - Slice Units. Sep 5 23:50:47.718800 systemd[1]: Stopped target sockets.target - Socket Units. Sep 5 23:50:47.720019 systemd[1]: iscsid.socket: Deactivated successfully. Sep 5 23:50:47.720066 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 5 23:50:47.723050 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 5 23:50:47.723098 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 5 23:50:47.724104 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 5 23:50:47.724170 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 5 23:50:47.725575 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 5 23:50:47.725627 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 5 23:50:47.727022 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 5 23:50:47.728230 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 5 23:50:47.732255 systemd-networkd[769]: eth0: DHCPv6 lease lost Sep 5 23:50:47.735330 systemd-networkd[769]: eth1: DHCPv6 lease lost Sep 5 23:50:47.736527 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 5 23:50:47.736623 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 5 23:50:47.739673 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 5 23:50:47.740075 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 5 23:50:47.742499 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 5 23:50:47.742550 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 5 23:50:47.744242 systemd[1]: initrd-setup-root.service: Deactivated successfully. 
Sep 5 23:50:47.744304 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 5 23:50:47.755928 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 5 23:50:47.757448 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 5 23:50:47.757576 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 5 23:50:47.758914 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 5 23:50:47.760168 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 5 23:50:47.761527 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 5 23:50:47.772611 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 5 23:50:47.772746 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 5 23:50:47.775645 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 5 23:50:47.775726 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 5 23:50:47.778029 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 5 23:50:47.778100 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 5 23:50:47.781812 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 5 23:50:47.782018 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 5 23:50:47.784780 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 5 23:50:47.785545 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 5 23:50:47.786801 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 5 23:50:47.786868 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 5 23:50:47.787883 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 5 23:50:47.787920 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 5 23:50:47.789259 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 5 23:50:47.789311 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 5 23:50:47.790931 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 5 23:50:47.790992 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 5 23:50:47.792655 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 5 23:50:47.792703 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 5 23:50:47.799395 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 5 23:50:47.800001 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 5 23:50:47.800072 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 5 23:50:47.801929 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Sep 5 23:50:47.801994 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 5 23:50:47.803531 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 5 23:50:47.803581 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 5 23:50:47.804319 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 5 23:50:47.804361 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Sep 5 23:50:47.811167 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 5 23:50:47.811413 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 5 23:50:47.812859 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 5 23:50:47.819426 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 5 23:50:47.830660 systemd[1]: Switching root. Sep 5 23:50:47.871082 systemd-journald[237]: Journal stopped Sep 5 23:50:48.810795 systemd-journald[237]: Received SIGTERM from PID 1 (systemd). Sep 5 23:50:48.810878 kernel: SELinux: policy capability network_peer_controls=1 Sep 5 23:50:48.810891 kernel: SELinux: policy capability open_perms=1 Sep 5 23:50:48.810901 kernel: SELinux: policy capability extended_socket_class=1 Sep 5 23:50:48.810911 kernel: SELinux: policy capability always_check_network=0 Sep 5 23:50:48.810924 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 5 23:50:48.810934 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 5 23:50:48.810943 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 5 23:50:48.810957 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 5 23:50:48.810984 kernel: audit: type=1403 audit(1757116248.003:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 5 23:50:48.811001 systemd[1]: Successfully loaded SELinux policy in 42.182ms. Sep 5 23:50:48.811022 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.662ms. Sep 5 23:50:48.811034 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 5 23:50:48.811047 systemd[1]: Detected virtualization kvm. Sep 5 23:50:48.811058 systemd[1]: Detected architecture arm64. Sep 5 23:50:48.811069 systemd[1]: Detected first boot. Sep 5 23:50:48.811083 systemd[1]: Hostname set to <ci-4081-3-5-n-c970465010>. Sep 5 23:50:48.811093 systemd[1]: Initializing machine ID from VM UUID. Sep 5 23:50:48.811104 zram_generator::config[1053]: No configuration found. Sep 5 23:50:48.811124 systemd[1]: Populated /etc with preset unit settings. Sep 5 23:50:48.811134 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 5 23:50:48.811147 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 5 23:50:48.811158 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 5 23:50:48.811169 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 5 23:50:48.811180 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 5 23:50:48.814249 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 5 23:50:48.814277 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 5 23:50:48.814289 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 5 23:50:48.814300 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 5 23:50:48.814311 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 5 23:50:48.814335 systemd[1]: Created slice user.slice - User and Session Slice. Sep 5 23:50:48.814346 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 5 23:50:48.814358 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 5 23:50:48.814368 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 5 23:50:48.814378 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 5 23:50:48.814389 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 5 23:50:48.814400 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 5 23:50:48.814411 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Sep 5 23:50:48.814423 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 5 23:50:48.814434 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 5 23:50:48.814444 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 5 23:50:48.814455 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 5 23:50:48.814466 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 5 23:50:48.814479 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 5 23:50:48.814490 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 5 23:50:48.814503 systemd[1]: Reached target slices.target - Slice Units. Sep 5 23:50:48.814514 systemd[1]: Reached target swap.target - Swaps. Sep 5 23:50:48.814524 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 5 23:50:48.814534 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 5 23:50:48.814545 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 5 23:50:48.814555 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 5 23:50:48.814566 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 5 23:50:48.814576 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 5 23:50:48.814587 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 5 23:50:48.814598 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 5 23:50:48.814609 systemd[1]: Mounting media.mount - External Media Directory... Sep 5 23:50:48.814619 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 5 23:50:48.814649 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 5 23:50:48.814661 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 5 23:50:48.814672 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 5 23:50:48.814686 systemd[1]: Reached target machines.target - Containers. Sep 5 23:50:48.814700 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 5 23:50:48.814711 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 5 23:50:48.814721 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 5 23:50:48.814733 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 5 23:50:48.814743 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
Sep 5 23:50:48.814754 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 5 23:50:48.814765 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 5 23:50:48.814777 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 5 23:50:48.814787 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 5 23:50:48.814798 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 5 23:50:48.814809 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 5 23:50:48.814819 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 5 23:50:48.814829 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 5 23:50:48.814840 systemd[1]: Stopped systemd-fsck-usr.service. Sep 5 23:50:48.814851 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 5 23:50:48.814861 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 5 23:50:48.814874 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 5 23:50:48.814885 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 5 23:50:48.814895 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 5 23:50:48.814906 systemd[1]: verity-setup.service: Deactivated successfully. Sep 5 23:50:48.814916 systemd[1]: Stopped verity-setup.service. Sep 5 23:50:48.814926 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 5 23:50:48.814941 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 5 23:50:48.814952 systemd[1]: Mounted media.mount - External Media Directory. Sep 5 23:50:48.815004 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 5 23:50:48.815017 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 5 23:50:48.815027 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 5 23:50:48.815038 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 5 23:50:48.815048 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 5 23:50:48.815061 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 5 23:50:48.815072 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 5 23:50:48.815083 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 5 23:50:48.815094 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 5 23:50:48.815104 kernel: fuse: init (API version 7.39) Sep 5 23:50:48.815117 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 5 23:50:48.815130 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 5 23:50:48.815141 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 5 23:50:48.815152 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 5 23:50:48.815163 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 5 23:50:48.815173 systemd[1]: Reached target local-fs.target - Local File Systems. 
Sep 5 23:50:48.815183 kernel: ACPI: bus type drm_connector registered Sep 5 23:50:48.815217 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Sep 5 23:50:48.815228 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 5 23:50:48.815241 kernel: loop: module loaded Sep 5 23:50:48.815251 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 5 23:50:48.815262 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 5 23:50:48.815307 systemd-journald[1123]: Collecting audit messages is disabled. Sep 5 23:50:48.815330 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 5 23:50:48.815341 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 5 23:50:48.815352 systemd-journald[1123]: Journal started Sep 5 23:50:48.815377 systemd-journald[1123]: Runtime Journal (/run/log/journal/c7bc1e56bd654bf8857284669a22eeac) is 8.0M, max 76.6M, 68.6M free. Sep 5 23:50:48.511391 systemd[1]: Queued start job for default target multi-user.target. Sep 5 23:50:48.539368 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Sep 5 23:50:48.539783 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 5 23:50:48.820238 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 5 23:50:48.827313 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 5 23:50:48.827406 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 5 23:50:48.831216 systemd[1]: Started systemd-journald.service - Journal Service. Sep 5 23:50:48.832358 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 5 23:50:48.832538 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 5 23:50:48.833468 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 5 23:50:48.833625 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 5 23:50:48.835160 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 5 23:50:48.835331 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 5 23:50:48.838239 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 5 23:50:48.839268 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 5 23:50:48.840535 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 5 23:50:48.843471 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 5 23:50:48.877974 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 5 23:50:48.881199 kernel: loop0: detected capacity change from 0 to 114432 Sep 5 23:50:48.887802 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 5 23:50:48.889941 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 5 23:50:48.901152 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 5 23:50:48.909465 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 5 23:50:48.913862 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... 
Sep 5 23:50:48.914716 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 5 23:50:48.926397 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 5 23:50:48.930622 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 5 23:50:48.944804 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 5 23:50:48.954725 systemd-journald[1123]: Time spent on flushing to /var/log/journal/c7bc1e56bd654bf8857284669a22eeac is 75.147ms for 1134 entries. Sep 5 23:50:48.954725 systemd-journald[1123]: System Journal (/var/log/journal/c7bc1e56bd654bf8857284669a22eeac) is 8.0M, max 584.8M, 576.8M free. Sep 5 23:50:49.053448 systemd-journald[1123]: Received client request to flush runtime journal. Sep 5 23:50:49.053503 kernel: loop1: detected capacity change from 0 to 114328 Sep 5 23:50:49.053517 kernel: loop2: detected capacity change from 0 to 8 Sep 5 23:50:48.967454 systemd-tmpfiles[1148]: ACLs are not supported, ignoring. Sep 5 23:50:49.062240 kernel: loop3: detected capacity change from 0 to 203944 Sep 5 23:50:48.967465 systemd-tmpfiles[1148]: ACLs are not supported, ignoring. Sep 5 23:50:48.981399 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 5 23:50:48.990657 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Sep 5 23:50:48.994787 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 5 23:50:49.008520 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 5 23:50:49.014312 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 5 23:50:49.024436 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Sep 5 23:50:49.029320 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 5 23:50:49.048873 udevadm[1185]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Sep 5 23:50:49.056417 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 5 23:50:49.075420 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 5 23:50:49.085044 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 5 23:50:49.101484 systemd-tmpfiles[1193]: ACLs are not supported, ignoring. Sep 5 23:50:49.101810 systemd-tmpfiles[1193]: ACLs are not supported, ignoring. Sep 5 23:50:49.108446 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 5 23:50:49.111209 kernel: loop4: detected capacity change from 0 to 114432 Sep 5 23:50:49.123232 kernel: loop5: detected capacity change from 0 to 114328 Sep 5 23:50:49.141377 kernel: loop6: detected capacity change from 0 to 8 Sep 5 23:50:49.144425 kernel: loop7: detected capacity change from 0 to 203944 Sep 5 23:50:49.167710 (sd-merge)[1195]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'. Sep 5 23:50:49.169863 (sd-merge)[1195]: Merged extensions into '/usr'. Sep 5 23:50:49.178870 systemd[1]: Reloading requested from client PID 1147 ('systemd-sysext') (unit systemd-sysext.service)... Sep 5 23:50:49.178888 systemd[1]: Reloading... Sep 5 23:50:49.288556 zram_generator::config[1222]: No configuration found. 
Sep 5 23:50:49.431809 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 5 23:50:49.479552 systemd[1]: Reloading finished in 299 ms. Sep 5 23:50:49.508239 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 5 23:50:49.517342 systemd[1]: Starting ensure-sysext.service... Sep 5 23:50:49.520610 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 5 23:50:49.538353 systemd[1]: Reloading requested from client PID 1258 ('systemctl') (unit ensure-sysext.service)... Sep 5 23:50:49.541345 ldconfig[1144]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 5 23:50:49.538371 systemd[1]: Reloading... Sep 5 23:50:49.557067 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 5 23:50:49.558262 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 5 23:50:49.559015 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 5 23:50:49.559612 systemd-tmpfiles[1259]: ACLs are not supported, ignoring. Sep 5 23:50:49.559721 systemd-tmpfiles[1259]: ACLs are not supported, ignoring. Sep 5 23:50:49.562701 systemd-tmpfiles[1259]: Detected autofs mount point /boot during canonicalization of boot. Sep 5 23:50:49.563009 systemd-tmpfiles[1259]: Skipping /boot Sep 5 23:50:49.573125 systemd-tmpfiles[1259]: Detected autofs mount point /boot during canonicalization of boot. Sep 5 23:50:49.573292 systemd-tmpfiles[1259]: Skipping /boot Sep 5 23:50:49.626218 zram_generator::config[1287]: No configuration found. Sep 5 23:50:49.735567 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 5 23:50:49.782915 systemd[1]: Reloading finished in 244 ms. Sep 5 23:50:49.799946 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 5 23:50:49.802831 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 5 23:50:49.810298 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 5 23:50:49.826607 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 5 23:50:49.831548 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 5 23:50:49.844600 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 5 23:50:49.848829 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 5 23:50:49.851918 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 5 23:50:49.854242 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 5 23:50:49.857824 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 5 23:50:49.865526 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 5 23:50:49.869137 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Sep 5 23:50:49.873365 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 5 23:50:49.874521 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 5 23:50:49.879076 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 5 23:50:49.879276 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 5 23:50:49.885369 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 5 23:50:49.891680 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 5 23:50:49.894247 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 5 23:50:49.897542 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 5 23:50:49.901249 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 5 23:50:49.902301 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 5 23:50:49.906475 systemd[1]: Finished ensure-sysext.service. Sep 5 23:50:49.918547 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 5 23:50:49.922525 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 5 23:50:49.923938 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 5 23:50:49.924092 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 5 23:50:49.926503 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 5 23:50:49.932575 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 5 23:50:49.942836 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 5 23:50:49.944268 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 5 23:50:49.945916 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 5 23:50:49.945996 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 5 23:50:49.952169 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 5 23:50:49.956871 systemd-udevd[1338]: Using default interface naming scheme 'v255'. Sep 5 23:50:49.962335 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 5 23:50:49.988292 augenrules[1362]: No rules Sep 5 23:50:49.990521 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 5 23:50:49.993111 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 5 23:50:49.999307 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 5 23:50:50.004096 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 5 23:50:50.012618 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 5 23:50:50.025784 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 5 23:50:50.029518 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
Sep 5 23:50:50.119683 systemd-networkd[1377]: lo: Link UP Sep 5 23:50:50.120928 systemd-networkd[1377]: lo: Gained carrier Sep 5 23:50:50.121870 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 5 23:50:50.122437 systemd-networkd[1377]: Enumeration completed Sep 5 23:50:50.122775 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 5 23:50:50.124021 systemd[1]: Reached target time-set.target - System Time Set. Sep 5 23:50:50.131659 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 5 23:50:50.133222 systemd-resolved[1337]: Positive Trust Anchors: Sep 5 23:50:50.133244 systemd-resolved[1337]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 5 23:50:50.133280 systemd-resolved[1337]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 5 23:50:50.138987 systemd-resolved[1337]: Using system hostname 'ci-4081-3-5-n-c970465010'. Sep 5 23:50:50.143419 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 5 23:50:50.144415 systemd[1]: Reached target network.target - Network. Sep 5 23:50:50.144911 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 5 23:50:50.168022 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Sep 5 23:50:50.253513 systemd-networkd[1377]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 5 23:50:50.253752 systemd-networkd[1377]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 5 23:50:50.255579 systemd-networkd[1377]: eth0: Link UP Sep 5 23:50:50.255682 systemd-networkd[1377]: eth0: Gained carrier Sep 5 23:50:50.255743 systemd-networkd[1377]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 5 23:50:50.272274 kernel: mousedev: PS/2 mouse device common for all mice Sep 5 23:50:50.303165 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. Sep 5 23:50:50.304445 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 5 23:50:50.329395 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1391) Sep 5 23:50:50.329581 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 5 23:50:50.335638 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0 Sep 5 23:50:50.335709 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Sep 5 23:50:50.335723 kernel: [drm] features: -context_init Sep 5 23:50:50.335414 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 5 23:50:50.342706 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
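Once systemd-networkd and systemd-resolved have enumerated links as above, their state can be inspected with the usual CLI front-ends; a quick check looks like:

  networkctl status eth0    # link state, addresses, and which .network file configured it
  resolvectl status         # per-link DNS servers and global resolver settings
  hostnamectl               # should report the hostname resolved picked up above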
Sep 5 23:50:50.343430 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 5 23:50:50.343474 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 5 23:50:50.348571 systemd-networkd[1377]: eth0: DHCPv4 address 91.99.146.49/32, gateway 172.31.1.1 acquired from 172.31.1.1 Sep 5 23:50:50.348806 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 5 23:50:50.349008 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 5 23:50:50.351733 systemd-timesyncd[1349]: Network configuration changed, trying to establish connection. Sep 5 23:50:50.352101 systemd-networkd[1377]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 5 23:50:50.352108 systemd-networkd[1377]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 5 23:50:50.354395 systemd-networkd[1377]: eth1: Link UP Sep 5 23:50:50.354400 systemd-networkd[1377]: eth1: Gained carrier Sep 5 23:50:50.354422 systemd-networkd[1377]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 5 23:50:50.356060 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 5 23:50:50.359988 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 5 23:50:50.364637 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 5 23:50:50.366421 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 5 23:50:50.379263 kernel: [drm] number of scanouts: 1 Sep 5 23:50:50.379335 kernel: [drm] number of cap sets: 0 Sep 5 23:50:50.382335 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 5 23:50:50.382447 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 5 23:50:50.397224 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0 Sep 5 23:50:50.405744 systemd-networkd[1377]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1 Sep 5 23:50:50.416213 kernel: Console: switching to colour frame buffer device 160x50 Sep 5 23:50:50.420668 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 5 23:50:50.427703 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Sep 5 23:50:50.443166 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Sep 5 23:50:50.450563 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 5 23:50:50.459047 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 5 23:50:50.459534 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 5 23:50:50.463532 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 5 23:50:50.472759 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 5 23:50:50.492663 systemd-timesyncd[1349]: Contacted time server 78.47.168.188:123 (0.flatcar.pool.ntp.org). 
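The "potentially unpredictable interface name" note means eth0/eth1 were matched by the catch-all zz-default.network shipped with the OS. A site-specific .network file that matches on MAC address avoids relying on the name; a minimal sketch (the file name and MAC below are hypothetical):

  cat <<'EOF' | sudo tee /etc/systemd/network/10-uplink.network
  [Match]
  # Hypothetical address; substitute the NIC's real MAC
  MACAddress=52:54:00:12:34:56

  [Network]
  DHCP=ipv4
  EOF
  sudo systemctl restart systemd-networkd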
Sep 5 23:50:50.492736 systemd-timesyncd[1349]: Initial clock synchronization to Fri 2025-09-05 23:50:50.299611 UTC. Sep 5 23:50:50.538288 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 5 23:50:50.586098 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Sep 5 23:50:50.593572 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Sep 5 23:50:50.609378 lvm[1442]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 5 23:50:50.640864 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Sep 5 23:50:50.643385 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 5 23:50:50.644490 systemd[1]: Reached target sysinit.target - System Initialization. Sep 5 23:50:50.645409 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 5 23:50:50.646161 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 5 23:50:50.647149 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 5 23:50:50.647965 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 5 23:50:50.648743 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 5 23:50:50.649492 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 5 23:50:50.649536 systemd[1]: Reached target paths.target - Path Units. Sep 5 23:50:50.650066 systemd[1]: Reached target timers.target - Timer Units. Sep 5 23:50:50.651882 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 5 23:50:50.654587 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 5 23:50:50.661926 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 5 23:50:50.666564 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Sep 5 23:50:50.667869 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 5 23:50:50.668651 systemd[1]: Reached target sockets.target - Socket Units. Sep 5 23:50:50.669235 systemd[1]: Reached target basic.target - Basic System. Sep 5 23:50:50.669813 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 5 23:50:50.669849 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 5 23:50:50.674420 systemd[1]: Starting containerd.service - containerd container runtime... Sep 5 23:50:50.679498 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Sep 5 23:50:50.680092 lvm[1446]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 5 23:50:50.687489 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 5 23:50:50.692825 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 5 23:50:50.696444 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 5 23:50:50.697120 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 5 23:50:50.698459 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... 
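systemd-timesyncd picked its server from the default *.flatcar.pool.ntp.org pool, as logged above; the sync state can be checked, and the server overridden, roughly like this (the NTP host in the drop-in is a placeholder):

  timedatectl timesync-status              # current server, stratum, poll interval
  sudo mkdir -p /etc/systemd/timesyncd.conf.d
  cat <<'EOF' | sudo tee /etc/systemd/timesyncd.conf.d/ntp.conf
  [Time]
  # Placeholder server name
  NTP=ntp.example.org
  EOF
  sudo systemctl restart systemd-timesyncd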
Sep 5 23:50:50.704395 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 5 23:50:50.707131 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. Sep 5 23:50:50.711421 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 5 23:50:50.716322 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 5 23:50:50.722423 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 5 23:50:50.723885 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 5 23:50:50.725487 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 5 23:50:50.726409 systemd[1]: Starting update-engine.service - Update Engine... Sep 5 23:50:50.728520 jq[1450]: false Sep 5 23:50:50.729434 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 5 23:50:50.733017 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Sep 5 23:50:50.736822 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 5 23:50:50.737306 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 5 23:50:50.756247 jq[1461]: true Sep 5 23:50:50.773426 dbus-daemon[1449]: [system] SELinux support is enabled Sep 5 23:50:50.774337 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 5 23:50:50.779867 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 5 23:50:50.779909 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 5 23:50:50.791707 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 5 23:50:50.793291 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 5 23:50:50.805524 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 5 23:50:50.806967 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 5 23:50:50.829755 jq[1474]: true Sep 5 23:50:50.834730 systemd[1]: motdgen.service: Deactivated successfully. Sep 5 23:50:50.839558 coreos-metadata[1448]: Sep 05 23:50:50.832 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Sep 5 23:50:50.834938 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Sep 5 23:50:50.836215 (ntainerd)[1481]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 5 23:50:50.852811 coreos-metadata[1448]: Sep 05 23:50:50.845 INFO Fetch successful Sep 5 23:50:50.852811 coreos-metadata[1448]: Sep 05 23:50:50.846 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Sep 5 23:50:50.852811 coreos-metadata[1448]: Sep 05 23:50:50.850 INFO Fetch successful Sep 5 23:50:50.862303 extend-filesystems[1453]: Found loop4 Sep 5 23:50:50.862303 extend-filesystems[1453]: Found loop5 Sep 5 23:50:50.862303 extend-filesystems[1453]: Found loop6 Sep 5 23:50:50.862303 extend-filesystems[1453]: Found loop7 Sep 5 23:50:50.862303 extend-filesystems[1453]: Found sda Sep 5 23:50:50.862303 extend-filesystems[1453]: Found sda1 Sep 5 23:50:50.862303 extend-filesystems[1453]: Found sda2 Sep 5 23:50:50.862303 extend-filesystems[1453]: Found sda3 Sep 5 23:50:50.862303 extend-filesystems[1453]: Found usr Sep 5 23:50:50.862303 extend-filesystems[1453]: Found sda4 Sep 5 23:50:50.862303 extend-filesystems[1453]: Found sda6 Sep 5 23:50:50.862303 extend-filesystems[1453]: Found sda7 Sep 5 23:50:50.862303 extend-filesystems[1453]: Found sda9 Sep 5 23:50:50.862303 extend-filesystems[1453]: Checking size of /dev/sda9 Sep 5 23:50:50.941047 extend-filesystems[1453]: Resized partition /dev/sda9 Sep 5 23:50:50.946516 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Sep 5 23:50:50.946573 update_engine[1460]: I20250905 23:50:50.863741 1460 main.cc:92] Flatcar Update Engine starting Sep 5 23:50:50.946573 update_engine[1460]: I20250905 23:50:50.884369 1460 update_check_scheduler.cc:74] Next update check in 11m30s Sep 5 23:50:50.946783 tar[1480]: linux-arm64/helm Sep 5 23:50:50.879796 systemd[1]: Started update-engine.service - Update Engine. Sep 5 23:50:50.947010 extend-filesystems[1505]: resize2fs 1.47.1 (20-May-2024) Sep 5 23:50:50.884417 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 5 23:50:50.963373 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Sep 5 23:50:50.965143 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 5 23:50:50.986983 systemd-logind[1459]: New seat seat0. Sep 5 23:50:51.010213 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1386) Sep 5 23:50:51.014362 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 5 23:50:51.025511 bash[1518]: Updated "/home/core/.ssh/authorized_keys" Sep 5 23:50:51.035949 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 5 23:50:51.036352 systemd-logind[1459]: Watching system buttons on /dev/input/event0 (Power Button) Sep 5 23:50:51.036369 systemd-logind[1459]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard) Sep 5 23:50:51.037795 systemd[1]: Started systemd-logind.service - User Login Management. Sep 5 23:50:51.060722 systemd[1]: Starting sshkeys.service... Sep 5 23:50:51.111980 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Sep 5 23:50:51.115232 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Sep 5 23:50:51.121882 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
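coreos-metadata is simply fetching the link-local Hetzner metadata service shown in the fetch lines above; the same endpoints can be queried by hand from the instance:

  # Endpoints taken verbatim from the coreos-metadata fetch lines
  curl -s http://169.254.169.254/hetzner/v1/metadata
  curl -s http://169.254.169.254/hetzner/v1/metadata/private-networks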
Sep 5 23:50:51.131623 extend-filesystems[1505]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Sep 5 23:50:51.131623 extend-filesystems[1505]: old_desc_blocks = 1, new_desc_blocks = 5 Sep 5 23:50:51.131623 extend-filesystems[1505]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. Sep 5 23:50:51.135652 extend-filesystems[1453]: Resized filesystem in /dev/sda9 Sep 5 23:50:51.135652 extend-filesystems[1453]: Found sr0 Sep 5 23:50:51.135457 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 5 23:50:51.135680 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 5 23:50:51.152242 coreos-metadata[1531]: Sep 05 23:50:51.150 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Sep 5 23:50:51.152242 coreos-metadata[1531]: Sep 05 23:50:51.151 INFO Fetch successful Sep 5 23:50:51.151455 locksmithd[1492]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 5 23:50:51.155295 unknown[1531]: wrote ssh authorized keys file for user: core Sep 5 23:50:51.205443 update-ssh-keys[1537]: Updated "/home/core/.ssh/authorized_keys" Sep 5 23:50:51.206464 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Sep 5 23:50:51.210406 systemd[1]: Finished sshkeys.service. Sep 5 23:50:51.262630 containerd[1481]: time="2025-09-05T23:50:51.262527419Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Sep 5 23:50:51.328491 containerd[1481]: time="2025-09-05T23:50:51.328384488Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 5 23:50:51.334243 containerd[1481]: time="2025-09-05T23:50:51.333393161Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.103-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 5 23:50:51.334243 containerd[1481]: time="2025-09-05T23:50:51.333434812Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 5 23:50:51.334243 containerd[1481]: time="2025-09-05T23:50:51.333453080Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 5 23:50:51.334243 containerd[1481]: time="2025-09-05T23:50:51.333613360Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Sep 5 23:50:51.334243 containerd[1481]: time="2025-09-05T23:50:51.333676011Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Sep 5 23:50:51.334243 containerd[1481]: time="2025-09-05T23:50:51.333756776Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 5 23:50:51.334243 containerd[1481]: time="2025-09-05T23:50:51.333771999Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 5 23:50:51.334243 containerd[1481]: time="2025-09-05T23:50:51.333931928Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 5 23:50:51.334243 containerd[1481]: time="2025-09-05T23:50:51.333953124Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 5 23:50:51.334243 containerd[1481]: time="2025-09-05T23:50:51.333966279Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Sep 5 23:50:51.334243 containerd[1481]: time="2025-09-05T23:50:51.333979590Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 5 23:50:51.334500 containerd[1481]: time="2025-09-05T23:50:51.334052859Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 5 23:50:51.334679 containerd[1481]: time="2025-09-05T23:50:51.334646978Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 5 23:50:51.334908 containerd[1481]: time="2025-09-05T23:50:51.334843209Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 5 23:50:51.337577 containerd[1481]: time="2025-09-05T23:50:51.337211995Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 5 23:50:51.337577 containerd[1481]: time="2025-09-05T23:50:51.337356387Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 5 23:50:51.337577 containerd[1481]: time="2025-09-05T23:50:51.337397725Z" level=info msg="metadata content store policy set" policy=shared Sep 5 23:50:51.343942 containerd[1481]: time="2025-09-05T23:50:51.343489748Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 5 23:50:51.343942 containerd[1481]: time="2025-09-05T23:50:51.343552360Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 5 23:50:51.343942 containerd[1481]: time="2025-09-05T23:50:51.343574611Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Sep 5 23:50:51.343942 containerd[1481]: time="2025-09-05T23:50:51.343596744Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Sep 5 23:50:51.343942 containerd[1481]: time="2025-09-05T23:50:51.343611499Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 5 23:50:51.343942 containerd[1481]: time="2025-09-05T23:50:51.343778961Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 5 23:50:51.344125 containerd[1481]: time="2025-09-05T23:50:51.344070555Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 5 23:50:51.345238 containerd[1481]: time="2025-09-05T23:50:51.345202699Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." 
type=io.containerd.runtime.v2 Sep 5 23:50:51.345273 containerd[1481]: time="2025-09-05T23:50:51.345242789Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Sep 5 23:50:51.345273 containerd[1481]: time="2025-09-05T23:50:51.345258715Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Sep 5 23:50:51.345308 containerd[1481]: time="2025-09-05T23:50:51.345295760Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 5 23:50:51.345326 containerd[1481]: time="2025-09-05T23:50:51.345310593Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 5 23:50:51.345356 containerd[1481]: time="2025-09-05T23:50:51.345324217Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 5 23:50:51.345375 containerd[1481]: time="2025-09-05T23:50:51.345354938Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 5 23:50:51.345393 containerd[1481]: time="2025-09-05T23:50:51.345375002Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 5 23:50:51.345393 containerd[1481]: time="2025-09-05T23:50:51.345388976Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 5 23:50:51.345441 containerd[1481]: time="2025-09-05T23:50:51.345402053Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 5 23:50:51.345441 containerd[1481]: time="2025-09-05T23:50:51.345437302Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 5 23:50:51.345476 containerd[1481]: time="2025-09-05T23:50:51.345461270Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 5 23:50:51.345498 containerd[1481]: time="2025-09-05T23:50:51.345481178Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 5 23:50:51.345648 containerd[1481]: time="2025-09-05T23:50:51.345624789Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 5 23:50:51.345677 containerd[1481]: time="2025-09-05T23:50:51.345654183Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 5 23:50:51.345677 containerd[1481]: time="2025-09-05T23:50:51.345667923Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 5 23:50:51.346019 containerd[1481]: time="2025-09-05T23:50:51.345682015Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 5 23:50:51.346044 containerd[1481]: time="2025-09-05T23:50:51.346032241Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 5 23:50:51.346071 containerd[1481]: time="2025-09-05T23:50:51.346050041Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 5 23:50:51.346090 containerd[1481]: time="2025-09-05T23:50:51.346079708Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." 
type=io.containerd.grpc.v1 Sep 5 23:50:51.346108 containerd[1481]: time="2025-09-05T23:50:51.346098172Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Sep 5 23:50:51.346127 containerd[1481]: time="2025-09-05T23:50:51.346111912Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 5 23:50:51.346127 containerd[1481]: time="2025-09-05T23:50:51.346123740Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Sep 5 23:50:51.346176 containerd[1481]: time="2025-09-05T23:50:51.346139940Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 5 23:50:51.346176 containerd[1481]: time="2025-09-05T23:50:51.346170660Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 5 23:50:51.346559 containerd[1481]: time="2025-09-05T23:50:51.346534861Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 5 23:50:51.346640 containerd[1481]: time="2025-09-05T23:50:51.346621676Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 5 23:50:51.346665 containerd[1481]: time="2025-09-05T23:50:51.346644746Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 5 23:50:51.350212 containerd[1481]: time="2025-09-05T23:50:51.348122392Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 5 23:50:51.350212 containerd[1481]: time="2025-09-05T23:50:51.348212212Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 5 23:50:51.350212 containerd[1481]: time="2025-09-05T23:50:51.348231574Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 5 23:50:51.350212 containerd[1481]: time="2025-09-05T23:50:51.348552133Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 5 23:50:51.350212 containerd[1481]: time="2025-09-05T23:50:51.348602020Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 5 23:50:51.350212 containerd[1481]: time="2025-09-05T23:50:51.349743728Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 5 23:50:51.350212 containerd[1481]: time="2025-09-05T23:50:51.349772770Z" level=info msg="NRI interface is disabled by configuration." Sep 5 23:50:51.350212 containerd[1481]: time="2025-09-05T23:50:51.349800290Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Sep 5 23:50:51.350977 containerd[1481]: time="2025-09-05T23:50:51.350649076Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 5 23:50:51.351093 containerd[1481]: time="2025-09-05T23:50:51.350997428Z" level=info msg="Connect containerd service" Sep 5 23:50:51.351093 containerd[1481]: time="2025-09-05T23:50:51.351054927Z" level=info msg="using legacy CRI server" Sep 5 23:50:51.351093 containerd[1481]: time="2025-09-05T23:50:51.351069487Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 5 23:50:51.351558 containerd[1481]: time="2025-09-05T23:50:51.351531198Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 5 23:50:51.356057 containerd[1481]: time="2025-09-05T23:50:51.354461703Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 5 23:50:51.356057 
containerd[1481]: time="2025-09-05T23:50:51.354671636Z" level=info msg="Start subscribing containerd event" Sep 5 23:50:51.356057 containerd[1481]: time="2025-09-05T23:50:51.354730384Z" level=info msg="Start recovering state" Sep 5 23:50:51.356057 containerd[1481]: time="2025-09-05T23:50:51.354804356Z" level=info msg="Start event monitor" Sep 5 23:50:51.356057 containerd[1481]: time="2025-09-05T23:50:51.354814857Z" level=info msg="Start snapshots syncer" Sep 5 23:50:51.356057 containerd[1481]: time="2025-09-05T23:50:51.354824499Z" level=info msg="Start cni network conf syncer for default" Sep 5 23:50:51.356057 containerd[1481]: time="2025-09-05T23:50:51.354831915Z" level=info msg="Start streaming server" Sep 5 23:50:51.356057 containerd[1481]: time="2025-09-05T23:50:51.354935788Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 5 23:50:51.356057 containerd[1481]: time="2025-09-05T23:50:51.354974316Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 5 23:50:51.355134 systemd[1]: Started containerd.service - containerd container runtime. Sep 5 23:50:51.357635 containerd[1481]: time="2025-09-05T23:50:51.357595856Z" level=info msg="containerd successfully booted in 0.098115s" Sep 5 23:50:51.457265 sshd_keygen[1472]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 5 23:50:51.481261 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 5 23:50:51.492515 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 5 23:50:51.494543 systemd[1]: Started sshd@0-91.99.146.49:22-139.178.68.195:40488.service - OpenSSH per-connection server daemon (139.178.68.195:40488). Sep 5 23:50:51.509074 systemd[1]: issuegen.service: Deactivated successfully. Sep 5 23:50:51.512865 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 5 23:50:51.524671 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 5 23:50:51.545094 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 5 23:50:51.555469 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 5 23:50:51.566731 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Sep 5 23:50:51.568575 systemd[1]: Reached target getty.target - Login Prompts. Sep 5 23:50:51.571395 tar[1480]: linux-arm64/LICENSE Sep 5 23:50:51.571395 tar[1480]: linux-arm64/README.md Sep 5 23:50:51.592298 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 5 23:50:51.603394 systemd-networkd[1377]: eth1: Gained IPv6LL Sep 5 23:50:51.604538 systemd-networkd[1377]: eth0: Gained IPv6LL Sep 5 23:50:51.608611 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 5 23:50:51.611675 systemd[1]: Reached target network-online.target - Network is Online. Sep 5 23:50:51.618490 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 23:50:51.621950 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 5 23:50:51.658238 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 5 23:50:52.550631 sshd[1552]: Accepted publickey for core from 139.178.68.195 port 40488 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 5 23:50:52.551364 sshd[1552]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:50:52.564973 systemd-logind[1459]: New session 1 of user core. Sep 5 23:50:52.566587 systemd[1]: Created slice user-500.slice - User Slice of UID 500. 
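With containerd reporting "serving..." on /run/containerd/containerd.sock, the daemon can be probed directly; a quick sanity check:

  # The socket address comes from the "serving..." lines above
  sudo ctr --address /run/containerd/containerd.sock version
  sudo ctr --address /run/containerd/containerd.sock plugins ls | head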
Sep 5 23:50:52.576019 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 5 23:50:52.594681 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 5 23:50:52.606669 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 5 23:50:52.611671 (systemd)[1578]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 5 23:50:52.638160 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 23:50:52.639343 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 5 23:50:52.652749 (kubelet)[1587]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 5 23:50:52.729820 systemd[1578]: Queued start job for default target default.target. Sep 5 23:50:52.741535 systemd[1578]: Created slice app.slice - User Application Slice. Sep 5 23:50:52.741740 systemd[1578]: Reached target paths.target - Paths. Sep 5 23:50:52.741765 systemd[1578]: Reached target timers.target - Timers. Sep 5 23:50:52.750718 systemd[1578]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 5 23:50:52.760221 systemd[1578]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 5 23:50:52.760402 systemd[1578]: Reached target sockets.target - Sockets. Sep 5 23:50:52.760474 systemd[1578]: Reached target basic.target - Basic System. Sep 5 23:50:52.760613 systemd[1578]: Reached target default.target - Main User Target. Sep 5 23:50:52.760647 systemd[1578]: Startup finished in 141ms. Sep 5 23:50:52.761601 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 5 23:50:52.768424 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 5 23:50:52.771870 systemd[1]: Startup finished in 815ms (kernel) + 5.301s (initrd) + 4.810s (userspace) = 10.927s. Sep 5 23:50:53.277512 kubelet[1587]: E0905 23:50:53.277415 1587 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 5 23:50:53.281716 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 5 23:50:53.282043 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 5 23:50:53.494595 systemd[1]: Started sshd@1-91.99.146.49:22-139.178.68.195:40502.service - OpenSSH per-connection server daemon (139.178.68.195:40502). Sep 5 23:50:54.478865 sshd[1605]: Accepted publickey for core from 139.178.68.195 port 40502 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 5 23:50:54.482331 sshd[1605]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:50:54.488235 systemd-logind[1459]: New session 2 of user core. Sep 5 23:50:54.496570 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 5 23:50:55.160505 sshd[1605]: pam_unix(sshd:session): session closed for user core Sep 5 23:50:55.166445 systemd[1]: sshd@1-91.99.146.49:22-139.178.68.195:40502.service: Deactivated successfully. Sep 5 23:50:55.171620 systemd[1]: session-2.scope: Deactivated successfully. Sep 5 23:50:55.172810 systemd-logind[1459]: Session 2 logged out. Waiting for processes to exit. Sep 5 23:50:55.174040 systemd-logind[1459]: Removed session 2. 
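The kubelet failure above is the expected state on a node that has not yet been joined to a cluster: /var/lib/kubelet/config.yaml is written by kubeadm during init/join, so until then the unit exits and systemd keeps restarting it. A hedged sketch of how that is normally resolved (endpoint, token and hash below are placeholders, not values from this log):

  # Confirm the missing file is the only problem
  ls -l /var/lib/kubelet/config.yaml || echo "not generated yet"
  # Joining the node writes the config, after which the kubelet starts cleanly
  sudo kubeadm join 10.0.0.2:6443 \
    --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:<hash>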
Sep 5 23:50:55.331114 systemd[1]: Started sshd@2-91.99.146.49:22-139.178.68.195:40506.service - OpenSSH per-connection server daemon (139.178.68.195:40506). Sep 5 23:50:56.312638 sshd[1612]: Accepted publickey for core from 139.178.68.195 port 40506 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 5 23:50:56.314668 sshd[1612]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:50:56.322474 systemd-logind[1459]: New session 3 of user core. Sep 5 23:50:56.329536 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 5 23:50:56.994516 sshd[1612]: pam_unix(sshd:session): session closed for user core Sep 5 23:50:56.999071 systemd[1]: sshd@2-91.99.146.49:22-139.178.68.195:40506.service: Deactivated successfully. Sep 5 23:50:56.999241 systemd-logind[1459]: Session 3 logged out. Waiting for processes to exit. Sep 5 23:50:57.001976 systemd[1]: session-3.scope: Deactivated successfully. Sep 5 23:50:57.003919 systemd-logind[1459]: Removed session 3. Sep 5 23:50:57.170536 systemd[1]: Started sshd@3-91.99.146.49:22-139.178.68.195:40508.service - OpenSSH per-connection server daemon (139.178.68.195:40508). Sep 5 23:50:58.169148 sshd[1619]: Accepted publickey for core from 139.178.68.195 port 40508 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 5 23:50:58.171557 sshd[1619]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:50:58.178156 systemd-logind[1459]: New session 4 of user core. Sep 5 23:50:58.189556 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 5 23:50:58.858785 sshd[1619]: pam_unix(sshd:session): session closed for user core Sep 5 23:50:58.865043 systemd[1]: sshd@3-91.99.146.49:22-139.178.68.195:40508.service: Deactivated successfully. Sep 5 23:50:58.867667 systemd[1]: session-4.scope: Deactivated successfully. Sep 5 23:50:58.871839 systemd-logind[1459]: Session 4 logged out. Waiting for processes to exit. Sep 5 23:50:58.874157 systemd-logind[1459]: Removed session 4. Sep 5 23:50:59.039947 systemd[1]: Started sshd@4-91.99.146.49:22-139.178.68.195:40522.service - OpenSSH per-connection server daemon (139.178.68.195:40522). Sep 5 23:51:00.025615 sshd[1626]: Accepted publickey for core from 139.178.68.195 port 40522 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 5 23:51:00.027773 sshd[1626]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:51:00.031767 systemd-logind[1459]: New session 5 of user core. Sep 5 23:51:00.036415 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 5 23:51:00.559839 sudo[1629]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 5 23:51:00.560130 sudo[1629]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 5 23:51:00.579155 sudo[1629]: pam_unix(sudo:session): session closed for user root Sep 5 23:51:00.740915 sshd[1626]: pam_unix(sshd:session): session closed for user core Sep 5 23:51:00.747484 systemd-logind[1459]: Session 5 logged out. Waiting for processes to exit. Sep 5 23:51:00.748729 systemd[1]: sshd@4-91.99.146.49:22-139.178.68.195:40522.service: Deactivated successfully. Sep 5 23:51:00.751471 systemd[1]: session-5.scope: Deactivated successfully. Sep 5 23:51:00.754377 systemd-logind[1459]: Removed session 5. Sep 5 23:51:00.917758 systemd[1]: Started sshd@5-91.99.146.49:22-139.178.68.195:52504.service - OpenSSH per-connection server daemon (139.178.68.195:52504). 
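The sudo entry above shows the provisioning session switching SELinux to enforcing; the mode can be checked and toggled the same way:

  getenforce           # prints Enforcing, Permissive or Disabled
  sudo setenforce 1    # enforcing until reboot, as in the logged command
  sudo setenforce 0    # back to permissive if something breaks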
Sep 5 23:51:01.906402 sshd[1634]: Accepted publickey for core from 139.178.68.195 port 52504 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 5 23:51:01.909176 sshd[1634]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:51:01.918507 systemd-logind[1459]: New session 6 of user core. Sep 5 23:51:01.921424 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 5 23:51:02.439508 sudo[1638]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 5 23:51:02.442029 sudo[1638]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 5 23:51:02.451403 sudo[1638]: pam_unix(sudo:session): session closed for user root Sep 5 23:51:02.459868 sudo[1637]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Sep 5 23:51:02.460151 sudo[1637]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 5 23:51:02.488366 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Sep 5 23:51:02.490719 auditctl[1641]: No rules Sep 5 23:51:02.491070 systemd[1]: audit-rules.service: Deactivated successfully. Sep 5 23:51:02.491298 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Sep 5 23:51:02.495330 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 5 23:51:02.526760 augenrules[1659]: No rules Sep 5 23:51:02.528631 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 5 23:51:02.530820 sudo[1637]: pam_unix(sudo:session): session closed for user root Sep 5 23:51:02.693583 sshd[1634]: pam_unix(sshd:session): session closed for user core Sep 5 23:51:02.698601 systemd-logind[1459]: Session 6 logged out. Waiting for processes to exit. Sep 5 23:51:02.699677 systemd[1]: sshd@5-91.99.146.49:22-139.178.68.195:52504.service: Deactivated successfully. Sep 5 23:51:02.703045 systemd[1]: session-6.scope: Deactivated successfully. Sep 5 23:51:02.704125 systemd-logind[1459]: Removed session 6. Sep 5 23:51:02.875693 systemd[1]: Started sshd@6-91.99.146.49:22-139.178.68.195:52518.service - OpenSSH per-connection server daemon (139.178.68.195:52518). Sep 5 23:51:03.532526 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 5 23:51:03.539608 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 23:51:03.653223 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 23:51:03.658726 (kubelet)[1676]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 5 23:51:03.712216 kubelet[1676]: E0905 23:51:03.712153 1676 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 5 23:51:03.716826 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 5 23:51:03.717026 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
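The audit-rules restart above ends with "No rules" because the default rule files were removed by the preceding sudo commands; persistent rules go back in as files under /etc/audit/rules.d, for example (the watch rule is purely illustrative):

  cat <<'EOF' | sudo tee /etc/audit/rules.d/90-identity.rules
  # Illustrative rule: record writes and attribute changes to /etc/passwd
  -w /etc/passwd -p wa -k identity
  EOF
  sudo systemctl restart audit-rules    # same unit restarted in the log
  sudo auditctl -l                      # list the rules now loaded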
Sep 5 23:51:03.862684 sshd[1667]: Accepted publickey for core from 139.178.68.195 port 52518 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 5 23:51:03.865250 sshd[1667]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:51:03.872821 systemd-logind[1459]: New session 7 of user core. Sep 5 23:51:03.879664 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 5 23:51:04.391234 sudo[1685]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 5 23:51:04.391526 sudo[1685]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 5 23:51:04.688573 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 5 23:51:04.700771 (dockerd)[1700]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 5 23:51:04.961601 dockerd[1700]: time="2025-09-05T23:51:04.960811165Z" level=info msg="Starting up" Sep 5 23:51:05.047298 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1106770838-merged.mount: Deactivated successfully. Sep 5 23:51:05.068800 dockerd[1700]: time="2025-09-05T23:51:05.068753754Z" level=info msg="Loading containers: start." Sep 5 23:51:05.182943 kernel: Initializing XFRM netlink socket Sep 5 23:51:05.266945 systemd-networkd[1377]: docker0: Link UP Sep 5 23:51:05.285064 dockerd[1700]: time="2025-09-05T23:51:05.284974206Z" level=info msg="Loading containers: done." Sep 5 23:51:05.300673 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck491605639-merged.mount: Deactivated successfully. Sep 5 23:51:05.310322 dockerd[1700]: time="2025-09-05T23:51:05.310221208Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 5 23:51:05.310495 dockerd[1700]: time="2025-09-05T23:51:05.310385595Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Sep 5 23:51:05.310593 dockerd[1700]: time="2025-09-05T23:51:05.310561299Z" level=info msg="Daemon has completed initialization" Sep 5 23:51:05.352662 dockerd[1700]: time="2025-09-05T23:51:05.352405590Z" level=info msg="API listen on /run/docker.sock" Sep 5 23:51:05.352888 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 5 23:51:06.477228 containerd[1481]: time="2025-09-05T23:51:06.477044121Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\"" Sep 5 23:51:07.121238 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount634260954.mount: Deactivated successfully. 
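With dockerd reporting "API listen on /run/docker.sock", the engine can be exercised through the CLI or straight over the Unix socket:

  docker info --format '{{.ServerVersion}} {{.Driver}}'             # should report 26.1.0 / overlay2 per the log
  curl -s --unix-socket /run/docker.sock http://localhost/version   # raw Engine API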
Sep 5 23:51:07.952217 containerd[1481]: time="2025-09-05T23:51:07.950555922Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:51:07.952217 containerd[1481]: time="2025-09-05T23:51:07.952002631Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.12: active requests=0, bytes read=25652533" Sep 5 23:51:07.953617 containerd[1481]: time="2025-09-05T23:51:07.953558907Z" level=info msg="ImageCreate event name:\"sha256:25d00c9505e8a4a7a6c827030f878b50e58bbf63322e01a7d92807bcb4db6b3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:51:07.957426 containerd[1481]: time="2025-09-05T23:51:07.957368350Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e9011c3bee8c06ecabd7816e119dca4e448c92f7a78acd891de3d2db1dc6c234\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:51:07.959004 containerd[1481]: time="2025-09-05T23:51:07.958961481Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.12\" with image id \"sha256:25d00c9505e8a4a7a6c827030f878b50e58bbf63322e01a7d92807bcb4db6b3d\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e9011c3bee8c06ecabd7816e119dca4e448c92f7a78acd891de3d2db1dc6c234\", size \"25649241\" in 1.481869153s" Sep 5 23:51:07.959143 containerd[1481]: time="2025-09-05T23:51:07.959126012Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\" returns image reference \"sha256:25d00c9505e8a4a7a6c827030f878b50e58bbf63322e01a7d92807bcb4db6b3d\"" Sep 5 23:51:07.960887 containerd[1481]: time="2025-09-05T23:51:07.960845981Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\"" Sep 5 23:51:09.071259 containerd[1481]: time="2025-09-05T23:51:09.071135306Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:51:09.073117 containerd[1481]: time="2025-09-05T23:51:09.072779195Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.12: active requests=0, bytes read=22460329" Sep 5 23:51:09.076218 containerd[1481]: time="2025-09-05T23:51:09.074086379Z" level=info msg="ImageCreate event name:\"sha256:04df324666956d4cb57096c0edff6bfe1d75e71fb8f508dec8818f2842f821e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:51:09.078453 containerd[1481]: time="2025-09-05T23:51:09.078393170Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d2862f94d87320267fddbd55db26556a267aa802e51d6b60f25786b4c428afc8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:51:09.080072 containerd[1481]: time="2025-09-05T23:51:09.080021932Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.12\" with image id \"sha256:04df324666956d4cb57096c0edff6bfe1d75e71fb8f508dec8818f2842f821e1\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d2862f94d87320267fddbd55db26556a267aa802e51d6b60f25786b4c428afc8\", size \"23997423\" in 1.119128041s" Sep 5 23:51:09.080072 containerd[1481]: time="2025-09-05T23:51:09.080065357Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\" returns image reference \"sha256:04df324666956d4cb57096c0edff6bfe1d75e71fb8f508dec8818f2842f821e1\"" Sep 5 23:51:09.081078 
containerd[1481]: time="2025-09-05T23:51:09.081050525Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\"" Sep 5 23:51:10.105215 containerd[1481]: time="2025-09-05T23:51:10.105114609Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:51:10.106969 containerd[1481]: time="2025-09-05T23:51:10.106876323Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.12: active requests=0, bytes read=17125923" Sep 5 23:51:10.108212 containerd[1481]: time="2025-09-05T23:51:10.108134319Z" level=info msg="ImageCreate event name:\"sha256:00b0619122c2d4fd3b5e102e9850d8c732e08a386b9c172c409b3a5cd552e07d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:51:10.111939 containerd[1481]: time="2025-09-05T23:51:10.111875848Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:152943b7e30244f4415fd0a5860a2dccd91660fe983d30a28a10edb0cc8f6756\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:51:10.114428 containerd[1481]: time="2025-09-05T23:51:10.114256100Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.12\" with image id \"sha256:00b0619122c2d4fd3b5e102e9850d8c732e08a386b9c172c409b3a5cd552e07d\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:152943b7e30244f4415fd0a5860a2dccd91660fe983d30a28a10edb0cc8f6756\", size \"18663035\" in 1.033062602s" Sep 5 23:51:10.114428 containerd[1481]: time="2025-09-05T23:51:10.114314388Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\" returns image reference \"sha256:00b0619122c2d4fd3b5e102e9850d8c732e08a386b9c172c409b3a5cd552e07d\"" Sep 5 23:51:10.115004 containerd[1481]: time="2025-09-05T23:51:10.114951970Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\"" Sep 5 23:51:11.072300 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3120721567.mount: Deactivated successfully. 
Sep 5 23:51:11.365139 containerd[1481]: time="2025-09-05T23:51:11.363684552Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:51:11.366525 containerd[1481]: time="2025-09-05T23:51:11.366469296Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.12: active requests=0, bytes read=26916121" Sep 5 23:51:11.368397 containerd[1481]: time="2025-09-05T23:51:11.368322398Z" level=info msg="ImageCreate event name:\"sha256:25c7652bd0d893b147dce9135dc6a68c37da76f9a20dceec1d520782031b2f36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:51:11.372893 containerd[1481]: time="2025-09-05T23:51:11.371306610Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:90aa6b5f4065937521ff8438bc705317485d0be3f8b00a07145e697d92cc2cc6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:51:11.372893 containerd[1481]: time="2025-09-05T23:51:11.372566903Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.12\" with image id \"sha256:25c7652bd0d893b147dce9135dc6a68c37da76f9a20dceec1d520782031b2f36\", repo tag \"registry.k8s.io/kube-proxy:v1.31.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:90aa6b5f4065937521ff8438bc705317485d0be3f8b00a07145e697d92cc2cc6\", size \"26915114\" in 1.257449166s" Sep 5 23:51:11.372893 containerd[1481]: time="2025-09-05T23:51:11.372613545Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\" returns image reference \"sha256:25c7652bd0d893b147dce9135dc6a68c37da76f9a20dceec1d520782031b2f36\"" Sep 5 23:51:11.373708 containerd[1481]: time="2025-09-05T23:51:11.373629886Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 5 23:51:12.065587 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1656270845.mount: Deactivated successfully. 
Sep 5 23:51:12.793297 containerd[1481]: time="2025-09-05T23:51:12.793236960Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:51:12.795694 containerd[1481]: time="2025-09-05T23:51:12.795635092Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951714" Sep 5 23:51:12.796438 containerd[1481]: time="2025-09-05T23:51:12.796402890Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:51:12.802848 containerd[1481]: time="2025-09-05T23:51:12.802779484Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:51:12.805452 containerd[1481]: time="2025-09-05T23:51:12.805393022Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.431682668s" Sep 5 23:51:12.805452 containerd[1481]: time="2025-09-05T23:51:12.805448061Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Sep 5 23:51:12.809440 containerd[1481]: time="2025-09-05T23:51:12.809382028Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 5 23:51:13.402323 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2050903682.mount: Deactivated successfully. 
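These pulls are containerd fetching the standard control-plane images (kube-scheduler, kube-proxy, coredns, pause, etcd) ahead of the kubelet coming up. A minimal sketch of reproducing and checking one such pull by hand, assuming crictl is installed and containerd listens on its default socket (both are assumptions, not taken from this log):

    # point crictl at containerd's CRI socket (default path assumed)
    export CONTAINER_RUNTIME_ENDPOINT=unix:///run/containerd/containerd.sock
    # pull one of the same images and confirm what is already cached
    crictl pull registry.k8s.io/coredns/coredns:v1.11.3
    crictl images | grep -E 'kube-scheduler|kube-proxy|coredns|pause|etcd'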
Sep 5 23:51:13.411525 containerd[1481]: time="2025-09-05T23:51:13.411472282Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:51:13.412943 containerd[1481]: time="2025-09-05T23:51:13.412862104Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268723" Sep 5 23:51:13.414343 containerd[1481]: time="2025-09-05T23:51:13.414287760Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:51:13.418149 containerd[1481]: time="2025-09-05T23:51:13.418089376Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:51:13.419601 containerd[1481]: time="2025-09-05T23:51:13.419296312Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 609.857845ms" Sep 5 23:51:13.419601 containerd[1481]: time="2025-09-05T23:51:13.419341335Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Sep 5 23:51:13.420769 containerd[1481]: time="2025-09-05T23:51:13.420743900Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Sep 5 23:51:13.926245 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 5 23:51:13.945574 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 23:51:14.041002 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2688201090.mount: Deactivated successfully. Sep 5 23:51:14.092072 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 23:51:14.102774 (kubelet)[1981]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 5 23:51:14.151703 kubelet[1981]: E0905 23:51:14.151556 1981 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 5 23:51:14.154804 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 5 23:51:14.154937 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
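The kubelet restart loop above is expected before bootstrap: /var/lib/kubelet/config.yaml is only written once kubeadm init/join runs, so until then the unit exits with status 1 and systemd keeps rescheduling it. A small sketch for confirming that state on the node, assuming a kubeadm-style bootstrap:

    systemctl status kubelet.service --no-pager    # shows the restart counter seen above
    ls -l /var/lib/kubelet/config.yaml             # absent until kubeadm init/join has run
    journalctl -u kubelet.service -n 20 --no-pager # the same "no such file or directory" error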
Sep 5 23:51:15.497218 containerd[1481]: time="2025-09-05T23:51:15.495418996Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:51:15.497218 containerd[1481]: time="2025-09-05T23:51:15.496938069Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66537235" Sep 5 23:51:15.497218 containerd[1481]: time="2025-09-05T23:51:15.497142309Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:51:15.501020 containerd[1481]: time="2025-09-05T23:51:15.500967123Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:51:15.503224 containerd[1481]: time="2025-09-05T23:51:15.503160975Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 2.08237904s" Sep 5 23:51:15.503395 containerd[1481]: time="2025-09-05T23:51:15.503370050Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Sep 5 23:51:21.147504 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 23:51:21.169044 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 23:51:21.201512 systemd[1]: Reloading requested from client PID 2063 ('systemctl') (unit session-7.scope)... Sep 5 23:51:21.201535 systemd[1]: Reloading... Sep 5 23:51:21.317318 zram_generator::config[2106]: No configuration found. Sep 5 23:51:21.416265 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 5 23:51:21.487580 systemd[1]: Reloading finished in 285 ms. Sep 5 23:51:21.558533 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 23:51:21.560676 (kubelet)[2144]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 5 23:51:21.562510 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 23:51:21.563099 systemd[1]: kubelet.service: Deactivated successfully. Sep 5 23:51:21.563440 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 23:51:21.571830 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 23:51:21.688237 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 23:51:21.692868 (kubelet)[2154]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 5 23:51:21.739776 kubelet[2154]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
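kubelet.service references KUBELET_EXTRA_ARGS and KUBELET_KUBEADM_ARGS, and the messages above note they are unset, which is harmless. A hedged sketch of supplying extra flags through a systemd drop-in; the drop-in name and the --node-ip placeholder are illustrative, not taken from this host:

    mkdir -p /etc/systemd/system/kubelet.service.d
    cat <<'EOF' >/etc/systemd/system/kubelet.service.d/20-extra-args.conf
    [Service]
    Environment="KUBELET_EXTRA_ARGS=--node-ip=<node-ip>"
    EOF
    systemctl daemon-reload && systemctl restart kubelet.service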
Sep 5 23:51:21.739776 kubelet[2154]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 5 23:51:21.739776 kubelet[2154]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 5 23:51:21.740497 kubelet[2154]: I0905 23:51:21.739927 2154 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 5 23:51:22.772406 kubelet[2154]: I0905 23:51:22.772326 2154 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 5 23:51:22.772406 kubelet[2154]: I0905 23:51:22.772397 2154 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 5 23:51:22.773039 kubelet[2154]: I0905 23:51:22.772841 2154 server.go:934] "Client rotation is on, will bootstrap in background" Sep 5 23:51:22.806885 kubelet[2154]: E0905 23:51:22.806839 2154 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://91.99.146.49:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 91.99.146.49:6443: connect: connection refused" logger="UnhandledError" Sep 5 23:51:22.809481 kubelet[2154]: I0905 23:51:22.809288 2154 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 5 23:51:22.818252 kubelet[2154]: E0905 23:51:22.818204 2154 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 5 23:51:22.818252 kubelet[2154]: I0905 23:51:22.818240 2154 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 5 23:51:22.822425 kubelet[2154]: I0905 23:51:22.822400 2154 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 5 23:51:22.822726 kubelet[2154]: I0905 23:51:22.822715 2154 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 5 23:51:22.822874 kubelet[2154]: I0905 23:51:22.822846 2154 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 5 23:51:22.823104 kubelet[2154]: I0905 23:51:22.822876 2154 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-5-n-c970465010","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 5 23:51:22.823286 kubelet[2154]: I0905 23:51:22.823272 2154 topology_manager.go:138] "Creating topology manager with none policy" Sep 5 23:51:22.823310 kubelet[2154]: I0905 23:51:22.823289 2154 container_manager_linux.go:300] "Creating device plugin manager" Sep 5 23:51:22.823516 kubelet[2154]: I0905 23:51:22.823501 2154 state_mem.go:36] "Initialized new in-memory state store" Sep 5 23:51:22.828244 kubelet[2154]: I0905 23:51:22.827485 2154 kubelet.go:408] "Attempting to sync node with API server" Sep 5 23:51:22.828244 kubelet[2154]: I0905 23:51:22.827528 2154 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 5 23:51:22.828244 kubelet[2154]: I0905 23:51:22.827557 2154 kubelet.go:314] "Adding apiserver pod source" Sep 5 23:51:22.828244 kubelet[2154]: I0905 23:51:22.827649 2154 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 5 23:51:22.831712 kubelet[2154]: W0905 23:51:22.831645 2154 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://91.99.146.49:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-5-n-c970465010&limit=500&resourceVersion=0": dial tcp 91.99.146.49:6443: connect: connection refused Sep 5 23:51:22.831833 kubelet[2154]: E0905 23:51:22.831721 2154 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://91.99.146.49:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-5-n-c970465010&limit=500&resourceVersion=0\": dial tcp 91.99.146.49:6443: connect: connection refused" logger="UnhandledError" Sep 5 23:51:22.832849 kubelet[2154]: W0905 23:51:22.832809 2154 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://91.99.146.49:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 91.99.146.49:6443: connect: connection refused Sep 5 23:51:22.832988 kubelet[2154]: E0905 23:51:22.832967 2154 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://91.99.146.49:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 91.99.146.49:6443: connect: connection refused" logger="UnhandledError" Sep 5 23:51:22.834014 kubelet[2154]: I0905 23:51:22.833976 2154 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 5 23:51:22.835679 kubelet[2154]: I0905 23:51:22.835646 2154 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 5 23:51:22.836033 kubelet[2154]: W0905 23:51:22.836009 2154 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 5 23:51:22.839766 kubelet[2154]: I0905 23:51:22.839455 2154 server.go:1274] "Started kubelet" Sep 5 23:51:22.845935 kubelet[2154]: E0905 23:51:22.844633 2154 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://91.99.146.49:6443/api/v1/namespaces/default/events\": dial tcp 91.99.146.49:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-5-n-c970465010.186287f31bb2d692 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-5-n-c970465010,UID:ci-4081-3-5-n-c970465010,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-5-n-c970465010,},FirstTimestamp:2025-09-05 23:51:22.83942261 +0000 UTC m=+1.143015627,LastTimestamp:2025-09-05 23:51:22.83942261 +0000 UTC m=+1.143015627,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-5-n-c970465010,}" Sep 5 23:51:22.846102 kubelet[2154]: I0905 23:51:22.846037 2154 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 5 23:51:22.846952 kubelet[2154]: I0905 23:51:22.846462 2154 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 5 23:51:22.846952 kubelet[2154]: I0905 23:51:22.846608 2154 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 5 23:51:22.848232 kubelet[2154]: I0905 23:51:22.848042 2154 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 5 23:51:22.848232 kubelet[2154]: I0905 23:51:22.848079 2154 server.go:449] "Adding debug handlers to kubelet server" Sep 5 23:51:22.851528 kubelet[2154]: I0905 23:51:22.851496 2154 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 5 23:51:22.857069 kubelet[2154]: I0905 23:51:22.856494 2154 volume_manager.go:289] 
"Starting Kubelet Volume Manager" Sep 5 23:51:22.857069 kubelet[2154]: I0905 23:51:22.856674 2154 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 5 23:51:22.857069 kubelet[2154]: I0905 23:51:22.856763 2154 reconciler.go:26] "Reconciler: start to sync state" Sep 5 23:51:22.859534 kubelet[2154]: E0905 23:51:22.859004 2154 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-5-n-c970465010\" not found" Sep 5 23:51:22.862854 kubelet[2154]: E0905 23:51:22.862796 2154 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://91.99.146.49:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-5-n-c970465010?timeout=10s\": dial tcp 91.99.146.49:6443: connect: connection refused" interval="200ms" Sep 5 23:51:22.862999 kubelet[2154]: W0905 23:51:22.862919 2154 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://91.99.146.49:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 91.99.146.49:6443: connect: connection refused Sep 5 23:51:22.862999 kubelet[2154]: E0905 23:51:22.862969 2154 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://91.99.146.49:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 91.99.146.49:6443: connect: connection refused" logger="UnhandledError" Sep 5 23:51:22.863459 kubelet[2154]: I0905 23:51:22.863439 2154 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 5 23:51:22.865967 kubelet[2154]: E0905 23:51:22.865938 2154 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 5 23:51:22.867352 kubelet[2154]: I0905 23:51:22.866533 2154 factory.go:221] Registration of the containerd container factory successfully Sep 5 23:51:22.867352 kubelet[2154]: I0905 23:51:22.866555 2154 factory.go:221] Registration of the systemd container factory successfully Sep 5 23:51:22.883020 kubelet[2154]: I0905 23:51:22.882962 2154 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 5 23:51:22.884516 kubelet[2154]: I0905 23:51:22.884475 2154 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 5 23:51:22.884678 kubelet[2154]: I0905 23:51:22.884658 2154 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 5 23:51:22.884763 kubelet[2154]: I0905 23:51:22.884752 2154 kubelet.go:2321] "Starting kubelet main sync loop" Sep 5 23:51:22.884939 kubelet[2154]: E0905 23:51:22.884903 2154 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 5 23:51:22.891126 kubelet[2154]: W0905 23:51:22.891047 2154 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://91.99.146.49:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 91.99.146.49:6443: connect: connection refused Sep 5 23:51:22.891268 kubelet[2154]: E0905 23:51:22.891133 2154 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://91.99.146.49:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 91.99.146.49:6443: connect: connection refused" logger="UnhandledError" Sep 5 23:51:22.896557 kubelet[2154]: I0905 23:51:22.896276 2154 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 5 23:51:22.896557 kubelet[2154]: I0905 23:51:22.896294 2154 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 5 23:51:22.896557 kubelet[2154]: I0905 23:51:22.896317 2154 state_mem.go:36] "Initialized new in-memory state store" Sep 5 23:51:22.899917 kubelet[2154]: I0905 23:51:22.899879 2154 policy_none.go:49] "None policy: Start" Sep 5 23:51:22.901642 kubelet[2154]: I0905 23:51:22.901205 2154 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 5 23:51:22.901642 kubelet[2154]: I0905 23:51:22.901247 2154 state_mem.go:35] "Initializing new in-memory state store" Sep 5 23:51:22.909234 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 5 23:51:22.922983 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 5 23:51:22.926676 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 5 23:51:22.940153 kubelet[2154]: I0905 23:51:22.939291 2154 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 5 23:51:22.940153 kubelet[2154]: I0905 23:51:22.939569 2154 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 5 23:51:22.940153 kubelet[2154]: I0905 23:51:22.939587 2154 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 5 23:51:22.942292 kubelet[2154]: I0905 23:51:22.942267 2154 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 5 23:51:22.943213 kubelet[2154]: E0905 23:51:22.943137 2154 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-5-n-c970465010\" not found" Sep 5 23:51:23.000064 systemd[1]: Created slice kubepods-burstable-pod853f5fc5f0ac2a688cacc3d70fe16159.slice - libcontainer container kubepods-burstable-pod853f5fc5f0ac2a688cacc3d70fe16159.slice. Sep 5 23:51:23.017719 systemd[1]: Created slice kubepods-burstable-pod4142eff100915284e007a74673f59820.slice - libcontainer container kubepods-burstable-pod4142eff100915284e007a74673f59820.slice. 
Sep 5 23:51:23.030705 systemd[1]: Created slice kubepods-burstable-pod90da2886ad4f24eaa88e50297da6a28a.slice - libcontainer container kubepods-burstable-pod90da2886ad4f24eaa88e50297da6a28a.slice. Sep 5 23:51:23.043558 kubelet[2154]: I0905 23:51:23.042960 2154 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-5-n-c970465010" Sep 5 23:51:23.043558 kubelet[2154]: E0905 23:51:23.043503 2154 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://91.99.146.49:6443/api/v1/nodes\": dial tcp 91.99.146.49:6443: connect: connection refused" node="ci-4081-3-5-n-c970465010" Sep 5 23:51:23.064270 kubelet[2154]: E0905 23:51:23.064215 2154 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://91.99.146.49:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-5-n-c970465010?timeout=10s\": dial tcp 91.99.146.49:6443: connect: connection refused" interval="400ms" Sep 5 23:51:23.158615 kubelet[2154]: I0905 23:51:23.158533 2154 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/853f5fc5f0ac2a688cacc3d70fe16159-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-5-n-c970465010\" (UID: \"853f5fc5f0ac2a688cacc3d70fe16159\") " pod="kube-system/kube-apiserver-ci-4081-3-5-n-c970465010" Sep 5 23:51:23.158615 kubelet[2154]: I0905 23:51:23.158612 2154 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4142eff100915284e007a74673f59820-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-5-n-c970465010\" (UID: \"4142eff100915284e007a74673f59820\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-c970465010" Sep 5 23:51:23.158869 kubelet[2154]: I0905 23:51:23.158660 2154 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4142eff100915284e007a74673f59820-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-5-n-c970465010\" (UID: \"4142eff100915284e007a74673f59820\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-c970465010" Sep 5 23:51:23.158869 kubelet[2154]: I0905 23:51:23.158696 2154 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4142eff100915284e007a74673f59820-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-5-n-c970465010\" (UID: \"4142eff100915284e007a74673f59820\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-c970465010" Sep 5 23:51:23.158869 kubelet[2154]: I0905 23:51:23.158733 2154 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/90da2886ad4f24eaa88e50297da6a28a-kubeconfig\") pod \"kube-scheduler-ci-4081-3-5-n-c970465010\" (UID: \"90da2886ad4f24eaa88e50297da6a28a\") " pod="kube-system/kube-scheduler-ci-4081-3-5-n-c970465010" Sep 5 23:51:23.158869 kubelet[2154]: I0905 23:51:23.158788 2154 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/853f5fc5f0ac2a688cacc3d70fe16159-ca-certs\") pod \"kube-apiserver-ci-4081-3-5-n-c970465010\" (UID: \"853f5fc5f0ac2a688cacc3d70fe16159\") " pod="kube-system/kube-apiserver-ci-4081-3-5-n-c970465010" 
Sep 5 23:51:23.159066 kubelet[2154]: I0905 23:51:23.158876 2154 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/853f5fc5f0ac2a688cacc3d70fe16159-k8s-certs\") pod \"kube-apiserver-ci-4081-3-5-n-c970465010\" (UID: \"853f5fc5f0ac2a688cacc3d70fe16159\") " pod="kube-system/kube-apiserver-ci-4081-3-5-n-c970465010" Sep 5 23:51:23.159066 kubelet[2154]: I0905 23:51:23.158926 2154 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4142eff100915284e007a74673f59820-ca-certs\") pod \"kube-controller-manager-ci-4081-3-5-n-c970465010\" (UID: \"4142eff100915284e007a74673f59820\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-c970465010" Sep 5 23:51:23.159066 kubelet[2154]: I0905 23:51:23.158961 2154 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4142eff100915284e007a74673f59820-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-5-n-c970465010\" (UID: \"4142eff100915284e007a74673f59820\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-c970465010" Sep 5 23:51:23.246797 kubelet[2154]: I0905 23:51:23.246669 2154 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-5-n-c970465010" Sep 5 23:51:23.247245 kubelet[2154]: E0905 23:51:23.247210 2154 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://91.99.146.49:6443/api/v1/nodes\": dial tcp 91.99.146.49:6443: connect: connection refused" node="ci-4081-3-5-n-c970465010" Sep 5 23:51:23.316547 containerd[1481]: time="2025-09-05T23:51:23.316122315Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-5-n-c970465010,Uid:853f5fc5f0ac2a688cacc3d70fe16159,Namespace:kube-system,Attempt:0,}" Sep 5 23:51:23.328505 containerd[1481]: time="2025-09-05T23:51:23.328452284Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-5-n-c970465010,Uid:4142eff100915284e007a74673f59820,Namespace:kube-system,Attempt:0,}" Sep 5 23:51:23.338103 containerd[1481]: time="2025-09-05T23:51:23.338034950Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-5-n-c970465010,Uid:90da2886ad4f24eaa88e50297da6a28a,Namespace:kube-system,Attempt:0,}" Sep 5 23:51:23.466058 kubelet[2154]: E0905 23:51:23.465973 2154 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://91.99.146.49:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-5-n-c970465010?timeout=10s\": dial tcp 91.99.146.49:6443: connect: connection refused" interval="800ms" Sep 5 23:51:23.649955 kubelet[2154]: I0905 23:51:23.649763 2154 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-5-n-c970465010" Sep 5 23:51:23.650372 kubelet[2154]: E0905 23:51:23.650312 2154 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://91.99.146.49:6443/api/v1/nodes\": dial tcp 91.99.146.49:6443: connect: connection refused" node="ci-4081-3-5-n-c970465010" Sep 5 23:51:23.853620 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2925949081.mount: Deactivated successfully. 
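The connection-refused and lease errors here are expected: the kubelet is still creating the control-plane static pods itself, so nothing answers on 6443 yet. The three RunPodSandbox calls correspond to manifests under the static pod path logged earlier (/etc/kubernetes/manifests). A small sketch for watching this from the node, with crictl pointed at containerd as above:

    ls /etc/kubernetes/manifests/        # kube-apiserver, kube-controller-manager, kube-scheduler (etcd, if local)
    crictl pods --name kube-apiserver    # sandboxes created by the RunPodSandbox calls above
    ss -ltn 'sport = :6443'              # empty until the apiserver container is actually serving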
Sep 5 23:51:23.861295 containerd[1481]: time="2025-09-05T23:51:23.861224839Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 5 23:51:23.862312 containerd[1481]: time="2025-09-05T23:51:23.862228673Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269193" Sep 5 23:51:23.863573 containerd[1481]: time="2025-09-05T23:51:23.863535626Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 5 23:51:23.865514 containerd[1481]: time="2025-09-05T23:51:23.865272936Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 5 23:51:23.865654 containerd[1481]: time="2025-09-05T23:51:23.865587774Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 5 23:51:23.867008 containerd[1481]: time="2025-09-05T23:51:23.866949446Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 5 23:51:23.868225 containerd[1481]: time="2025-09-05T23:51:23.868128159Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 5 23:51:23.869956 containerd[1481]: time="2025-09-05T23:51:23.869925149Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 5 23:51:23.872625 containerd[1481]: time="2025-09-05T23:51:23.872598694Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 556.36282ms" Sep 5 23:51:23.873003 containerd[1481]: time="2025-09-05T23:51:23.872979292Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 544.422168ms" Sep 5 23:51:23.874404 containerd[1481]: time="2025-09-05T23:51:23.874378004Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 536.233655ms" Sep 5 23:51:24.018616 containerd[1481]: time="2025-09-05T23:51:24.016381397Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 23:51:24.018616 containerd[1481]: time="2025-09-05T23:51:24.016445797Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 23:51:24.018616 containerd[1481]: time="2025-09-05T23:51:24.016461917Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:51:24.018616 containerd[1481]: time="2025-09-05T23:51:24.016542676Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:51:24.018616 containerd[1481]: time="2025-09-05T23:51:24.018358466Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 23:51:24.018616 containerd[1481]: time="2025-09-05T23:51:24.018417626Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 23:51:24.018616 containerd[1481]: time="2025-09-05T23:51:24.018431986Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:51:24.018616 containerd[1481]: time="2025-09-05T23:51:24.018506745Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:51:24.025084 containerd[1481]: time="2025-09-05T23:51:24.024975671Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 23:51:24.025835 containerd[1481]: time="2025-09-05T23:51:24.025655507Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 23:51:24.025957 containerd[1481]: time="2025-09-05T23:51:24.025814666Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:51:24.026252 containerd[1481]: time="2025-09-05T23:51:24.026164344Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:51:24.051443 systemd[1]: Started cri-containerd-bb5b162f858f450792ae88e016f5234ae98471fcab2e11294c0145741b242a6e.scope - libcontainer container bb5b162f858f450792ae88e016f5234ae98471fcab2e11294c0145741b242a6e. Sep 5 23:51:24.058159 systemd[1]: Started cri-containerd-47fe6e5fa3a4d6e7fd127d53ed02eea47ab36a6b44af1f054471e2d731b121f7.scope - libcontainer container 47fe6e5fa3a4d6e7fd127d53ed02eea47ab36a6b44af1f054471e2d731b121f7. Sep 5 23:51:24.060417 systemd[1]: Started cri-containerd-edc482bc996ba98821af19d571ce546f3d7bca448803e9cbabf555483fdde929.scope - libcontainer container edc482bc996ba98821af19d571ce546f3d7bca448803e9cbabf555483fdde929. 
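Each sandbox is started through the runc v2 shim and wrapped in a cri-containerd-<id>.scope unit, which is the behaviour of a CRI runtime configured for runc with systemd cgroups. A fragment of /etc/containerd/config.toml that would produce this (containerd 1.7, config version 2; shown as a sketch, not read from this host):

    # /etc/containerd/config.toml (fragment, sketch)
    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
      runtime_type = "io.containerd.runc.v2"
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
        SystemdCgroup = true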
Sep 5 23:51:24.127948 containerd[1481]: time="2025-09-05T23:51:24.127753316Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-5-n-c970465010,Uid:853f5fc5f0ac2a688cacc3d70fe16159,Namespace:kube-system,Attempt:0,} returns sandbox id \"47fe6e5fa3a4d6e7fd127d53ed02eea47ab36a6b44af1f054471e2d731b121f7\"" Sep 5 23:51:24.128386 containerd[1481]: time="2025-09-05T23:51:24.128258153Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-5-n-c970465010,Uid:90da2886ad4f24eaa88e50297da6a28a,Namespace:kube-system,Attempt:0,} returns sandbox id \"bb5b162f858f450792ae88e016f5234ae98471fcab2e11294c0145741b242a6e\"" Sep 5 23:51:24.136668 containerd[1481]: time="2025-09-05T23:51:24.136626268Z" level=info msg="CreateContainer within sandbox \"47fe6e5fa3a4d6e7fd127d53ed02eea47ab36a6b44af1f054471e2d731b121f7\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 5 23:51:24.137551 containerd[1481]: time="2025-09-05T23:51:24.137399384Z" level=info msg="CreateContainer within sandbox \"bb5b162f858f450792ae88e016f5234ae98471fcab2e11294c0145741b242a6e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 5 23:51:24.138749 containerd[1481]: time="2025-09-05T23:51:24.138470578Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-5-n-c970465010,Uid:4142eff100915284e007a74673f59820,Namespace:kube-system,Attempt:0,} returns sandbox id \"edc482bc996ba98821af19d571ce546f3d7bca448803e9cbabf555483fdde929\"" Sep 5 23:51:24.142916 containerd[1481]: time="2025-09-05T23:51:24.142862434Z" level=info msg="CreateContainer within sandbox \"edc482bc996ba98821af19d571ce546f3d7bca448803e9cbabf555483fdde929\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 5 23:51:24.161165 containerd[1481]: time="2025-09-05T23:51:24.161114816Z" level=info msg="CreateContainer within sandbox \"bb5b162f858f450792ae88e016f5234ae98471fcab2e11294c0145741b242a6e\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"681b2c9739729385921b4b3b20e6f9919a7eaa1e321124d9734d2395b9a27b82\"" Sep 5 23:51:24.162333 containerd[1481]: time="2025-09-05T23:51:24.162301809Z" level=info msg="StartContainer for \"681b2c9739729385921b4b3b20e6f9919a7eaa1e321124d9734d2395b9a27b82\"" Sep 5 23:51:24.165581 containerd[1481]: time="2025-09-05T23:51:24.165526952Z" level=info msg="CreateContainer within sandbox \"47fe6e5fa3a4d6e7fd127d53ed02eea47ab36a6b44af1f054471e2d731b121f7\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"0ead81039b3ef59ebb8f3d4aa13b31dd7cbac4c31fb056ff25476f8fc50a2852\"" Sep 5 23:51:24.166283 containerd[1481]: time="2025-09-05T23:51:24.166245908Z" level=info msg="StartContainer for \"0ead81039b3ef59ebb8f3d4aa13b31dd7cbac4c31fb056ff25476f8fc50a2852\"" Sep 5 23:51:24.169886 containerd[1481]: time="2025-09-05T23:51:24.169714129Z" level=info msg="CreateContainer within sandbox \"edc482bc996ba98821af19d571ce546f3d7bca448803e9cbabf555483fdde929\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"6aa6fdc0c1ae9133fae8df42f43ab36f0f619b10aea6abc007bbf21ba96121b5\"" Sep 5 23:51:24.170386 containerd[1481]: time="2025-09-05T23:51:24.170303806Z" level=info msg="StartContainer for \"6aa6fdc0c1ae9133fae8df42f43ab36f0f619b10aea6abc007bbf21ba96121b5\"" Sep 5 23:51:24.200626 systemd[1]: Started cri-containerd-681b2c9739729385921b4b3b20e6f9919a7eaa1e321124d9734d2395b9a27b82.scope - libcontainer container 
681b2c9739729385921b4b3b20e6f9919a7eaa1e321124d9734d2395b9a27b82. Sep 5 23:51:24.209417 systemd[1]: Started cri-containerd-0ead81039b3ef59ebb8f3d4aa13b31dd7cbac4c31fb056ff25476f8fc50a2852.scope - libcontainer container 0ead81039b3ef59ebb8f3d4aa13b31dd7cbac4c31fb056ff25476f8fc50a2852. Sep 5 23:51:24.231911 systemd[1]: Started cri-containerd-6aa6fdc0c1ae9133fae8df42f43ab36f0f619b10aea6abc007bbf21ba96121b5.scope - libcontainer container 6aa6fdc0c1ae9133fae8df42f43ab36f0f619b10aea6abc007bbf21ba96121b5. Sep 5 23:51:24.259845 containerd[1481]: time="2025-09-05T23:51:24.259646644Z" level=info msg="StartContainer for \"681b2c9739729385921b4b3b20e6f9919a7eaa1e321124d9734d2395b9a27b82\" returns successfully" Sep 5 23:51:24.267177 kubelet[2154]: E0905 23:51:24.266857 2154 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://91.99.146.49:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-5-n-c970465010?timeout=10s\": dial tcp 91.99.146.49:6443: connect: connection refused" interval="1.6s" Sep 5 23:51:24.291258 containerd[1481]: time="2025-09-05T23:51:24.289729201Z" level=info msg="StartContainer for \"0ead81039b3ef59ebb8f3d4aa13b31dd7cbac4c31fb056ff25476f8fc50a2852\" returns successfully" Sep 5 23:51:24.304637 containerd[1481]: time="2025-09-05T23:51:24.304569801Z" level=info msg="StartContainer for \"6aa6fdc0c1ae9133fae8df42f43ab36f0f619b10aea6abc007bbf21ba96121b5\" returns successfully" Sep 5 23:51:24.410218 kubelet[2154]: W0905 23:51:24.409991 2154 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://91.99.146.49:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-5-n-c970465010&limit=500&resourceVersion=0": dial tcp 91.99.146.49:6443: connect: connection refused Sep 5 23:51:24.410218 kubelet[2154]: E0905 23:51:24.410067 2154 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://91.99.146.49:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-5-n-c970465010&limit=500&resourceVersion=0\": dial tcp 91.99.146.49:6443: connect: connection refused" logger="UnhandledError" Sep 5 23:51:24.420967 kubelet[2154]: W0905 23:51:24.420840 2154 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://91.99.146.49:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 91.99.146.49:6443: connect: connection refused Sep 5 23:51:24.420967 kubelet[2154]: E0905 23:51:24.420926 2154 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://91.99.146.49:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 91.99.146.49:6443: connect: connection refused" logger="UnhandledError" Sep 5 23:51:24.452873 kubelet[2154]: I0905 23:51:24.452840 2154 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-5-n-c970465010" Sep 5 23:51:27.174113 kubelet[2154]: E0905 23:51:27.174060 2154 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-3-5-n-c970465010\" not found" node="ci-4081-3-5-n-c970465010" Sep 5 23:51:27.211825 kubelet[2154]: I0905 23:51:27.211606 2154 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081-3-5-n-c970465010" Sep 5 23:51:27.211825 kubelet[2154]: E0905 23:51:27.211648 2154 
kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-4081-3-5-n-c970465010\": node \"ci-4081-3-5-n-c970465010\" not found" Sep 5 23:51:27.256550 kubelet[2154]: E0905 23:51:27.256507 2154 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-5-n-c970465010\" not found" Sep 5 23:51:27.834024 kubelet[2154]: I0905 23:51:27.833950 2154 apiserver.go:52] "Watching apiserver" Sep 5 23:51:27.856910 kubelet[2154]: I0905 23:51:27.856842 2154 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 5 23:51:29.160338 systemd[1]: Reloading requested from client PID 2430 ('systemctl') (unit session-7.scope)... Sep 5 23:51:29.160355 systemd[1]: Reloading... Sep 5 23:51:29.265216 zram_generator::config[2466]: No configuration found. Sep 5 23:51:29.381375 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 5 23:51:29.471862 systemd[1]: Reloading finished in 311 ms. Sep 5 23:51:29.507401 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 23:51:29.529979 systemd[1]: kubelet.service: Deactivated successfully. Sep 5 23:51:29.531282 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 23:51:29.531389 systemd[1]: kubelet.service: Consumed 1.573s CPU time, 128.6M memory peak, 0B memory swap peak. Sep 5 23:51:29.537749 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 23:51:29.651179 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 23:51:29.665617 (kubelet)[2515]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 5 23:51:29.720627 kubelet[2515]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 5 23:51:29.720627 kubelet[2515]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 5 23:51:29.720627 kubelet[2515]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 5 23:51:29.721150 kubelet[2515]: I0905 23:51:29.720713 2515 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 5 23:51:29.730707 kubelet[2515]: I0905 23:51:29.730599 2515 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 5 23:51:29.730707 kubelet[2515]: I0905 23:51:29.730630 2515 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 5 23:51:29.731367 kubelet[2515]: I0905 23:51:29.731345 2515 server.go:934] "Client rotation is on, will bootstrap in background" Sep 5 23:51:29.733034 kubelet[2515]: I0905 23:51:29.733009 2515 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
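"Client rotation is on" together with the load of kubelet-client-current.pem means the kubelet already bootstrapped a client certificate and will keep rotating it; the "current" file is normally a symlink to the newest issued pair. A quick way to inspect it with standard openssl, nothing host-specific assumed:

    ls -l /var/lib/kubelet/pki/                         # kubelet-client-current.pem -> kubelet-client-<timestamp>.pem
    openssl x509 -noout -subject -enddate \
        -in /var/lib/kubelet/pki/kubelet-client-current.pem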
Sep 5 23:51:29.735550 kubelet[2515]: I0905 23:51:29.735205 2515 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 5 23:51:29.738656 kubelet[2515]: E0905 23:51:29.738618 2515 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 5 23:51:29.738656 kubelet[2515]: I0905 23:51:29.738649 2515 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 5 23:51:29.741261 kubelet[2515]: I0905 23:51:29.740832 2515 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 5 23:51:29.741261 kubelet[2515]: I0905 23:51:29.740974 2515 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 5 23:51:29.741261 kubelet[2515]: I0905 23:51:29.741070 2515 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 5 23:51:29.741405 kubelet[2515]: I0905 23:51:29.741100 2515 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-5-n-c970465010","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 5 23:51:29.741405 kubelet[2515]: I0905 23:51:29.741334 2515 topology_manager.go:138] "Creating topology manager with none policy" Sep 5 23:51:29.741405 kubelet[2515]: I0905 23:51:29.741344 2515 container_manager_linux.go:300] "Creating device plugin manager" Sep 5 23:51:29.741405 kubelet[2515]: I0905 23:51:29.741381 2515 state_mem.go:36] "Initialized new in-memory state store" Sep 5 23:51:29.741551 kubelet[2515]: I0905 23:51:29.741509 2515 kubelet.go:408] "Attempting to sync node with API server" Sep 5 23:51:29.741551 kubelet[2515]: I0905 23:51:29.741523 2515 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 5 23:51:29.741551 kubelet[2515]: I0905 23:51:29.741544 
2515 kubelet.go:314] "Adding apiserver pod source" Sep 5 23:51:29.741625 kubelet[2515]: I0905 23:51:29.741559 2515 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 5 23:51:29.743947 kubelet[2515]: I0905 23:51:29.742918 2515 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 5 23:51:29.746040 kubelet[2515]: I0905 23:51:29.746012 2515 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 5 23:51:29.753209 kubelet[2515]: I0905 23:51:29.752911 2515 server.go:1274] "Started kubelet" Sep 5 23:51:29.755703 kubelet[2515]: I0905 23:51:29.755676 2515 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 5 23:51:29.773442 kubelet[2515]: I0905 23:51:29.772989 2515 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 5 23:51:29.774358 kubelet[2515]: I0905 23:51:29.774339 2515 server.go:449] "Adding debug handlers to kubelet server" Sep 5 23:51:29.775837 kubelet[2515]: I0905 23:51:29.775480 2515 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 5 23:51:29.775837 kubelet[2515]: I0905 23:51:29.775675 2515 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 5 23:51:29.776103 kubelet[2515]: I0905 23:51:29.776086 2515 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 5 23:51:29.782144 kubelet[2515]: I0905 23:51:29.782122 2515 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 5 23:51:29.782618 kubelet[2515]: E0905 23:51:29.782582 2515 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-5-n-c970465010\" not found" Sep 5 23:51:29.785228 kubelet[2515]: I0905 23:51:29.784928 2515 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 5 23:51:29.785228 kubelet[2515]: I0905 23:51:29.785083 2515 reconciler.go:26] "Reconciler: start to sync state" Sep 5 23:51:29.788421 kubelet[2515]: I0905 23:51:29.788284 2515 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 5 23:51:29.790238 kubelet[2515]: I0905 23:51:29.789897 2515 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 5 23:51:29.790238 kubelet[2515]: I0905 23:51:29.789921 2515 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 5 23:51:29.790238 kubelet[2515]: I0905 23:51:29.789939 2515 kubelet.go:2321] "Starting kubelet main sync loop" Sep 5 23:51:29.790238 kubelet[2515]: E0905 23:51:29.789979 2515 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 5 23:51:29.799286 kubelet[2515]: I0905 23:51:29.799252 2515 factory.go:221] Registration of the systemd container factory successfully Sep 5 23:51:29.799590 kubelet[2515]: I0905 23:51:29.799370 2515 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 5 23:51:29.803639 kubelet[2515]: E0905 23:51:29.803440 2515 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 5 23:51:29.803781 kubelet[2515]: I0905 23:51:29.803756 2515 factory.go:221] Registration of the containerd container factory successfully Sep 5 23:51:29.845226 kubelet[2515]: I0905 23:51:29.845127 2515 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 5 23:51:29.845226 kubelet[2515]: I0905 23:51:29.845145 2515 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 5 23:51:29.845226 kubelet[2515]: I0905 23:51:29.845165 2515 state_mem.go:36] "Initialized new in-memory state store" Sep 5 23:51:29.845636 kubelet[2515]: I0905 23:51:29.845536 2515 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 5 23:51:29.845636 kubelet[2515]: I0905 23:51:29.845549 2515 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 5 23:51:29.845636 kubelet[2515]: I0905 23:51:29.845568 2515 policy_none.go:49] "None policy: Start" Sep 5 23:51:29.847004 kubelet[2515]: I0905 23:51:29.846242 2515 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 5 23:51:29.847004 kubelet[2515]: I0905 23:51:29.846263 2515 state_mem.go:35] "Initializing new in-memory state store" Sep 5 23:51:29.847004 kubelet[2515]: I0905 23:51:29.846417 2515 state_mem.go:75] "Updated machine memory state" Sep 5 23:51:29.850867 kubelet[2515]: I0905 23:51:29.850829 2515 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 5 23:51:29.851520 kubelet[2515]: I0905 23:51:29.851501 2515 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 5 23:51:29.852572 kubelet[2515]: I0905 23:51:29.852019 2515 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 5 23:51:29.853169 kubelet[2515]: I0905 23:51:29.853128 2515 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 5 23:51:29.957316 kubelet[2515]: I0905 23:51:29.957264 2515 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-5-n-c970465010" Sep 5 23:51:29.968116 kubelet[2515]: I0905 23:51:29.967340 2515 kubelet_node_status.go:111] "Node was previously registered" node="ci-4081-3-5-n-c970465010" Sep 5 23:51:29.968116 kubelet[2515]: I0905 23:51:29.967455 2515 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081-3-5-n-c970465010" Sep 5 23:51:29.986773 kubelet[2515]: I0905 23:51:29.986303 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/853f5fc5f0ac2a688cacc3d70fe16159-k8s-certs\") pod \"kube-apiserver-ci-4081-3-5-n-c970465010\" (UID: \"853f5fc5f0ac2a688cacc3d70fe16159\") " pod="kube-system/kube-apiserver-ci-4081-3-5-n-c970465010" Sep 5 23:51:29.986773 kubelet[2515]: I0905 23:51:29.986359 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/853f5fc5f0ac2a688cacc3d70fe16159-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-5-n-c970465010\" (UID: \"853f5fc5f0ac2a688cacc3d70fe16159\") " pod="kube-system/kube-apiserver-ci-4081-3-5-n-c970465010" Sep 5 23:51:29.986773 kubelet[2515]: I0905 23:51:29.986391 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4142eff100915284e007a74673f59820-ca-certs\") pod \"kube-controller-manager-ci-4081-3-5-n-c970465010\" (UID: 
\"4142eff100915284e007a74673f59820\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-c970465010" Sep 5 23:51:29.986773 kubelet[2515]: I0905 23:51:29.986422 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4142eff100915284e007a74673f59820-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-5-n-c970465010\" (UID: \"4142eff100915284e007a74673f59820\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-c970465010" Sep 5 23:51:29.986773 kubelet[2515]: I0905 23:51:29.986448 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/853f5fc5f0ac2a688cacc3d70fe16159-ca-certs\") pod \"kube-apiserver-ci-4081-3-5-n-c970465010\" (UID: \"853f5fc5f0ac2a688cacc3d70fe16159\") " pod="kube-system/kube-apiserver-ci-4081-3-5-n-c970465010" Sep 5 23:51:29.987094 kubelet[2515]: I0905 23:51:29.986477 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4142eff100915284e007a74673f59820-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-5-n-c970465010\" (UID: \"4142eff100915284e007a74673f59820\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-c970465010" Sep 5 23:51:29.987094 kubelet[2515]: I0905 23:51:29.986504 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4142eff100915284e007a74673f59820-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-5-n-c970465010\" (UID: \"4142eff100915284e007a74673f59820\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-c970465010" Sep 5 23:51:29.987094 kubelet[2515]: I0905 23:51:29.986530 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4142eff100915284e007a74673f59820-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-5-n-c970465010\" (UID: \"4142eff100915284e007a74673f59820\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-c970465010" Sep 5 23:51:29.987094 kubelet[2515]: I0905 23:51:29.986556 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/90da2886ad4f24eaa88e50297da6a28a-kubeconfig\") pod \"kube-scheduler-ci-4081-3-5-n-c970465010\" (UID: \"90da2886ad4f24eaa88e50297da6a28a\") " pod="kube-system/kube-scheduler-ci-4081-3-5-n-c970465010" Sep 5 23:51:30.152035 sudo[2546]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 5 23:51:30.152353 sudo[2546]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Sep 5 23:51:30.605879 sudo[2546]: pam_unix(sudo:session): session closed for user root Sep 5 23:51:30.742318 kubelet[2515]: I0905 23:51:30.742269 2515 apiserver.go:52] "Watching apiserver" Sep 5 23:51:30.785255 kubelet[2515]: I0905 23:51:30.785217 2515 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 5 23:51:30.786991 kubelet[2515]: I0905 23:51:30.786753 2515 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-5-n-c970465010" podStartSLOduration=1.786614038 podStartE2EDuration="1.786614038s" 
podCreationTimestamp="2025-09-05 23:51:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 23:51:30.7757508 +0000 UTC m=+1.106102169" watchObservedRunningTime="2025-09-05 23:51:30.786614038 +0000 UTC m=+1.116965407" Sep 5 23:51:30.787195 kubelet[2515]: I0905 23:51:30.787038 2515 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-5-n-c970465010" podStartSLOduration=1.7870278370000001 podStartE2EDuration="1.787027837s" podCreationTimestamp="2025-09-05 23:51:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 23:51:30.785647362 +0000 UTC m=+1.115998731" watchObservedRunningTime="2025-09-05 23:51:30.787027837 +0000 UTC m=+1.117379206" Sep 5 23:51:30.840448 kubelet[2515]: E0905 23:51:30.840397 2515 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4081-3-5-n-c970465010\" already exists" pod="kube-system/kube-controller-manager-ci-4081-3-5-n-c970465010" Sep 5 23:51:30.841534 kubelet[2515]: E0905 23:51:30.841509 2515 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081-3-5-n-c970465010\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-5-n-c970465010" Sep 5 23:51:30.851513 kubelet[2515]: I0905 23:51:30.850088 2515 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-5-n-c970465010" podStartSLOduration=1.850070073 podStartE2EDuration="1.850070073s" podCreationTimestamp="2025-09-05 23:51:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 23:51:30.798031154 +0000 UTC m=+1.128382523" watchObservedRunningTime="2025-09-05 23:51:30.850070073 +0000 UTC m=+1.180421442" Sep 5 23:51:32.351620 sudo[1685]: pam_unix(sudo:session): session closed for user root Sep 5 23:51:32.513591 sshd[1667]: pam_unix(sshd:session): session closed for user core Sep 5 23:51:32.518589 systemd[1]: sshd@6-91.99.146.49:22-139.178.68.195:52518.service: Deactivated successfully. Sep 5 23:51:32.521151 systemd[1]: session-7.scope: Deactivated successfully. Sep 5 23:51:32.522484 systemd[1]: session-7.scope: Consumed 7.485s CPU time, 153.5M memory peak, 0B memory swap peak. Sep 5 23:51:32.523338 systemd-logind[1459]: Session 7 logged out. Waiting for processes to exit. Sep 5 23:51:32.524564 systemd-logind[1459]: Removed session 7. Sep 5 23:51:33.739258 kubelet[2515]: I0905 23:51:33.738875 2515 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 5 23:51:33.739611 containerd[1481]: time="2025-09-05T23:51:33.739167708Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 5 23:51:33.740241 kubelet[2515]: I0905 23:51:33.740197 2515 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 5 23:51:34.508927 systemd[1]: Created slice kubepods-besteffort-pode40befd2_b77d_460e_85bf_733fc8d89a02.slice - libcontainer container kubepods-besteffort-pode40befd2_b77d_460e_85bf_733fc8d89a02.slice. 
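The pod_startup_latency_tracker entries above report podStartSLOduration as the gap between podCreationTimestamp and watchObservedRunningTime; firstStartedPulling/lastFinishedPulling are left at the Go zero time because these static control-plane pods needed no image pull. A minimal sketch reproducing the reported duration from the kube-controller-manager entry (values copied from the log; nanoseconds truncated to microseconds, parsing of the raw line omitted):

    from datetime import datetime, timezone

    # Timestamps copied from the log entry above.
    pod_creation   = datetime(2025, 9, 5, 23, 51, 29, 0, tzinfo=timezone.utc)
    observed_start = datetime(2025, 9, 5, 23, 51, 30, 786614, tzinfo=timezone.utc)

    slo = (observed_start - pod_creation).total_seconds()
    print(f"podStartSLOduration ~= {slo:.6f}s")  # ~1.786614s, matching the log entry above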
Sep 5 23:51:34.513412 kubelet[2515]: I0905 23:51:34.512481 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e40befd2-b77d-460e-85bf-733fc8d89a02-kube-proxy\") pod \"kube-proxy-rm6gj\" (UID: \"e40befd2-b77d-460e-85bf-733fc8d89a02\") " pod="kube-system/kube-proxy-rm6gj" Sep 5 23:51:34.513412 kubelet[2515]: I0905 23:51:34.512513 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e40befd2-b77d-460e-85bf-733fc8d89a02-xtables-lock\") pod \"kube-proxy-rm6gj\" (UID: \"e40befd2-b77d-460e-85bf-733fc8d89a02\") " pod="kube-system/kube-proxy-rm6gj" Sep 5 23:51:34.513412 kubelet[2515]: I0905 23:51:34.512545 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e40befd2-b77d-460e-85bf-733fc8d89a02-lib-modules\") pod \"kube-proxy-rm6gj\" (UID: \"e40befd2-b77d-460e-85bf-733fc8d89a02\") " pod="kube-system/kube-proxy-rm6gj" Sep 5 23:51:34.513412 kubelet[2515]: I0905 23:51:34.512562 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d77rx\" (UniqueName: \"kubernetes.io/projected/e40befd2-b77d-460e-85bf-733fc8d89a02-kube-api-access-d77rx\") pod \"kube-proxy-rm6gj\" (UID: \"e40befd2-b77d-460e-85bf-733fc8d89a02\") " pod="kube-system/kube-proxy-rm6gj" Sep 5 23:51:34.540308 systemd[1]: Created slice kubepods-burstable-pod6f8b4002_a581_472e_bded_1c11930cf33b.slice - libcontainer container kubepods-burstable-pod6f8b4002_a581_472e_bded_1c11930cf33b.slice. Sep 5 23:51:34.617255 kubelet[2515]: I0905 23:51:34.615364 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6f8b4002-a581-472e-bded-1c11930cf33b-hostproc\") pod \"cilium-kzrc7\" (UID: \"6f8b4002-a581-472e-bded-1c11930cf33b\") " pod="kube-system/cilium-kzrc7" Sep 5 23:51:34.617255 kubelet[2515]: I0905 23:51:34.615408 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6f8b4002-a581-472e-bded-1c11930cf33b-lib-modules\") pod \"cilium-kzrc7\" (UID: \"6f8b4002-a581-472e-bded-1c11930cf33b\") " pod="kube-system/cilium-kzrc7" Sep 5 23:51:34.617255 kubelet[2515]: I0905 23:51:34.615440 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6f8b4002-a581-472e-bded-1c11930cf33b-cilium-run\") pod \"cilium-kzrc7\" (UID: \"6f8b4002-a581-472e-bded-1c11930cf33b\") " pod="kube-system/cilium-kzrc7" Sep 5 23:51:34.617255 kubelet[2515]: I0905 23:51:34.615462 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6f8b4002-a581-472e-bded-1c11930cf33b-cilium-cgroup\") pod \"cilium-kzrc7\" (UID: \"6f8b4002-a581-472e-bded-1c11930cf33b\") " pod="kube-system/cilium-kzrc7" Sep 5 23:51:34.617255 kubelet[2515]: I0905 23:51:34.615478 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6f8b4002-a581-472e-bded-1c11930cf33b-bpf-maps\") pod \"cilium-kzrc7\" (UID: \"6f8b4002-a581-472e-bded-1c11930cf33b\") " 
pod="kube-system/cilium-kzrc7" Sep 5 23:51:34.617255 kubelet[2515]: I0905 23:51:34.615500 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6f8b4002-a581-472e-bded-1c11930cf33b-clustermesh-secrets\") pod \"cilium-kzrc7\" (UID: \"6f8b4002-a581-472e-bded-1c11930cf33b\") " pod="kube-system/cilium-kzrc7" Sep 5 23:51:34.617511 kubelet[2515]: I0905 23:51:34.615516 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6f8b4002-a581-472e-bded-1c11930cf33b-cilium-config-path\") pod \"cilium-kzrc7\" (UID: \"6f8b4002-a581-472e-bded-1c11930cf33b\") " pod="kube-system/cilium-kzrc7" Sep 5 23:51:34.617511 kubelet[2515]: I0905 23:51:34.615531 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6f8b4002-a581-472e-bded-1c11930cf33b-etc-cni-netd\") pod \"cilium-kzrc7\" (UID: \"6f8b4002-a581-472e-bded-1c11930cf33b\") " pod="kube-system/cilium-kzrc7" Sep 5 23:51:34.617511 kubelet[2515]: I0905 23:51:34.615545 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6f8b4002-a581-472e-bded-1c11930cf33b-host-proc-sys-net\") pod \"cilium-kzrc7\" (UID: \"6f8b4002-a581-472e-bded-1c11930cf33b\") " pod="kube-system/cilium-kzrc7" Sep 5 23:51:34.617511 kubelet[2515]: I0905 23:51:34.615559 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jdg2\" (UniqueName: \"kubernetes.io/projected/6f8b4002-a581-472e-bded-1c11930cf33b-kube-api-access-8jdg2\") pod \"cilium-kzrc7\" (UID: \"6f8b4002-a581-472e-bded-1c11930cf33b\") " pod="kube-system/cilium-kzrc7" Sep 5 23:51:34.617511 kubelet[2515]: I0905 23:51:34.615585 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6f8b4002-a581-472e-bded-1c11930cf33b-cni-path\") pod \"cilium-kzrc7\" (UID: \"6f8b4002-a581-472e-bded-1c11930cf33b\") " pod="kube-system/cilium-kzrc7" Sep 5 23:51:34.617511 kubelet[2515]: I0905 23:51:34.615609 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6f8b4002-a581-472e-bded-1c11930cf33b-xtables-lock\") pod \"cilium-kzrc7\" (UID: \"6f8b4002-a581-472e-bded-1c11930cf33b\") " pod="kube-system/cilium-kzrc7" Sep 5 23:51:34.617678 kubelet[2515]: I0905 23:51:34.615629 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6f8b4002-a581-472e-bded-1c11930cf33b-hubble-tls\") pod \"cilium-kzrc7\" (UID: \"6f8b4002-a581-472e-bded-1c11930cf33b\") " pod="kube-system/cilium-kzrc7" Sep 5 23:51:34.617678 kubelet[2515]: I0905 23:51:34.615646 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6f8b4002-a581-472e-bded-1c11930cf33b-host-proc-sys-kernel\") pod \"cilium-kzrc7\" (UID: \"6f8b4002-a581-472e-bded-1c11930cf33b\") " pod="kube-system/cilium-kzrc7" Sep 5 23:51:34.823832 containerd[1481]: time="2025-09-05T23:51:34.822599830Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-proxy-rm6gj,Uid:e40befd2-b77d-460e-85bf-733fc8d89a02,Namespace:kube-system,Attempt:0,}" Sep 5 23:51:34.823758 systemd[1]: Created slice kubepods-besteffort-podba20845f_1330_4e52_86e4_e4a7212b3432.slice - libcontainer container kubepods-besteffort-podba20845f_1330_4e52_86e4_e4a7212b3432.slice. Sep 5 23:51:34.847061 containerd[1481]: time="2025-09-05T23:51:34.846680714Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kzrc7,Uid:6f8b4002-a581-472e-bded-1c11930cf33b,Namespace:kube-system,Attempt:0,}" Sep 5 23:51:34.864979 containerd[1481]: time="2025-09-05T23:51:34.863601021Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 23:51:34.864979 containerd[1481]: time="2025-09-05T23:51:34.863760021Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 23:51:34.864979 containerd[1481]: time="2025-09-05T23:51:34.863893500Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:51:34.864979 containerd[1481]: time="2025-09-05T23:51:34.864216779Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:51:34.873323 containerd[1481]: time="2025-09-05T23:51:34.873122951Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 23:51:34.874056 containerd[1481]: time="2025-09-05T23:51:34.873997069Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 23:51:34.874468 containerd[1481]: time="2025-09-05T23:51:34.874381547Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:51:34.874928 containerd[1481]: time="2025-09-05T23:51:34.874844586Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:51:34.887902 systemd[1]: Started cri-containerd-ba629d36f732e713cace7d78a38395a8d1fe2c6f8252d0598b5fc6de74c0102f.scope - libcontainer container ba629d36f732e713cace7d78a38395a8d1fe2c6f8252d0598b5fc6de74c0102f. Sep 5 23:51:34.893205 systemd[1]: Started cri-containerd-94fa0d62ee559fe77a0206f17b71f60c8ecfb4925d8515ad7a1bb10f9f9e2e3b.scope - libcontainer container 94fa0d62ee559fe77a0206f17b71f60c8ecfb4925d8515ad7a1bb10f9f9e2e3b. 
Sep 5 23:51:34.918299 kubelet[2515]: I0905 23:51:34.918242 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ba20845f-1330-4e52-86e4-e4a7212b3432-cilium-config-path\") pod \"cilium-operator-5d85765b45-5s7sj\" (UID: \"ba20845f-1330-4e52-86e4-e4a7212b3432\") " pod="kube-system/cilium-operator-5d85765b45-5s7sj" Sep 5 23:51:34.918951 kubelet[2515]: I0905 23:51:34.918929 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chnht\" (UniqueName: \"kubernetes.io/projected/ba20845f-1330-4e52-86e4-e4a7212b3432-kube-api-access-chnht\") pod \"cilium-operator-5d85765b45-5s7sj\" (UID: \"ba20845f-1330-4e52-86e4-e4a7212b3432\") " pod="kube-system/cilium-operator-5d85765b45-5s7sj" Sep 5 23:51:34.921563 containerd[1481]: time="2025-09-05T23:51:34.921524480Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rm6gj,Uid:e40befd2-b77d-460e-85bf-733fc8d89a02,Namespace:kube-system,Attempt:0,} returns sandbox id \"ba629d36f732e713cace7d78a38395a8d1fe2c6f8252d0598b5fc6de74c0102f\"" Sep 5 23:51:34.930095 containerd[1481]: time="2025-09-05T23:51:34.929964653Z" level=info msg="CreateContainer within sandbox \"ba629d36f732e713cace7d78a38395a8d1fe2c6f8252d0598b5fc6de74c0102f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 5 23:51:34.938943 containerd[1481]: time="2025-09-05T23:51:34.938546266Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kzrc7,Uid:6f8b4002-a581-472e-bded-1c11930cf33b,Namespace:kube-system,Attempt:0,} returns sandbox id \"94fa0d62ee559fe77a0206f17b71f60c8ecfb4925d8515ad7a1bb10f9f9e2e3b\"" Sep 5 23:51:34.941791 containerd[1481]: time="2025-09-05T23:51:34.941587057Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 5 23:51:34.950273 containerd[1481]: time="2025-09-05T23:51:34.950225830Z" level=info msg="CreateContainer within sandbox \"ba629d36f732e713cace7d78a38395a8d1fe2c6f8252d0598b5fc6de74c0102f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"1751d168a0d1e4fbf0d154fd7a99080f698c18229bad6233e7ec863b54b41b13\"" Sep 5 23:51:34.952214 containerd[1481]: time="2025-09-05T23:51:34.951054107Z" level=info msg="StartContainer for \"1751d168a0d1e4fbf0d154fd7a99080f698c18229bad6233e7ec863b54b41b13\"" Sep 5 23:51:34.980558 systemd[1]: Started cri-containerd-1751d168a0d1e4fbf0d154fd7a99080f698c18229bad6233e7ec863b54b41b13.scope - libcontainer container 1751d168a0d1e4fbf0d154fd7a99080f698c18229bad6233e7ec863b54b41b13. Sep 5 23:51:35.014724 containerd[1481]: time="2025-09-05T23:51:35.014613470Z" level=info msg="StartContainer for \"1751d168a0d1e4fbf0d154fd7a99080f698c18229bad6233e7ec863b54b41b13\" returns successfully" Sep 5 23:51:35.129544 containerd[1481]: time="2025-09-05T23:51:35.129392488Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-5s7sj,Uid:ba20845f-1330-4e52-86e4-e4a7212b3432,Namespace:kube-system,Attempt:0,}" Sep 5 23:51:35.163018 containerd[1481]: time="2025-09-05T23:51:35.162899029Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 23:51:35.163214 containerd[1481]: time="2025-09-05T23:51:35.162973748Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 23:51:35.163962 containerd[1481]: time="2025-09-05T23:51:35.163126388Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:51:35.163962 containerd[1481]: time="2025-09-05T23:51:35.163823066Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:51:35.187297 systemd[1]: Started cri-containerd-bc88078acab9915345648d8a24df1ef2dc9a212b6958b7012f3306ed51b7bef7.scope - libcontainer container bc88078acab9915345648d8a24df1ef2dc9a212b6958b7012f3306ed51b7bef7. Sep 5 23:51:35.233200 containerd[1481]: time="2025-09-05T23:51:35.232721701Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-5s7sj,Uid:ba20845f-1330-4e52-86e4-e4a7212b3432,Namespace:kube-system,Attempt:0,} returns sandbox id \"bc88078acab9915345648d8a24df1ef2dc9a212b6958b7012f3306ed51b7bef7\"" Sep 5 23:51:35.845091 update_engine[1460]: I20250905 23:51:35.845013 1460 update_attempter.cc:509] Updating boot flags... Sep 5 23:51:35.865589 kubelet[2515]: I0905 23:51:35.863553 2515 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-rm6gj" podStartSLOduration=1.863533903 podStartE2EDuration="1.863533903s" podCreationTimestamp="2025-09-05 23:51:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 23:51:35.863415263 +0000 UTC m=+6.193766592" watchObservedRunningTime="2025-09-05 23:51:35.863533903 +0000 UTC m=+6.193885272" Sep 5 23:51:35.900494 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (2880) Sep 5 23:51:35.970222 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (2703) Sep 5 23:51:36.037255 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (2703) Sep 5 23:51:47.321772 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1170618002.mount: Deactivated successfully. 
Sep 5 23:51:48.831752 containerd[1481]: time="2025-09-05T23:51:48.831652244Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:51:48.834237 containerd[1481]: time="2025-09-05T23:51:48.834059640Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Sep 5 23:51:48.835683 containerd[1481]: time="2025-09-05T23:51:48.835610237Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:51:48.838051 containerd[1481]: time="2025-09-05T23:51:48.837771074Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 13.896140577s" Sep 5 23:51:48.838051 containerd[1481]: time="2025-09-05T23:51:48.837821154Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Sep 5 23:51:48.840102 containerd[1481]: time="2025-09-05T23:51:48.839869590Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 5 23:51:48.842330 containerd[1481]: time="2025-09-05T23:51:48.842241786Z" level=info msg="CreateContainer within sandbox \"94fa0d62ee559fe77a0206f17b71f60c8ecfb4925d8515ad7a1bb10f9f9e2e3b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 5 23:51:48.858369 containerd[1481]: time="2025-09-05T23:51:48.856890162Z" level=info msg="CreateContainer within sandbox \"94fa0d62ee559fe77a0206f17b71f60c8ecfb4925d8515ad7a1bb10f9f9e2e3b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"06db07c711712bb2667c4c40e28c0845ea047a8ce25576e3e0d6761422c23f79\"" Sep 5 23:51:48.858369 containerd[1481]: time="2025-09-05T23:51:48.857582961Z" level=info msg="StartContainer for \"06db07c711712bb2667c4c40e28c0845ea047a8ce25576e3e0d6761422c23f79\"" Sep 5 23:51:48.900776 systemd[1]: Started cri-containerd-06db07c711712bb2667c4c40e28c0845ea047a8ce25576e3e0d6761422c23f79.scope - libcontainer container 06db07c711712bb2667c4c40e28c0845ea047a8ce25576e3e0d6761422c23f79. Sep 5 23:51:48.934257 containerd[1481]: time="2025-09-05T23:51:48.934176635Z" level=info msg="StartContainer for \"06db07c711712bb2667c4c40e28c0845ea047a8ce25576e3e0d6761422c23f79\" returns successfully" Sep 5 23:51:48.953604 systemd[1]: cri-containerd-06db07c711712bb2667c4c40e28c0845ea047a8ce25576e3e0d6761422c23f79.scope: Deactivated successfully. 
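The cilium image pull above reports 157646710 bytes read over 13.896140577s. A rough throughput check using only the two numbers from the log (an illustration, not a claim about how containerd itself measures progress):

    bytes_read = 157_646_710      # "bytes read" from the stop-pulling event above
    elapsed_s  = 13.896140577     # pull duration reported by containerd

    mib_per_s = bytes_read / elapsed_s / (1024 * 1024)
    print(f"~{mib_per_s:.1f} MiB/s")  # roughly 10.8 MiB/s for this pull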
Sep 5 23:51:49.064550 containerd[1481]: time="2025-09-05T23:51:49.064218145Z" level=info msg="shim disconnected" id=06db07c711712bb2667c4c40e28c0845ea047a8ce25576e3e0d6761422c23f79 namespace=k8s.io Sep 5 23:51:49.064550 containerd[1481]: time="2025-09-05T23:51:49.064293624Z" level=warning msg="cleaning up after shim disconnected" id=06db07c711712bb2667c4c40e28c0845ea047a8ce25576e3e0d6761422c23f79 namespace=k8s.io Sep 5 23:51:49.064550 containerd[1481]: time="2025-09-05T23:51:49.064312584Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 5 23:51:49.854915 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-06db07c711712bb2667c4c40e28c0845ea047a8ce25576e3e0d6761422c23f79-rootfs.mount: Deactivated successfully. Sep 5 23:51:49.887880 containerd[1481]: time="2025-09-05T23:51:49.887838040Z" level=info msg="CreateContainer within sandbox \"94fa0d62ee559fe77a0206f17b71f60c8ecfb4925d8515ad7a1bb10f9f9e2e3b\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 5 23:51:49.908792 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount350820224.mount: Deactivated successfully. Sep 5 23:51:49.919559 containerd[1481]: time="2025-09-05T23:51:49.919110910Z" level=info msg="CreateContainer within sandbox \"94fa0d62ee559fe77a0206f17b71f60c8ecfb4925d8515ad7a1bb10f9f9e2e3b\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"008841acabbfd6851640d0dce673bf839a01425395966b1660b24ee4aa6cc78c\"" Sep 5 23:51:49.919839 containerd[1481]: time="2025-09-05T23:51:49.919805029Z" level=info msg="StartContainer for \"008841acabbfd6851640d0dce673bf839a01425395966b1660b24ee4aa6cc78c\"" Sep 5 23:51:49.950411 systemd[1]: Started cri-containerd-008841acabbfd6851640d0dce673bf839a01425395966b1660b24ee4aa6cc78c.scope - libcontainer container 008841acabbfd6851640d0dce673bf839a01425395966b1660b24ee4aa6cc78c. Sep 5 23:51:49.987052 containerd[1481]: time="2025-09-05T23:51:49.986930723Z" level=info msg="StartContainer for \"008841acabbfd6851640d0dce673bf839a01425395966b1660b24ee4aa6cc78c\" returns successfully" Sep 5 23:51:49.992668 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 5 23:51:49.992929 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 5 23:51:49.993005 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Sep 5 23:51:50.000820 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 5 23:51:50.001042 systemd[1]: cri-containerd-008841acabbfd6851640d0dce673bf839a01425395966b1660b24ee4aa6cc78c.scope: Deactivated successfully. Sep 5 23:51:50.025320 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 5 23:51:50.027424 containerd[1481]: time="2025-09-05T23:51:50.027370221Z" level=info msg="shim disconnected" id=008841acabbfd6851640d0dce673bf839a01425395966b1660b24ee4aa6cc78c namespace=k8s.io Sep 5 23:51:50.027424 containerd[1481]: time="2025-09-05T23:51:50.027424540Z" level=warning msg="cleaning up after shim disconnected" id=008841acabbfd6851640d0dce673bf839a01425395966b1660b24ee4aa6cc78c namespace=k8s.io Sep 5 23:51:50.027715 containerd[1481]: time="2025-09-05T23:51:50.027433140Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 5 23:51:50.857161 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-008841acabbfd6851640d0dce673bf839a01425395966b1660b24ee4aa6cc78c-rootfs.mount: Deactivated successfully. 
Sep 5 23:51:50.903219 containerd[1481]: time="2025-09-05T23:51:50.901365528Z" level=info msg="CreateContainer within sandbox \"94fa0d62ee559fe77a0206f17b71f60c8ecfb4925d8515ad7a1bb10f9f9e2e3b\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 5 23:51:50.946539 containerd[1481]: time="2025-09-05T23:51:50.946455499Z" level=info msg="CreateContainer within sandbox \"94fa0d62ee559fe77a0206f17b71f60c8ecfb4925d8515ad7a1bb10f9f9e2e3b\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"3bc6a5a1606c12100120268a9e4a91eabf29db9fbf5eea86b8b30ac5336b51b5\"" Sep 5 23:51:50.947282 containerd[1481]: time="2025-09-05T23:51:50.947240658Z" level=info msg="StartContainer for \"3bc6a5a1606c12100120268a9e4a91eabf29db9fbf5eea86b8b30ac5336b51b5\"" Sep 5 23:51:50.980431 systemd[1]: Started cri-containerd-3bc6a5a1606c12100120268a9e4a91eabf29db9fbf5eea86b8b30ac5336b51b5.scope - libcontainer container 3bc6a5a1606c12100120268a9e4a91eabf29db9fbf5eea86b8b30ac5336b51b5. Sep 5 23:51:51.008391 containerd[1481]: time="2025-09-05T23:51:51.008037566Z" level=info msg="StartContainer for \"3bc6a5a1606c12100120268a9e4a91eabf29db9fbf5eea86b8b30ac5336b51b5\" returns successfully" Sep 5 23:51:51.013076 systemd[1]: cri-containerd-3bc6a5a1606c12100120268a9e4a91eabf29db9fbf5eea86b8b30ac5336b51b5.scope: Deactivated successfully. Sep 5 23:51:51.040054 containerd[1481]: time="2025-09-05T23:51:51.039970279Z" level=info msg="shim disconnected" id=3bc6a5a1606c12100120268a9e4a91eabf29db9fbf5eea86b8b30ac5336b51b5 namespace=k8s.io Sep 5 23:51:51.040054 containerd[1481]: time="2025-09-05T23:51:51.040031999Z" level=warning msg="cleaning up after shim disconnected" id=3bc6a5a1606c12100120268a9e4a91eabf29db9fbf5eea86b8b30ac5336b51b5 namespace=k8s.io Sep 5 23:51:51.040054 containerd[1481]: time="2025-09-05T23:51:51.040042959Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 5 23:51:51.855789 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3bc6a5a1606c12100120268a9e4a91eabf29db9fbf5eea86b8b30ac5336b51b5-rootfs.mount: Deactivated successfully. Sep 5 23:51:51.913542 containerd[1481]: time="2025-09-05T23:51:51.911508278Z" level=info msg="CreateContainer within sandbox \"94fa0d62ee559fe77a0206f17b71f60c8ecfb4925d8515ad7a1bb10f9f9e2e3b\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 5 23:51:51.949395 containerd[1481]: time="2025-09-05T23:51:51.949348903Z" level=info msg="CreateContainer within sandbox \"94fa0d62ee559fe77a0206f17b71f60c8ecfb4925d8515ad7a1bb10f9f9e2e3b\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"fa5490e63f8ed9eb8adce34fb7aa99b9ecda27f8c5f51502dbbcb2b81dbb6d56\"" Sep 5 23:51:51.950229 containerd[1481]: time="2025-09-05T23:51:51.950160382Z" level=info msg="StartContainer for \"fa5490e63f8ed9eb8adce34fb7aa99b9ecda27f8c5f51502dbbcb2b81dbb6d56\"" Sep 5 23:51:51.983394 systemd[1]: Started cri-containerd-fa5490e63f8ed9eb8adce34fb7aa99b9ecda27f8c5f51502dbbcb2b81dbb6d56.scope - libcontainer container fa5490e63f8ed9eb8adce34fb7aa99b9ecda27f8c5f51502dbbcb2b81dbb6d56. Sep 5 23:51:52.025726 systemd[1]: cri-containerd-fa5490e63f8ed9eb8adce34fb7aa99b9ecda27f8c5f51502dbbcb2b81dbb6d56.scope: Deactivated successfully. 
Sep 5 23:51:52.029648 containerd[1481]: time="2025-09-05T23:51:52.029047187Z" level=info msg="StartContainer for \"fa5490e63f8ed9eb8adce34fb7aa99b9ecda27f8c5f51502dbbcb2b81dbb6d56\" returns successfully" Sep 5 23:51:52.052856 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fa5490e63f8ed9eb8adce34fb7aa99b9ecda27f8c5f51502dbbcb2b81dbb6d56-rootfs.mount: Deactivated successfully. Sep 5 23:51:52.058487 containerd[1481]: time="2025-09-05T23:51:52.058398146Z" level=info msg="shim disconnected" id=fa5490e63f8ed9eb8adce34fb7aa99b9ecda27f8c5f51502dbbcb2b81dbb6d56 namespace=k8s.io Sep 5 23:51:52.058487 containerd[1481]: time="2025-09-05T23:51:52.058459105Z" level=warning msg="cleaning up after shim disconnected" id=fa5490e63f8ed9eb8adce34fb7aa99b9ecda27f8c5f51502dbbcb2b81dbb6d56 namespace=k8s.io Sep 5 23:51:52.058487 containerd[1481]: time="2025-09-05T23:51:52.058469545Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 5 23:51:52.927252 containerd[1481]: time="2025-09-05T23:51:52.925364717Z" level=info msg="CreateContainer within sandbox \"94fa0d62ee559fe77a0206f17b71f60c8ecfb4925d8515ad7a1bb10f9f9e2e3b\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 5 23:51:52.957545 containerd[1481]: time="2025-09-05T23:51:52.957442512Z" level=info msg="CreateContainer within sandbox \"94fa0d62ee559fe77a0206f17b71f60c8ecfb4925d8515ad7a1bb10f9f9e2e3b\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"be9aee1e55454a0fdffea97ca6fd2088e945522bf4c9e3243185711927aabcb3\"" Sep 5 23:51:52.958720 containerd[1481]: time="2025-09-05T23:51:52.958283710Z" level=info msg="StartContainer for \"be9aee1e55454a0fdffea97ca6fd2088e945522bf4c9e3243185711927aabcb3\"" Sep 5 23:51:52.997609 systemd[1]: Started cri-containerd-be9aee1e55454a0fdffea97ca6fd2088e945522bf4c9e3243185711927aabcb3.scope - libcontainer container be9aee1e55454a0fdffea97ca6fd2088e945522bf4c9e3243185711927aabcb3. Sep 5 23:51:53.035060 containerd[1481]: time="2025-09-05T23:51:53.034901004Z" level=info msg="StartContainer for \"be9aee1e55454a0fdffea97ca6fd2088e945522bf4c9e3243185711927aabcb3\" returns successfully" Sep 5 23:51:53.181335 kubelet[2515]: I0905 23:51:53.180550 2515 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Sep 5 23:51:53.234684 systemd[1]: Created slice kubepods-burstable-pod6f4230aa_6785_4fde_8f49_e839169cefc8.slice - libcontainer container kubepods-burstable-pod6f4230aa_6785_4fde_8f49_e839169cefc8.slice. Sep 5 23:51:53.244005 systemd[1]: Created slice kubepods-burstable-pod36812c6d_feb3_458e_904a_c81abe3ba996.slice - libcontainer container kubepods-burstable-pod36812c6d_feb3_458e_904a_c81abe3ba996.slice. 
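Taken together, the CreateContainer/StartContainer entries in the 94fa0d62... sandbox show the Cilium init containers running in order: mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, clean-cilium-state, then the long-running cilium-agent. A small sketch of how that order could be recovered from messages shaped like the ones above (the sample strings are abbreviated copies of the log lines; the regex is an assumption tailored to this message format, not part of any tool):

    import re

    lines = [
        'CreateContainer within sandbox "94fa0d6..." for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}',
        'CreateContainer within sandbox "94fa0d6..." for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}',
        'CreateContainer within sandbox "94fa0d6..." for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}',
        'CreateContainer within sandbox "94fa0d6..." for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}',
        'CreateContainer within sandbox "94fa0d6..." for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}',
    ]

    order = [m.group(1) for line in lines
             if (m := re.search(r'ContainerMetadata\{Name:([^,]+),', line))]
    print(" -> ".join(order))  # mount-cgroup -> ... -> cilium-agent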
Sep 5 23:51:53.251228 kubelet[2515]: I0905 23:51:53.251154 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9c5zk\" (UniqueName: \"kubernetes.io/projected/36812c6d-feb3-458e-904a-c81abe3ba996-kube-api-access-9c5zk\") pod \"coredns-7c65d6cfc9-zmlwg\" (UID: \"36812c6d-feb3-458e-904a-c81abe3ba996\") " pod="kube-system/coredns-7c65d6cfc9-zmlwg" Sep 5 23:51:53.251385 kubelet[2515]: I0905 23:51:53.251239 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6f4230aa-6785-4fde-8f49-e839169cefc8-config-volume\") pod \"coredns-7c65d6cfc9-l9w86\" (UID: \"6f4230aa-6785-4fde-8f49-e839169cefc8\") " pod="kube-system/coredns-7c65d6cfc9-l9w86" Sep 5 23:51:53.251385 kubelet[2515]: I0905 23:51:53.251262 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n54gc\" (UniqueName: \"kubernetes.io/projected/6f4230aa-6785-4fde-8f49-e839169cefc8-kube-api-access-n54gc\") pod \"coredns-7c65d6cfc9-l9w86\" (UID: \"6f4230aa-6785-4fde-8f49-e839169cefc8\") " pod="kube-system/coredns-7c65d6cfc9-l9w86" Sep 5 23:51:53.251385 kubelet[2515]: I0905 23:51:53.251280 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/36812c6d-feb3-458e-904a-c81abe3ba996-config-volume\") pod \"coredns-7c65d6cfc9-zmlwg\" (UID: \"36812c6d-feb3-458e-904a-c81abe3ba996\") " pod="kube-system/coredns-7c65d6cfc9-zmlwg" Sep 5 23:51:53.541099 containerd[1481]: time="2025-09-05T23:51:53.540139552Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-l9w86,Uid:6f4230aa-6785-4fde-8f49-e839169cefc8,Namespace:kube-system,Attempt:0,}" Sep 5 23:51:53.549576 containerd[1481]: time="2025-09-05T23:51:53.549517819Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-zmlwg,Uid:36812c6d-feb3-458e-904a-c81abe3ba996,Namespace:kube-system,Attempt:0,}" Sep 5 23:51:54.153373 containerd[1481]: time="2025-09-05T23:51:54.153315840Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:51:54.154942 containerd[1481]: time="2025-09-05T23:51:54.154767958Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Sep 5 23:51:54.156609 containerd[1481]: time="2025-09-05T23:51:54.156563036Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:51:54.160535 containerd[1481]: time="2025-09-05T23:51:54.160443911Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 5.320525481s" Sep 5 23:51:54.160535 containerd[1481]: time="2025-09-05T23:51:54.160525031Z" level=info msg="PullImage 
\"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Sep 5 23:51:54.164811 containerd[1481]: time="2025-09-05T23:51:54.164752305Z" level=info msg="CreateContainer within sandbox \"bc88078acab9915345648d8a24df1ef2dc9a212b6958b7012f3306ed51b7bef7\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 5 23:51:54.182499 containerd[1481]: time="2025-09-05T23:51:54.182323922Z" level=info msg="CreateContainer within sandbox \"bc88078acab9915345648d8a24df1ef2dc9a212b6958b7012f3306ed51b7bef7\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"25ae5bc2d2af26f9792c21f0d1850158b1e0f27c200e0b50c51f04be1057317c\"" Sep 5 23:51:54.184123 containerd[1481]: time="2025-09-05T23:51:54.183412521Z" level=info msg="StartContainer for \"25ae5bc2d2af26f9792c21f0d1850158b1e0f27c200e0b50c51f04be1057317c\"" Sep 5 23:51:54.219526 systemd[1]: Started cri-containerd-25ae5bc2d2af26f9792c21f0d1850158b1e0f27c200e0b50c51f04be1057317c.scope - libcontainer container 25ae5bc2d2af26f9792c21f0d1850158b1e0f27c200e0b50c51f04be1057317c. Sep 5 23:51:54.248113 containerd[1481]: time="2025-09-05T23:51:54.248043715Z" level=info msg="StartContainer for \"25ae5bc2d2af26f9792c21f0d1850158b1e0f27c200e0b50c51f04be1057317c\" returns successfully" Sep 5 23:51:54.942309 kubelet[2515]: I0905 23:51:54.942234 2515 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-kzrc7" podStartSLOduration=7.043124267 podStartE2EDuration="20.942211437s" podCreationTimestamp="2025-09-05 23:51:34 +0000 UTC" firstStartedPulling="2025-09-05 23:51:34.939830582 +0000 UTC m=+5.270181951" lastFinishedPulling="2025-09-05 23:51:48.838917792 +0000 UTC m=+19.169269121" observedRunningTime="2025-09-05 23:51:53.950305271 +0000 UTC m=+24.280656640" watchObservedRunningTime="2025-09-05 23:51:54.942211437 +0000 UTC m=+25.272562806" Sep 5 23:51:58.261118 systemd-networkd[1377]: cilium_host: Link UP Sep 5 23:51:58.263246 systemd-networkd[1377]: cilium_net: Link UP Sep 5 23:51:58.263251 systemd-networkd[1377]: cilium_net: Gained carrier Sep 5 23:51:58.263466 systemd-networkd[1377]: cilium_host: Gained carrier Sep 5 23:51:58.263612 systemd-networkd[1377]: cilium_host: Gained IPv6LL Sep 5 23:51:58.280384 systemd-networkd[1377]: cilium_net: Gained IPv6LL Sep 5 23:51:58.372046 systemd-networkd[1377]: cilium_vxlan: Link UP Sep 5 23:51:58.372753 systemd-networkd[1377]: cilium_vxlan: Gained carrier Sep 5 23:51:58.661253 kernel: NET: Registered PF_ALG protocol family Sep 5 23:51:59.392917 systemd-networkd[1377]: lxc_health: Link UP Sep 5 23:51:59.403519 systemd-networkd[1377]: lxc_health: Gained carrier Sep 5 23:51:59.631849 kernel: eth0: renamed from tmpd842c Sep 5 23:51:59.632015 systemd-networkd[1377]: lxcbc985c70a5d3: Link UP Sep 5 23:51:59.640291 systemd-networkd[1377]: lxc28b8f1d4b43c: Link UP Sep 5 23:51:59.650081 systemd-networkd[1377]: lxcbc985c70a5d3: Gained carrier Sep 5 23:51:59.652425 kernel: eth0: renamed from tmpf8125 Sep 5 23:51:59.654952 systemd-networkd[1377]: lxc28b8f1d4b43c: Gained carrier Sep 5 23:51:59.955468 systemd-networkd[1377]: cilium_vxlan: Gained IPv6LL Sep 5 23:52:00.787594 systemd-networkd[1377]: lxc28b8f1d4b43c: Gained IPv6LL Sep 5 23:52:00.871233 kubelet[2515]: I0905 23:52:00.871141 2515 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-5s7sj" 
podStartSLOduration=7.945335804 podStartE2EDuration="26.871122702s" podCreationTimestamp="2025-09-05 23:51:34 +0000 UTC" firstStartedPulling="2025-09-05 23:51:35.235594212 +0000 UTC m=+5.565945581" lastFinishedPulling="2025-09-05 23:51:54.16138111 +0000 UTC m=+24.491732479" observedRunningTime="2025-09-05 23:51:54.943716795 +0000 UTC m=+25.274068164" watchObservedRunningTime="2025-09-05 23:52:00.871122702 +0000 UTC m=+31.201474071" Sep 5 23:52:01.043349 systemd-networkd[1377]: lxcbc985c70a5d3: Gained IPv6LL Sep 5 23:52:01.427454 systemd-networkd[1377]: lxc_health: Gained IPv6LL Sep 5 23:52:03.936244 containerd[1481]: time="2025-09-05T23:52:03.934347730Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 23:52:03.936244 containerd[1481]: time="2025-09-05T23:52:03.934423250Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 23:52:03.936244 containerd[1481]: time="2025-09-05T23:52:03.934449490Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:52:03.936244 containerd[1481]: time="2025-09-05T23:52:03.934535970Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:52:03.968323 containerd[1481]: time="2025-09-05T23:52:03.965441738Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 23:52:03.970688 containerd[1481]: time="2025-09-05T23:52:03.970297053Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 23:52:03.972520 systemd[1]: Started cri-containerd-d842c3c34293509c040fb35350279a9607c183017f371b1ae790df7aafaa453c.scope - libcontainer container d842c3c34293509c040fb35350279a9607c183017f371b1ae790df7aafaa453c. Sep 5 23:52:03.975497 containerd[1481]: time="2025-09-05T23:52:03.971827491Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:52:03.978584 containerd[1481]: time="2025-09-05T23:52:03.978456805Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:52:04.009544 systemd[1]: run-containerd-runc-k8s.io-f8125433f5ec98ca3134bea0b032ffc3fdc6343e45f1f511b4905c7f71ac6e00-runc.RsLiMY.mount: Deactivated successfully. Sep 5 23:52:04.021726 systemd[1]: Started cri-containerd-f8125433f5ec98ca3134bea0b032ffc3fdc6343e45f1f511b4905c7f71ac6e00.scope - libcontainer container f8125433f5ec98ca3134bea0b032ffc3fdc6343e45f1f511b4905c7f71ac6e00. 
Sep 5 23:52:04.074174 containerd[1481]: time="2025-09-05T23:52:04.074091349Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-zmlwg,Uid:36812c6d-feb3-458e-904a-c81abe3ba996,Namespace:kube-system,Attempt:0,} returns sandbox id \"d842c3c34293509c040fb35350279a9607c183017f371b1ae790df7aafaa453c\"" Sep 5 23:52:04.082076 containerd[1481]: time="2025-09-05T23:52:04.081629941Z" level=info msg="CreateContainer within sandbox \"d842c3c34293509c040fb35350279a9607c183017f371b1ae790df7aafaa453c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 5 23:52:04.087035 containerd[1481]: time="2025-09-05T23:52:04.086589577Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-l9w86,Uid:6f4230aa-6785-4fde-8f49-e839169cefc8,Namespace:kube-system,Attempt:0,} returns sandbox id \"f8125433f5ec98ca3134bea0b032ffc3fdc6343e45f1f511b4905c7f71ac6e00\"" Sep 5 23:52:04.092730 containerd[1481]: time="2025-09-05T23:52:04.092266011Z" level=info msg="CreateContainer within sandbox \"f8125433f5ec98ca3134bea0b032ffc3fdc6343e45f1f511b4905c7f71ac6e00\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 5 23:52:04.110111 containerd[1481]: time="2025-09-05T23:52:04.110050393Z" level=info msg="CreateContainer within sandbox \"d842c3c34293509c040fb35350279a9607c183017f371b1ae790df7aafaa453c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"191f1f48f13b1991c591b8f44c578c3cdddad8731e2e7e47dffbaeaf84aea58a\"" Sep 5 23:52:04.111590 containerd[1481]: time="2025-09-05T23:52:04.111480432Z" level=info msg="StartContainer for \"191f1f48f13b1991c591b8f44c578c3cdddad8731e2e7e47dffbaeaf84aea58a\"" Sep 5 23:52:04.142241 containerd[1481]: time="2025-09-05T23:52:04.141787682Z" level=info msg="CreateContainer within sandbox \"f8125433f5ec98ca3134bea0b032ffc3fdc6343e45f1f511b4905c7f71ac6e00\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1d41ceec237093eed18afc20d7ee84f31c1a8f470b032fabb6a23556701a9bfe\"" Sep 5 23:52:04.145860 containerd[1481]: time="2025-09-05T23:52:04.145606998Z" level=info msg="StartContainer for \"1d41ceec237093eed18afc20d7ee84f31c1a8f470b032fabb6a23556701a9bfe\"" Sep 5 23:52:04.157086 systemd[1]: Started cri-containerd-191f1f48f13b1991c591b8f44c578c3cdddad8731e2e7e47dffbaeaf84aea58a.scope - libcontainer container 191f1f48f13b1991c591b8f44c578c3cdddad8731e2e7e47dffbaeaf84aea58a. Sep 5 23:52:04.188501 systemd[1]: Started cri-containerd-1d41ceec237093eed18afc20d7ee84f31c1a8f470b032fabb6a23556701a9bfe.scope - libcontainer container 1d41ceec237093eed18afc20d7ee84f31c1a8f470b032fabb6a23556701a9bfe. 
Sep 5 23:52:04.209051 containerd[1481]: time="2025-09-05T23:52:04.208801615Z" level=info msg="StartContainer for \"191f1f48f13b1991c591b8f44c578c3cdddad8731e2e7e47dffbaeaf84aea58a\" returns successfully" Sep 5 23:52:04.231294 containerd[1481]: time="2025-09-05T23:52:04.230956953Z" level=info msg="StartContainer for \"1d41ceec237093eed18afc20d7ee84f31c1a8f470b032fabb6a23556701a9bfe\" returns successfully" Sep 5 23:52:04.981731 kubelet[2515]: I0905 23:52:04.981651 2515 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-zmlwg" podStartSLOduration=30.981630045 podStartE2EDuration="30.981630045s" podCreationTimestamp="2025-09-05 23:51:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 23:52:04.980748086 +0000 UTC m=+35.311099535" watchObservedRunningTime="2025-09-05 23:52:04.981630045 +0000 UTC m=+35.311981414" Sep 5 23:52:05.000250 kubelet[2515]: I0905 23:52:04.999557 2515 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-l9w86" podStartSLOduration=30.999535708 podStartE2EDuration="30.999535708s" podCreationTimestamp="2025-09-05 23:51:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 23:52:04.998042349 +0000 UTC m=+35.328393718" watchObservedRunningTime="2025-09-05 23:52:04.999535708 +0000 UTC m=+35.329887077" Sep 5 23:53:54.696641 systemd[1]: Started sshd@7-91.99.146.49:22-139.178.68.195:56996.service - OpenSSH per-connection server daemon (139.178.68.195:56996). Sep 5 23:53:55.698812 sshd[3935]: Accepted publickey for core from 139.178.68.195 port 56996 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 5 23:53:55.704080 sshd[3935]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:53:55.713121 systemd-logind[1459]: New session 8 of user core. Sep 5 23:53:55.728693 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 5 23:53:56.489606 sshd[3935]: pam_unix(sshd:session): session closed for user core Sep 5 23:53:56.495747 systemd-logind[1459]: Session 8 logged out. Waiting for processes to exit. Sep 5 23:53:56.496258 systemd[1]: sshd@7-91.99.146.49:22-139.178.68.195:56996.service: Deactivated successfully. Sep 5 23:53:56.499912 systemd[1]: session-8.scope: Deactivated successfully. Sep 5 23:53:56.502733 systemd-logind[1459]: Removed session 8. Sep 5 23:54:01.667562 systemd[1]: Started sshd@8-91.99.146.49:22-139.178.68.195:43318.service - OpenSSH per-connection server daemon (139.178.68.195:43318). Sep 5 23:54:02.669797 sshd[3949]: Accepted publickey for core from 139.178.68.195 port 43318 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 5 23:54:02.672667 sshd[3949]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:54:02.683207 systemd-logind[1459]: New session 9 of user core. Sep 5 23:54:02.688666 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 5 23:54:03.455580 sshd[3949]: pam_unix(sshd:session): session closed for user core Sep 5 23:54:03.460942 systemd[1]: sshd@8-91.99.146.49:22-139.178.68.195:43318.service: Deactivated successfully. Sep 5 23:54:03.464162 systemd[1]: session-9.scope: Deactivated successfully. Sep 5 23:54:03.467366 systemd-logind[1459]: Session 9 logged out. Waiting for processes to exit. 
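In the kubelet entries the trailing m=+... value is the Go monotonic-clock offset, effectively seconds since the kubelet process started, so subtracting it from the wall-clock part of the same field approximates the kubelet start time. A quick check against the coredns-7c65d6cfc9-zmlwg entry above (pure arithmetic on values copied from the log):

    from datetime import datetime, timedelta, timezone

    # watchObservedRunningTime from the log: wall clock plus monotonic offset m=+35.311981414
    wall   = datetime(2025, 9, 5, 23, 52, 4, 981630, tzinfo=timezone.utc)
    m_offs = timedelta(seconds=35.311981414)

    kubelet_start = wall - m_offs
    print(kubelet_start.isoformat())  # ~2025-09-05T23:51:29.669649+00:00, shortly before the
                                      # "Started kubelet" line at 23:51:29.75 above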
Sep 5 23:54:03.468952 systemd-logind[1459]: Removed session 9. Sep 5 23:54:08.635700 systemd[1]: Started sshd@9-91.99.146.49:22-139.178.68.195:43326.service - OpenSSH per-connection server daemon (139.178.68.195:43326). Sep 5 23:54:09.626568 sshd[3965]: Accepted publickey for core from 139.178.68.195 port 43326 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 5 23:54:09.630585 sshd[3965]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:54:09.637876 systemd-logind[1459]: New session 10 of user core. Sep 5 23:54:09.647406 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 5 23:54:10.411879 sshd[3965]: pam_unix(sshd:session): session closed for user core Sep 5 23:54:10.417625 systemd[1]: sshd@9-91.99.146.49:22-139.178.68.195:43326.service: Deactivated successfully. Sep 5 23:54:10.420694 systemd[1]: session-10.scope: Deactivated successfully. Sep 5 23:54:10.423941 systemd-logind[1459]: Session 10 logged out. Waiting for processes to exit. Sep 5 23:54:10.425359 systemd-logind[1459]: Removed session 10. Sep 5 23:54:10.610569 systemd[1]: Started sshd@10-91.99.146.49:22-139.178.68.195:58646.service - OpenSSH per-connection server daemon (139.178.68.195:58646). Sep 5 23:54:11.659579 sshd[3979]: Accepted publickey for core from 139.178.68.195 port 58646 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 5 23:54:11.661591 sshd[3979]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:54:11.667372 systemd-logind[1459]: New session 11 of user core. Sep 5 23:54:11.672511 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 5 23:54:12.503782 sshd[3979]: pam_unix(sshd:session): session closed for user core Sep 5 23:54:12.509736 systemd[1]: sshd@10-91.99.146.49:22-139.178.68.195:58646.service: Deactivated successfully. Sep 5 23:54:12.512279 systemd[1]: session-11.scope: Deactivated successfully. Sep 5 23:54:12.514222 systemd-logind[1459]: Session 11 logged out. Waiting for processes to exit. Sep 5 23:54:12.515887 systemd-logind[1459]: Removed session 11. Sep 5 23:54:12.689726 systemd[1]: Started sshd@11-91.99.146.49:22-139.178.68.195:58654.service - OpenSSH per-connection server daemon (139.178.68.195:58654). Sep 5 23:54:13.686416 sshd[3990]: Accepted publickey for core from 139.178.68.195 port 58654 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 5 23:54:13.688851 sshd[3990]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:54:13.693826 systemd-logind[1459]: New session 12 of user core. Sep 5 23:54:13.702529 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 5 23:54:14.449112 sshd[3990]: pam_unix(sshd:session): session closed for user core Sep 5 23:54:14.453858 systemd[1]: sshd@11-91.99.146.49:22-139.178.68.195:58654.service: Deactivated successfully. Sep 5 23:54:14.456812 systemd[1]: session-12.scope: Deactivated successfully. Sep 5 23:54:14.460832 systemd-logind[1459]: Session 12 logged out. Waiting for processes to exit. Sep 5 23:54:14.462407 systemd-logind[1459]: Removed session 12. Sep 5 23:54:19.635965 systemd[1]: Started sshd@12-91.99.146.49:22-139.178.68.195:58658.service - OpenSSH per-connection server daemon (139.178.68.195:58658). 
Sep 5 23:54:20.636488 sshd[4003]: Accepted publickey for core from 139.178.68.195 port 58658 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 5 23:54:20.638953 sshd[4003]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:54:20.643459 systemd-logind[1459]: New session 13 of user core. Sep 5 23:54:20.649577 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 5 23:54:21.397054 sshd[4003]: pam_unix(sshd:session): session closed for user core Sep 5 23:54:21.401460 systemd-logind[1459]: Session 13 logged out. Waiting for processes to exit. Sep 5 23:54:21.402237 systemd[1]: sshd@12-91.99.146.49:22-139.178.68.195:58658.service: Deactivated successfully. Sep 5 23:54:21.404801 systemd[1]: session-13.scope: Deactivated successfully. Sep 5 23:54:21.407625 systemd-logind[1459]: Removed session 13. Sep 5 23:54:21.573689 systemd[1]: Started sshd@13-91.99.146.49:22-139.178.68.195:52282.service - OpenSSH per-connection server daemon (139.178.68.195:52282). Sep 5 23:54:22.568081 sshd[4016]: Accepted publickey for core from 139.178.68.195 port 52282 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 5 23:54:22.571083 sshd[4016]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:54:22.580897 systemd-logind[1459]: New session 14 of user core. Sep 5 23:54:22.586845 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 5 23:54:23.394622 sshd[4016]: pam_unix(sshd:session): session closed for user core Sep 5 23:54:23.399210 systemd[1]: sshd@13-91.99.146.49:22-139.178.68.195:52282.service: Deactivated successfully. Sep 5 23:54:23.402754 systemd[1]: session-14.scope: Deactivated successfully. Sep 5 23:54:23.406494 systemd-logind[1459]: Session 14 logged out. Waiting for processes to exit. Sep 5 23:54:23.408076 systemd-logind[1459]: Removed session 14. Sep 5 23:54:23.569746 systemd[1]: Started sshd@14-91.99.146.49:22-139.178.68.195:52288.service - OpenSSH per-connection server daemon (139.178.68.195:52288). Sep 5 23:54:24.571559 sshd[4027]: Accepted publickey for core from 139.178.68.195 port 52288 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 5 23:54:24.574259 sshd[4027]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:54:24.580271 systemd-logind[1459]: New session 15 of user core. Sep 5 23:54:24.585533 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 5 23:54:26.616997 sshd[4027]: pam_unix(sshd:session): session closed for user core Sep 5 23:54:26.623322 systemd-logind[1459]: Session 15 logged out. Waiting for processes to exit. Sep 5 23:54:26.624485 systemd[1]: sshd@14-91.99.146.49:22-139.178.68.195:52288.service: Deactivated successfully. Sep 5 23:54:26.627092 systemd[1]: session-15.scope: Deactivated successfully. Sep 5 23:54:26.628869 systemd-logind[1459]: Removed session 15. Sep 5 23:54:26.797740 systemd[1]: Started sshd@15-91.99.146.49:22-139.178.68.195:52296.service - OpenSSH per-connection server daemon (139.178.68.195:52296). Sep 5 23:54:27.797335 sshd[4045]: Accepted publickey for core from 139.178.68.195 port 52296 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 5 23:54:27.799568 sshd[4045]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:54:27.805697 systemd-logind[1459]: New session 16 of user core. Sep 5 23:54:27.812480 systemd[1]: Started session-16.scope - Session 16 of User core. 
Sep 5 23:54:28.666585 sshd[4045]: pam_unix(sshd:session): session closed for user core Sep 5 23:54:28.672331 systemd[1]: sshd@15-91.99.146.49:22-139.178.68.195:52296.service: Deactivated successfully. Sep 5 23:54:28.674864 systemd[1]: session-16.scope: Deactivated successfully. Sep 5 23:54:28.676562 systemd-logind[1459]: Session 16 logged out. Waiting for processes to exit. Sep 5 23:54:28.677895 systemd-logind[1459]: Removed session 16. Sep 5 23:54:28.865516 systemd[1]: Started sshd@16-91.99.146.49:22-139.178.68.195:52300.service - OpenSSH per-connection server daemon (139.178.68.195:52300). Sep 5 23:54:29.917718 sshd[4055]: Accepted publickey for core from 139.178.68.195 port 52300 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 5 23:54:29.920645 sshd[4055]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:54:29.925957 systemd-logind[1459]: New session 17 of user core. Sep 5 23:54:29.941641 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 5 23:54:30.724308 sshd[4055]: pam_unix(sshd:session): session closed for user core Sep 5 23:54:30.730033 systemd[1]: sshd@16-91.99.146.49:22-139.178.68.195:52300.service: Deactivated successfully. Sep 5 23:54:30.730278 systemd-logind[1459]: Session 17 logged out. Waiting for processes to exit. Sep 5 23:54:30.732398 systemd[1]: session-17.scope: Deactivated successfully. Sep 5 23:54:30.734366 systemd-logind[1459]: Removed session 17. Sep 5 23:54:35.901051 systemd[1]: Started sshd@17-91.99.146.49:22-139.178.68.195:34662.service - OpenSSH per-connection server daemon (139.178.68.195:34662). Sep 5 23:54:36.896529 sshd[4075]: Accepted publickey for core from 139.178.68.195 port 34662 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 5 23:54:36.898913 sshd[4075]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:54:36.903790 systemd-logind[1459]: New session 18 of user core. Sep 5 23:54:36.908414 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 5 23:54:37.651788 sshd[4075]: pam_unix(sshd:session): session closed for user core Sep 5 23:54:37.658344 systemd[1]: sshd@17-91.99.146.49:22-139.178.68.195:34662.service: Deactivated successfully. Sep 5 23:54:37.661056 systemd[1]: session-18.scope: Deactivated successfully. Sep 5 23:54:37.662467 systemd-logind[1459]: Session 18 logged out. Waiting for processes to exit. Sep 5 23:54:37.663920 systemd-logind[1459]: Removed session 18. Sep 5 23:54:42.861634 systemd[1]: Started sshd@18-91.99.146.49:22-139.178.68.195:49764.service - OpenSSH per-connection server daemon (139.178.68.195:49764). Sep 5 23:54:43.966406 sshd[4087]: Accepted publickey for core from 139.178.68.195 port 49764 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 5 23:54:43.968837 sshd[4087]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:54:43.975842 systemd-logind[1459]: New session 19 of user core. Sep 5 23:54:43.982487 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 5 23:54:44.803930 sshd[4087]: pam_unix(sshd:session): session closed for user core Sep 5 23:54:44.810145 systemd-logind[1459]: Session 19 logged out. Waiting for processes to exit. Sep 5 23:54:44.810608 systemd[1]: sshd@18-91.99.146.49:22-139.178.68.195:49764.service: Deactivated successfully. Sep 5 23:54:44.813735 systemd[1]: session-19.scope: Deactivated successfully. Sep 5 23:54:44.816592 systemd-logind[1459]: Removed session 19. 
Sep 5 23:54:44.972514 systemd[1]: Started sshd@19-91.99.146.49:22-139.178.68.195:49768.service - OpenSSH per-connection server daemon (139.178.68.195:49768). Sep 5 23:54:45.961816 sshd[4100]: Accepted publickey for core from 139.178.68.195 port 49768 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 5 23:54:45.964991 sshd[4100]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:54:45.970235 systemd-logind[1459]: New session 20 of user core. Sep 5 23:54:45.976498 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 5 23:54:48.232123 containerd[1481]: time="2025-09-05T23:54:48.232026968Z" level=info msg="StopContainer for \"25ae5bc2d2af26f9792c21f0d1850158b1e0f27c200e0b50c51f04be1057317c\" with timeout 30 (s)" Sep 5 23:54:48.233252 containerd[1481]: time="2025-09-05T23:54:48.233108050Z" level=info msg="Stop container \"25ae5bc2d2af26f9792c21f0d1850158b1e0f27c200e0b50c51f04be1057317c\" with signal terminated" Sep 5 23:54:48.250676 systemd[1]: run-containerd-runc-k8s.io-be9aee1e55454a0fdffea97ca6fd2088e945522bf4c9e3243185711927aabcb3-runc.loT2ef.mount: Deactivated successfully. Sep 5 23:54:48.253333 systemd[1]: cri-containerd-25ae5bc2d2af26f9792c21f0d1850158b1e0f27c200e0b50c51f04be1057317c.scope: Deactivated successfully. Sep 5 23:54:48.264717 containerd[1481]: time="2025-09-05T23:54:48.264312231Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 5 23:54:48.275063 containerd[1481]: time="2025-09-05T23:54:48.274923492Z" level=info msg="StopContainer for \"be9aee1e55454a0fdffea97ca6fd2088e945522bf4c9e3243185711927aabcb3\" with timeout 2 (s)" Sep 5 23:54:48.275540 containerd[1481]: time="2025-09-05T23:54:48.275468253Z" level=info msg="Stop container \"be9aee1e55454a0fdffea97ca6fd2088e945522bf4c9e3243185711927aabcb3\" with signal terminated" Sep 5 23:54:48.286638 systemd-networkd[1377]: lxc_health: Link DOWN Sep 5 23:54:48.286646 systemd-networkd[1377]: lxc_health: Lost carrier Sep 5 23:54:48.301931 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-25ae5bc2d2af26f9792c21f0d1850158b1e0f27c200e0b50c51f04be1057317c-rootfs.mount: Deactivated successfully. Sep 5 23:54:48.312851 systemd[1]: cri-containerd-be9aee1e55454a0fdffea97ca6fd2088e945522bf4c9e3243185711927aabcb3.scope: Deactivated successfully. Sep 5 23:54:48.313171 systemd[1]: cri-containerd-be9aee1e55454a0fdffea97ca6fd2088e945522bf4c9e3243185711927aabcb3.scope: Consumed 7.659s CPU time. Sep 5 23:54:48.326561 containerd[1481]: time="2025-09-05T23:54:48.326287392Z" level=info msg="shim disconnected" id=25ae5bc2d2af26f9792c21f0d1850158b1e0f27c200e0b50c51f04be1057317c namespace=k8s.io Sep 5 23:54:48.326561 containerd[1481]: time="2025-09-05T23:54:48.326359912Z" level=warning msg="cleaning up after shim disconnected" id=25ae5bc2d2af26f9792c21f0d1850158b1e0f27c200e0b50c51f04be1057317c namespace=k8s.io Sep 5 23:54:48.326561 containerd[1481]: time="2025-09-05T23:54:48.326369072Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 5 23:54:48.345997 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-be9aee1e55454a0fdffea97ca6fd2088e945522bf4c9e3243185711927aabcb3-rootfs.mount: Deactivated successfully. 
Sep 5 23:54:48.350596 containerd[1481]: time="2025-09-05T23:54:48.350430439Z" level=info msg="shim disconnected" id=be9aee1e55454a0fdffea97ca6fd2088e945522bf4c9e3243185711927aabcb3 namespace=k8s.io Sep 5 23:54:48.350854 containerd[1481]: time="2025-09-05T23:54:48.350834240Z" level=warning msg="cleaning up after shim disconnected" id=be9aee1e55454a0fdffea97ca6fd2088e945522bf4c9e3243185711927aabcb3 namespace=k8s.io Sep 5 23:54:48.351005 containerd[1481]: time="2025-09-05T23:54:48.350991040Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 5 23:54:48.355028 containerd[1481]: time="2025-09-05T23:54:48.354985488Z" level=info msg="StopContainer for \"25ae5bc2d2af26f9792c21f0d1850158b1e0f27c200e0b50c51f04be1057317c\" returns successfully" Sep 5 23:54:48.358079 containerd[1481]: time="2025-09-05T23:54:48.356867052Z" level=info msg="StopPodSandbox for \"bc88078acab9915345648d8a24df1ef2dc9a212b6958b7012f3306ed51b7bef7\"" Sep 5 23:54:48.358079 containerd[1481]: time="2025-09-05T23:54:48.356919212Z" level=info msg="Container to stop \"25ae5bc2d2af26f9792c21f0d1850158b1e0f27c200e0b50c51f04be1057317c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 5 23:54:48.361391 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-bc88078acab9915345648d8a24df1ef2dc9a212b6958b7012f3306ed51b7bef7-shm.mount: Deactivated successfully. Sep 5 23:54:48.377673 systemd[1]: cri-containerd-bc88078acab9915345648d8a24df1ef2dc9a212b6958b7012f3306ed51b7bef7.scope: Deactivated successfully. Sep 5 23:54:48.380485 containerd[1481]: time="2025-09-05T23:54:48.380440138Z" level=info msg="StopContainer for \"be9aee1e55454a0fdffea97ca6fd2088e945522bf4c9e3243185711927aabcb3\" returns successfully" Sep 5 23:54:48.381486 containerd[1481]: time="2025-09-05T23:54:48.381263980Z" level=info msg="StopPodSandbox for \"94fa0d62ee559fe77a0206f17b71f60c8ecfb4925d8515ad7a1bb10f9f9e2e3b\"" Sep 5 23:54:48.381486 containerd[1481]: time="2025-09-05T23:54:48.381307380Z" level=info msg="Container to stop \"008841acabbfd6851640d0dce673bf839a01425395966b1660b24ee4aa6cc78c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 5 23:54:48.381486 containerd[1481]: time="2025-09-05T23:54:48.381320140Z" level=info msg="Container to stop \"3bc6a5a1606c12100120268a9e4a91eabf29db9fbf5eea86b8b30ac5336b51b5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 5 23:54:48.381486 containerd[1481]: time="2025-09-05T23:54:48.381329540Z" level=info msg="Container to stop \"06db07c711712bb2667c4c40e28c0845ea047a8ce25576e3e0d6761422c23f79\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 5 23:54:48.381486 containerd[1481]: time="2025-09-05T23:54:48.381341260Z" level=info msg="Container to stop \"fa5490e63f8ed9eb8adce34fb7aa99b9ecda27f8c5f51502dbbcb2b81dbb6d56\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 5 23:54:48.381486 containerd[1481]: time="2025-09-05T23:54:48.381350780Z" level=info msg="Container to stop \"be9aee1e55454a0fdffea97ca6fd2088e945522bf4c9e3243185711927aabcb3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 5 23:54:48.388018 systemd[1]: cri-containerd-94fa0d62ee559fe77a0206f17b71f60c8ecfb4925d8515ad7a1bb10f9f9e2e3b.scope: Deactivated successfully. 
Sep 5 23:54:48.419772 containerd[1481]: time="2025-09-05T23:54:48.419385574Z" level=info msg="shim disconnected" id=bc88078acab9915345648d8a24df1ef2dc9a212b6958b7012f3306ed51b7bef7 namespace=k8s.io Sep 5 23:54:48.419772 containerd[1481]: time="2025-09-05T23:54:48.419497174Z" level=warning msg="cleaning up after shim disconnected" id=bc88078acab9915345648d8a24df1ef2dc9a212b6958b7012f3306ed51b7bef7 namespace=k8s.io Sep 5 23:54:48.419772 containerd[1481]: time="2025-09-05T23:54:48.419517494Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 5 23:54:48.421945 containerd[1481]: time="2025-09-05T23:54:48.421441698Z" level=info msg="shim disconnected" id=94fa0d62ee559fe77a0206f17b71f60c8ecfb4925d8515ad7a1bb10f9f9e2e3b namespace=k8s.io Sep 5 23:54:48.421945 containerd[1481]: time="2025-09-05T23:54:48.421527658Z" level=warning msg="cleaning up after shim disconnected" id=94fa0d62ee559fe77a0206f17b71f60c8ecfb4925d8515ad7a1bb10f9f9e2e3b namespace=k8s.io Sep 5 23:54:48.421945 containerd[1481]: time="2025-09-05T23:54:48.421549658Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 5 23:54:48.443035 containerd[1481]: time="2025-09-05T23:54:48.442974380Z" level=info msg="TearDown network for sandbox \"bc88078acab9915345648d8a24df1ef2dc9a212b6958b7012f3306ed51b7bef7\" successfully" Sep 5 23:54:48.443726 containerd[1481]: time="2025-09-05T23:54:48.443702862Z" level=info msg="StopPodSandbox for \"bc88078acab9915345648d8a24df1ef2dc9a212b6958b7012f3306ed51b7bef7\" returns successfully" Sep 5 23:54:48.444180 containerd[1481]: time="2025-09-05T23:54:48.443695342Z" level=info msg="TearDown network for sandbox \"94fa0d62ee559fe77a0206f17b71f60c8ecfb4925d8515ad7a1bb10f9f9e2e3b\" successfully" Sep 5 23:54:48.444180 containerd[1481]: time="2025-09-05T23:54:48.444004462Z" level=info msg="StopPodSandbox for \"94fa0d62ee559fe77a0206f17b71f60c8ecfb4925d8515ad7a1bb10f9f9e2e3b\" returns successfully" Sep 5 23:54:48.512963 kubelet[2515]: I0905 23:54:48.511345 2515 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6f8b4002-a581-472e-bded-1c11930cf33b-hostproc\") pod \"6f8b4002-a581-472e-bded-1c11930cf33b\" (UID: \"6f8b4002-a581-472e-bded-1c11930cf33b\") " Sep 5 23:54:48.512963 kubelet[2515]: I0905 23:54:48.511418 2515 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6f8b4002-a581-472e-bded-1c11930cf33b-cilium-run\") pod \"6f8b4002-a581-472e-bded-1c11930cf33b\" (UID: \"6f8b4002-a581-472e-bded-1c11930cf33b\") " Sep 5 23:54:48.512963 kubelet[2515]: I0905 23:54:48.511460 2515 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6f8b4002-a581-472e-bded-1c11930cf33b-cilium-config-path\") pod \"6f8b4002-a581-472e-bded-1c11930cf33b\" (UID: \"6f8b4002-a581-472e-bded-1c11930cf33b\") " Sep 5 23:54:48.512963 kubelet[2515]: I0905 23:54:48.511496 2515 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6f8b4002-a581-472e-bded-1c11930cf33b-host-proc-sys-kernel\") pod \"6f8b4002-a581-472e-bded-1c11930cf33b\" (UID: \"6f8b4002-a581-472e-bded-1c11930cf33b\") " Sep 5 23:54:48.512963 kubelet[2515]: I0905 23:54:48.511531 2515 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8jdg2\" (UniqueName: 
\"kubernetes.io/projected/6f8b4002-a581-472e-bded-1c11930cf33b-kube-api-access-8jdg2\") pod \"6f8b4002-a581-472e-bded-1c11930cf33b\" (UID: \"6f8b4002-a581-472e-bded-1c11930cf33b\") " Sep 5 23:54:48.512963 kubelet[2515]: I0905 23:54:48.511567 2515 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6f8b4002-a581-472e-bded-1c11930cf33b-hubble-tls\") pod \"6f8b4002-a581-472e-bded-1c11930cf33b\" (UID: \"6f8b4002-a581-472e-bded-1c11930cf33b\") " Sep 5 23:54:48.513841 kubelet[2515]: I0905 23:54:48.511602 2515 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-chnht\" (UniqueName: \"kubernetes.io/projected/ba20845f-1330-4e52-86e4-e4a7212b3432-kube-api-access-chnht\") pod \"ba20845f-1330-4e52-86e4-e4a7212b3432\" (UID: \"ba20845f-1330-4e52-86e4-e4a7212b3432\") " Sep 5 23:54:48.513841 kubelet[2515]: I0905 23:54:48.511631 2515 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6f8b4002-a581-472e-bded-1c11930cf33b-xtables-lock\") pod \"6f8b4002-a581-472e-bded-1c11930cf33b\" (UID: \"6f8b4002-a581-472e-bded-1c11930cf33b\") " Sep 5 23:54:48.513841 kubelet[2515]: I0905 23:54:48.511664 2515 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6f8b4002-a581-472e-bded-1c11930cf33b-lib-modules\") pod \"6f8b4002-a581-472e-bded-1c11930cf33b\" (UID: \"6f8b4002-a581-472e-bded-1c11930cf33b\") " Sep 5 23:54:48.513841 kubelet[2515]: I0905 23:54:48.511701 2515 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6f8b4002-a581-472e-bded-1c11930cf33b-clustermesh-secrets\") pod \"6f8b4002-a581-472e-bded-1c11930cf33b\" (UID: \"6f8b4002-a581-472e-bded-1c11930cf33b\") " Sep 5 23:54:48.513841 kubelet[2515]: I0905 23:54:48.511731 2515 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6f8b4002-a581-472e-bded-1c11930cf33b-host-proc-sys-net\") pod \"6f8b4002-a581-472e-bded-1c11930cf33b\" (UID: \"6f8b4002-a581-472e-bded-1c11930cf33b\") " Sep 5 23:54:48.513841 kubelet[2515]: I0905 23:54:48.511788 2515 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6f8b4002-a581-472e-bded-1c11930cf33b-cni-path\") pod \"6f8b4002-a581-472e-bded-1c11930cf33b\" (UID: \"6f8b4002-a581-472e-bded-1c11930cf33b\") " Sep 5 23:54:48.514158 kubelet[2515]: I0905 23:54:48.511822 2515 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6f8b4002-a581-472e-bded-1c11930cf33b-etc-cni-netd\") pod \"6f8b4002-a581-472e-bded-1c11930cf33b\" (UID: \"6f8b4002-a581-472e-bded-1c11930cf33b\") " Sep 5 23:54:48.514158 kubelet[2515]: I0905 23:54:48.511854 2515 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ba20845f-1330-4e52-86e4-e4a7212b3432-cilium-config-path\") pod \"ba20845f-1330-4e52-86e4-e4a7212b3432\" (UID: \"ba20845f-1330-4e52-86e4-e4a7212b3432\") " Sep 5 23:54:48.514158 kubelet[2515]: I0905 23:54:48.511890 2515 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/6f8b4002-a581-472e-bded-1c11930cf33b-cilium-cgroup\") pod \"6f8b4002-a581-472e-bded-1c11930cf33b\" (UID: \"6f8b4002-a581-472e-bded-1c11930cf33b\") " Sep 5 23:54:48.514158 kubelet[2515]: I0905 23:54:48.511925 2515 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6f8b4002-a581-472e-bded-1c11930cf33b-bpf-maps\") pod \"6f8b4002-a581-472e-bded-1c11930cf33b\" (UID: \"6f8b4002-a581-472e-bded-1c11930cf33b\") " Sep 5 23:54:48.514158 kubelet[2515]: I0905 23:54:48.512036 2515 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f8b4002-a581-472e-bded-1c11930cf33b-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "6f8b4002-a581-472e-bded-1c11930cf33b" (UID: "6f8b4002-a581-472e-bded-1c11930cf33b"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 5 23:54:48.514158 kubelet[2515]: I0905 23:54:48.512095 2515 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f8b4002-a581-472e-bded-1c11930cf33b-hostproc" (OuterVolumeSpecName: "hostproc") pod "6f8b4002-a581-472e-bded-1c11930cf33b" (UID: "6f8b4002-a581-472e-bded-1c11930cf33b"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 5 23:54:48.514480 kubelet[2515]: I0905 23:54:48.512122 2515 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f8b4002-a581-472e-bded-1c11930cf33b-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "6f8b4002-a581-472e-bded-1c11930cf33b" (UID: "6f8b4002-a581-472e-bded-1c11930cf33b"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 5 23:54:48.514480 kubelet[2515]: I0905 23:54:48.512337 2515 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f8b4002-a581-472e-bded-1c11930cf33b-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "6f8b4002-a581-472e-bded-1c11930cf33b" (UID: "6f8b4002-a581-472e-bded-1c11930cf33b"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 5 23:54:48.514480 kubelet[2515]: I0905 23:54:48.512402 2515 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f8b4002-a581-472e-bded-1c11930cf33b-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "6f8b4002-a581-472e-bded-1c11930cf33b" (UID: "6f8b4002-a581-472e-bded-1c11930cf33b"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 5 23:54:48.519883 kubelet[2515]: I0905 23:54:48.519285 2515 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f8b4002-a581-472e-bded-1c11930cf33b-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "6f8b4002-a581-472e-bded-1c11930cf33b" (UID: "6f8b4002-a581-472e-bded-1c11930cf33b"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 5 23:54:48.519883 kubelet[2515]: I0905 23:54:48.519468 2515 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f8b4002-a581-472e-bded-1c11930cf33b-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "6f8b4002-a581-472e-bded-1c11930cf33b" (UID: "6f8b4002-a581-472e-bded-1c11930cf33b"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 5 23:54:48.519883 kubelet[2515]: I0905 23:54:48.519508 2515 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f8b4002-a581-472e-bded-1c11930cf33b-cni-path" (OuterVolumeSpecName: "cni-path") pod "6f8b4002-a581-472e-bded-1c11930cf33b" (UID: "6f8b4002-a581-472e-bded-1c11930cf33b"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 5 23:54:48.519883 kubelet[2515]: I0905 23:54:48.519563 2515 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f8b4002-a581-472e-bded-1c11930cf33b-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "6f8b4002-a581-472e-bded-1c11930cf33b" (UID: "6f8b4002-a581-472e-bded-1c11930cf33b"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 5 23:54:48.520428 kubelet[2515]: I0905 23:54:48.519771 2515 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f8b4002-a581-472e-bded-1c11930cf33b-kube-api-access-8jdg2" (OuterVolumeSpecName: "kube-api-access-8jdg2") pod "6f8b4002-a581-472e-bded-1c11930cf33b" (UID: "6f8b4002-a581-472e-bded-1c11930cf33b"). InnerVolumeSpecName "kube-api-access-8jdg2". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 5 23:54:48.520671 kubelet[2515]: I0905 23:54:48.520621 2515 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f8b4002-a581-472e-bded-1c11930cf33b-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "6f8b4002-a581-472e-bded-1c11930cf33b" (UID: "6f8b4002-a581-472e-bded-1c11930cf33b"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 5 23:54:48.522471 kubelet[2515]: I0905 23:54:48.522039 2515 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f8b4002-a581-472e-bded-1c11930cf33b-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "6f8b4002-a581-472e-bded-1c11930cf33b" (UID: "6f8b4002-a581-472e-bded-1c11930cf33b"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 5 23:54:48.523909 kubelet[2515]: I0905 23:54:48.523866 2515 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba20845f-1330-4e52-86e4-e4a7212b3432-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "ba20845f-1330-4e52-86e4-e4a7212b3432" (UID: "ba20845f-1330-4e52-86e4-e4a7212b3432"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 5 23:54:48.525125 kubelet[2515]: I0905 23:54:48.525090 2515 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba20845f-1330-4e52-86e4-e4a7212b3432-kube-api-access-chnht" (OuterVolumeSpecName: "kube-api-access-chnht") pod "ba20845f-1330-4e52-86e4-e4a7212b3432" (UID: "ba20845f-1330-4e52-86e4-e4a7212b3432"). InnerVolumeSpecName "kube-api-access-chnht". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 5 23:54:48.525125 kubelet[2515]: I0905 23:54:48.525129 2515 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f8b4002-a581-472e-bded-1c11930cf33b-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "6f8b4002-a581-472e-bded-1c11930cf33b" (UID: "6f8b4002-a581-472e-bded-1c11930cf33b"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 5 23:54:48.525734 kubelet[2515]: I0905 23:54:48.525700 2515 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f8b4002-a581-472e-bded-1c11930cf33b-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "6f8b4002-a581-472e-bded-1c11930cf33b" (UID: "6f8b4002-a581-472e-bded-1c11930cf33b"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 5 23:54:48.613553 kubelet[2515]: I0905 23:54:48.613139 2515 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6f8b4002-a581-472e-bded-1c11930cf33b-cilium-cgroup\") on node \"ci-4081-3-5-n-c970465010\" DevicePath \"\"" Sep 5 23:54:48.613553 kubelet[2515]: I0905 23:54:48.613236 2515 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6f8b4002-a581-472e-bded-1c11930cf33b-bpf-maps\") on node \"ci-4081-3-5-n-c970465010\" DevicePath \"\"" Sep 5 23:54:48.613553 kubelet[2515]: I0905 23:54:48.613261 2515 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6f8b4002-a581-472e-bded-1c11930cf33b-etc-cni-netd\") on node \"ci-4081-3-5-n-c970465010\" DevicePath \"\"" Sep 5 23:54:48.613553 kubelet[2515]: I0905 23:54:48.613285 2515 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ba20845f-1330-4e52-86e4-e4a7212b3432-cilium-config-path\") on node \"ci-4081-3-5-n-c970465010\" DevicePath \"\"" Sep 5 23:54:48.613553 kubelet[2515]: I0905 23:54:48.613305 2515 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6f8b4002-a581-472e-bded-1c11930cf33b-hostproc\") on node \"ci-4081-3-5-n-c970465010\" DevicePath \"\"" Sep 5 23:54:48.613553 kubelet[2515]: I0905 23:54:48.613324 2515 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6f8b4002-a581-472e-bded-1c11930cf33b-cilium-run\") on node \"ci-4081-3-5-n-c970465010\" DevicePath \"\"" Sep 5 23:54:48.613553 kubelet[2515]: I0905 23:54:48.613344 2515 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6f8b4002-a581-472e-bded-1c11930cf33b-host-proc-sys-kernel\") on node \"ci-4081-3-5-n-c970465010\" DevicePath \"\"" Sep 5 23:54:48.613553 kubelet[2515]: I0905 23:54:48.613364 2515 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8jdg2\" (UniqueName: \"kubernetes.io/projected/6f8b4002-a581-472e-bded-1c11930cf33b-kube-api-access-8jdg2\") on node \"ci-4081-3-5-n-c970465010\" DevicePath \"\"" Sep 5 23:54:48.614173 kubelet[2515]: I0905 23:54:48.613385 2515 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6f8b4002-a581-472e-bded-1c11930cf33b-cilium-config-path\") on node \"ci-4081-3-5-n-c970465010\" DevicePath \"\"" Sep 5 23:54:48.614173 kubelet[2515]: I0905 23:54:48.613404 2515 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6f8b4002-a581-472e-bded-1c11930cf33b-hubble-tls\") on node \"ci-4081-3-5-n-c970465010\" DevicePath \"\"" Sep 5 23:54:48.614173 kubelet[2515]: I0905 23:54:48.613421 2515 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-chnht\" (UniqueName: 
\"kubernetes.io/projected/ba20845f-1330-4e52-86e4-e4a7212b3432-kube-api-access-chnht\") on node \"ci-4081-3-5-n-c970465010\" DevicePath \"\"" Sep 5 23:54:48.614173 kubelet[2515]: I0905 23:54:48.613440 2515 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6f8b4002-a581-472e-bded-1c11930cf33b-lib-modules\") on node \"ci-4081-3-5-n-c970465010\" DevicePath \"\"" Sep 5 23:54:48.614173 kubelet[2515]: I0905 23:54:48.613459 2515 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6f8b4002-a581-472e-bded-1c11930cf33b-clustermesh-secrets\") on node \"ci-4081-3-5-n-c970465010\" DevicePath \"\"" Sep 5 23:54:48.614173 kubelet[2515]: I0905 23:54:48.613478 2515 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6f8b4002-a581-472e-bded-1c11930cf33b-xtables-lock\") on node \"ci-4081-3-5-n-c970465010\" DevicePath \"\"" Sep 5 23:54:48.614173 kubelet[2515]: I0905 23:54:48.613498 2515 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6f8b4002-a581-472e-bded-1c11930cf33b-host-proc-sys-net\") on node \"ci-4081-3-5-n-c970465010\" DevicePath \"\"" Sep 5 23:54:48.614173 kubelet[2515]: I0905 23:54:48.613516 2515 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6f8b4002-a581-472e-bded-1c11930cf33b-cni-path\") on node \"ci-4081-3-5-n-c970465010\" DevicePath \"\"" Sep 5 23:54:49.240723 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bc88078acab9915345648d8a24df1ef2dc9a212b6958b7012f3306ed51b7bef7-rootfs.mount: Deactivated successfully. Sep 5 23:54:49.241049 systemd[1]: var-lib-kubelet-pods-ba20845f\x2d1330\x2d4e52\x2d86e4\x2de4a7212b3432-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dchnht.mount: Deactivated successfully. Sep 5 23:54:49.241173 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-94fa0d62ee559fe77a0206f17b71f60c8ecfb4925d8515ad7a1bb10f9f9e2e3b-rootfs.mount: Deactivated successfully. Sep 5 23:54:49.241329 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-94fa0d62ee559fe77a0206f17b71f60c8ecfb4925d8515ad7a1bb10f9f9e2e3b-shm.mount: Deactivated successfully. Sep 5 23:54:49.241433 systemd[1]: var-lib-kubelet-pods-6f8b4002\x2da581\x2d472e\x2dbded\x2d1c11930cf33b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8jdg2.mount: Deactivated successfully. Sep 5 23:54:49.241543 systemd[1]: var-lib-kubelet-pods-6f8b4002\x2da581\x2d472e\x2dbded\x2d1c11930cf33b-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 5 23:54:49.241641 systemd[1]: var-lib-kubelet-pods-6f8b4002\x2da581\x2d472e\x2dbded\x2d1c11930cf33b-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 5 23:54:49.407225 kubelet[2515]: I0905 23:54:49.405615 2515 scope.go:117] "RemoveContainer" containerID="be9aee1e55454a0fdffea97ca6fd2088e945522bf4c9e3243185711927aabcb3" Sep 5 23:54:49.410507 containerd[1481]: time="2025-09-05T23:54:49.410085547Z" level=info msg="RemoveContainer for \"be9aee1e55454a0fdffea97ca6fd2088e945522bf4c9e3243185711927aabcb3\"" Sep 5 23:54:49.415625 systemd[1]: Removed slice kubepods-burstable-pod6f8b4002_a581_472e_bded_1c11930cf33b.slice - libcontainer container kubepods-burstable-pod6f8b4002_a581_472e_bded_1c11930cf33b.slice. 
Sep 5 23:54:49.415717 systemd[1]: kubepods-burstable-pod6f8b4002_a581_472e_bded_1c11930cf33b.slice: Consumed 7.754s CPU time. Sep 5 23:54:49.419563 containerd[1481]: time="2025-09-05T23:54:49.419309645Z" level=info msg="RemoveContainer for \"be9aee1e55454a0fdffea97ca6fd2088e945522bf4c9e3243185711927aabcb3\" returns successfully" Sep 5 23:54:49.419843 kubelet[2515]: I0905 23:54:49.419709 2515 scope.go:117] "RemoveContainer" containerID="fa5490e63f8ed9eb8adce34fb7aa99b9ecda27f8c5f51502dbbcb2b81dbb6d56" Sep 5 23:54:49.420814 systemd[1]: Removed slice kubepods-besteffort-podba20845f_1330_4e52_86e4_e4a7212b3432.slice - libcontainer container kubepods-besteffort-podba20845f_1330_4e52_86e4_e4a7212b3432.slice. Sep 5 23:54:49.422572 containerd[1481]: time="2025-09-05T23:54:49.422446571Z" level=info msg="RemoveContainer for \"fa5490e63f8ed9eb8adce34fb7aa99b9ecda27f8c5f51502dbbcb2b81dbb6d56\"" Sep 5 23:54:49.427104 containerd[1481]: time="2025-09-05T23:54:49.426977500Z" level=info msg="RemoveContainer for \"fa5490e63f8ed9eb8adce34fb7aa99b9ecda27f8c5f51502dbbcb2b81dbb6d56\" returns successfully" Sep 5 23:54:49.427849 kubelet[2515]: I0905 23:54:49.427827 2515 scope.go:117] "RemoveContainer" containerID="3bc6a5a1606c12100120268a9e4a91eabf29db9fbf5eea86b8b30ac5336b51b5" Sep 5 23:54:49.430482 containerd[1481]: time="2025-09-05T23:54:49.430132706Z" level=info msg="RemoveContainer for \"3bc6a5a1606c12100120268a9e4a91eabf29db9fbf5eea86b8b30ac5336b51b5\"" Sep 5 23:54:49.433806 containerd[1481]: time="2025-09-05T23:54:49.433763233Z" level=info msg="RemoveContainer for \"3bc6a5a1606c12100120268a9e4a91eabf29db9fbf5eea86b8b30ac5336b51b5\" returns successfully" Sep 5 23:54:49.434177 kubelet[2515]: I0905 23:54:49.434154 2515 scope.go:117] "RemoveContainer" containerID="008841acabbfd6851640d0dce673bf839a01425395966b1660b24ee4aa6cc78c" Sep 5 23:54:49.439623 containerd[1481]: time="2025-09-05T23:54:49.439200044Z" level=info msg="RemoveContainer for \"008841acabbfd6851640d0dce673bf839a01425395966b1660b24ee4aa6cc78c\"" Sep 5 23:54:49.458352 containerd[1481]: time="2025-09-05T23:54:49.458292441Z" level=info msg="RemoveContainer for \"008841acabbfd6851640d0dce673bf839a01425395966b1660b24ee4aa6cc78c\" returns successfully" Sep 5 23:54:49.458819 kubelet[2515]: I0905 23:54:49.458681 2515 scope.go:117] "RemoveContainer" containerID="06db07c711712bb2667c4c40e28c0845ea047a8ce25576e3e0d6761422c23f79" Sep 5 23:54:49.462152 containerd[1481]: time="2025-09-05T23:54:49.462102808Z" level=info msg="RemoveContainer for \"06db07c711712bb2667c4c40e28c0845ea047a8ce25576e3e0d6761422c23f79\"" Sep 5 23:54:49.465842 containerd[1481]: time="2025-09-05T23:54:49.465727215Z" level=info msg="RemoveContainer for \"06db07c711712bb2667c4c40e28c0845ea047a8ce25576e3e0d6761422c23f79\" returns successfully" Sep 5 23:54:49.466219 kubelet[2515]: I0905 23:54:49.466157 2515 scope.go:117] "RemoveContainer" containerID="25ae5bc2d2af26f9792c21f0d1850158b1e0f27c200e0b50c51f04be1057317c" Sep 5 23:54:49.467575 containerd[1481]: time="2025-09-05T23:54:49.467534979Z" level=info msg="RemoveContainer for \"25ae5bc2d2af26f9792c21f0d1850158b1e0f27c200e0b50c51f04be1057317c\"" Sep 5 23:54:49.470597 containerd[1481]: time="2025-09-05T23:54:49.470496985Z" level=info msg="RemoveContainer for \"25ae5bc2d2af26f9792c21f0d1850158b1e0f27c200e0b50c51f04be1057317c\" returns successfully" Sep 5 23:54:49.796242 kubelet[2515]: I0905 23:54:49.794852 2515 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f8b4002-a581-472e-bded-1c11930cf33b" 
path="/var/lib/kubelet/pods/6f8b4002-a581-472e-bded-1c11930cf33b/volumes" Sep 5 23:54:49.797262 kubelet[2515]: I0905 23:54:49.797181 2515 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba20845f-1330-4e52-86e4-e4a7212b3432" path="/var/lib/kubelet/pods/ba20845f-1330-4e52-86e4-e4a7212b3432/volumes" Sep 5 23:54:49.920238 kubelet[2515]: E0905 23:54:49.920152 2515 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 5 23:54:50.322643 sshd[4100]: pam_unix(sshd:session): session closed for user core Sep 5 23:54:50.328327 systemd-logind[1459]: Session 20 logged out. Waiting for processes to exit. Sep 5 23:54:50.329311 systemd[1]: sshd@19-91.99.146.49:22-139.178.68.195:49768.service: Deactivated successfully. Sep 5 23:54:50.332119 systemd[1]: session-20.scope: Deactivated successfully. Sep 5 23:54:50.332371 systemd[1]: session-20.scope: Consumed 1.090s CPU time. Sep 5 23:54:50.333786 systemd-logind[1459]: Removed session 20. Sep 5 23:54:50.494254 systemd[1]: Started sshd@20-91.99.146.49:22-139.178.68.195:53878.service - OpenSSH per-connection server daemon (139.178.68.195:53878). Sep 5 23:54:51.501381 sshd[4261]: Accepted publickey for core from 139.178.68.195 port 53878 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 5 23:54:51.503815 sshd[4261]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:54:51.509986 systemd-logind[1459]: New session 21 of user core. Sep 5 23:54:51.517569 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 5 23:54:53.189569 kubelet[2515]: E0905 23:54:53.189183 2515 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6f8b4002-a581-472e-bded-1c11930cf33b" containerName="mount-cgroup" Sep 5 23:54:53.189569 kubelet[2515]: E0905 23:54:53.189246 2515 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6f8b4002-a581-472e-bded-1c11930cf33b" containerName="apply-sysctl-overwrites" Sep 5 23:54:53.189569 kubelet[2515]: E0905 23:54:53.189255 2515 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6f8b4002-a581-472e-bded-1c11930cf33b" containerName="mount-bpf-fs" Sep 5 23:54:53.189569 kubelet[2515]: E0905 23:54:53.189263 2515 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6f8b4002-a581-472e-bded-1c11930cf33b" containerName="clean-cilium-state" Sep 5 23:54:53.189569 kubelet[2515]: E0905 23:54:53.189280 2515 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6f8b4002-a581-472e-bded-1c11930cf33b" containerName="cilium-agent" Sep 5 23:54:53.189569 kubelet[2515]: E0905 23:54:53.189289 2515 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ba20845f-1330-4e52-86e4-e4a7212b3432" containerName="cilium-operator" Sep 5 23:54:53.189569 kubelet[2515]: I0905 23:54:53.189326 2515 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f8b4002-a581-472e-bded-1c11930cf33b" containerName="cilium-agent" Sep 5 23:54:53.189569 kubelet[2515]: I0905 23:54:53.189335 2515 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba20845f-1330-4e52-86e4-e4a7212b3432" containerName="cilium-operator" Sep 5 23:54:53.203485 systemd[1]: Created slice kubepods-burstable-podab9569c2_f44b_4ead_9d72_375d4f62c765.slice - libcontainer container kubepods-burstable-podab9569c2_f44b_4ead_9d72_375d4f62c765.slice. 
Sep 5 23:54:53.244239 kubelet[2515]: I0905 23:54:53.244174 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ab9569c2-f44b-4ead-9d72-375d4f62c765-cni-path\") pod \"cilium-s5x8w\" (UID: \"ab9569c2-f44b-4ead-9d72-375d4f62c765\") " pod="kube-system/cilium-s5x8w" Sep 5 23:54:53.244239 kubelet[2515]: I0905 23:54:53.244239 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ab9569c2-f44b-4ead-9d72-375d4f62c765-host-proc-sys-net\") pod \"cilium-s5x8w\" (UID: \"ab9569c2-f44b-4ead-9d72-375d4f62c765\") " pod="kube-system/cilium-s5x8w" Sep 5 23:54:53.244417 kubelet[2515]: I0905 23:54:53.244265 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vvm4\" (UniqueName: \"kubernetes.io/projected/ab9569c2-f44b-4ead-9d72-375d4f62c765-kube-api-access-7vvm4\") pod \"cilium-s5x8w\" (UID: \"ab9569c2-f44b-4ead-9d72-375d4f62c765\") " pod="kube-system/cilium-s5x8w" Sep 5 23:54:53.244417 kubelet[2515]: I0905 23:54:53.244287 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ab9569c2-f44b-4ead-9d72-375d4f62c765-lib-modules\") pod \"cilium-s5x8w\" (UID: \"ab9569c2-f44b-4ead-9d72-375d4f62c765\") " pod="kube-system/cilium-s5x8w" Sep 5 23:54:53.244417 kubelet[2515]: I0905 23:54:53.244306 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ab9569c2-f44b-4ead-9d72-375d4f62c765-cilium-run\") pod \"cilium-s5x8w\" (UID: \"ab9569c2-f44b-4ead-9d72-375d4f62c765\") " pod="kube-system/cilium-s5x8w" Sep 5 23:54:53.244417 kubelet[2515]: I0905 23:54:53.244326 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/ab9569c2-f44b-4ead-9d72-375d4f62c765-cilium-ipsec-secrets\") pod \"cilium-s5x8w\" (UID: \"ab9569c2-f44b-4ead-9d72-375d4f62c765\") " pod="kube-system/cilium-s5x8w" Sep 5 23:54:53.244417 kubelet[2515]: I0905 23:54:53.244345 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ab9569c2-f44b-4ead-9d72-375d4f62c765-etc-cni-netd\") pod \"cilium-s5x8w\" (UID: \"ab9569c2-f44b-4ead-9d72-375d4f62c765\") " pod="kube-system/cilium-s5x8w" Sep 5 23:54:53.244417 kubelet[2515]: I0905 23:54:53.244362 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ab9569c2-f44b-4ead-9d72-375d4f62c765-cilium-config-path\") pod \"cilium-s5x8w\" (UID: \"ab9569c2-f44b-4ead-9d72-375d4f62c765\") " pod="kube-system/cilium-s5x8w" Sep 5 23:54:53.244566 kubelet[2515]: I0905 23:54:53.244378 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ab9569c2-f44b-4ead-9d72-375d4f62c765-hostproc\") pod \"cilium-s5x8w\" (UID: \"ab9569c2-f44b-4ead-9d72-375d4f62c765\") " pod="kube-system/cilium-s5x8w" Sep 5 23:54:53.244566 kubelet[2515]: I0905 23:54:53.244394 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/ab9569c2-f44b-4ead-9d72-375d4f62c765-cilium-cgroup\") pod \"cilium-s5x8w\" (UID: \"ab9569c2-f44b-4ead-9d72-375d4f62c765\") " pod="kube-system/cilium-s5x8w" Sep 5 23:54:53.244566 kubelet[2515]: I0905 23:54:53.244412 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ab9569c2-f44b-4ead-9d72-375d4f62c765-xtables-lock\") pod \"cilium-s5x8w\" (UID: \"ab9569c2-f44b-4ead-9d72-375d4f62c765\") " pod="kube-system/cilium-s5x8w" Sep 5 23:54:53.244566 kubelet[2515]: I0905 23:54:53.244431 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ab9569c2-f44b-4ead-9d72-375d4f62c765-clustermesh-secrets\") pod \"cilium-s5x8w\" (UID: \"ab9569c2-f44b-4ead-9d72-375d4f62c765\") " pod="kube-system/cilium-s5x8w" Sep 5 23:54:53.244566 kubelet[2515]: I0905 23:54:53.244450 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ab9569c2-f44b-4ead-9d72-375d4f62c765-hubble-tls\") pod \"cilium-s5x8w\" (UID: \"ab9569c2-f44b-4ead-9d72-375d4f62c765\") " pod="kube-system/cilium-s5x8w" Sep 5 23:54:53.244566 kubelet[2515]: I0905 23:54:53.244468 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ab9569c2-f44b-4ead-9d72-375d4f62c765-host-proc-sys-kernel\") pod \"cilium-s5x8w\" (UID: \"ab9569c2-f44b-4ead-9d72-375d4f62c765\") " pod="kube-system/cilium-s5x8w" Sep 5 23:54:53.244696 kubelet[2515]: I0905 23:54:53.244484 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ab9569c2-f44b-4ead-9d72-375d4f62c765-bpf-maps\") pod \"cilium-s5x8w\" (UID: \"ab9569c2-f44b-4ead-9d72-375d4f62c765\") " pod="kube-system/cilium-s5x8w" Sep 5 23:54:53.372611 sshd[4261]: pam_unix(sshd:session): session closed for user core Sep 5 23:54:53.382377 systemd[1]: sshd@20-91.99.146.49:22-139.178.68.195:53878.service: Deactivated successfully. Sep 5 23:54:53.385462 systemd[1]: session-21.scope: Deactivated successfully. Sep 5 23:54:53.385966 systemd[1]: session-21.scope: Consumed 1.055s CPU time. Sep 5 23:54:53.387671 systemd-logind[1459]: Session 21 logged out. Waiting for processes to exit. Sep 5 23:54:53.389360 systemd-logind[1459]: Removed session 21. Sep 5 23:54:53.509569 containerd[1481]: time="2025-09-05T23:54:53.509318157Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-s5x8w,Uid:ab9569c2-f44b-4ead-9d72-375d4f62c765,Namespace:kube-system,Attempt:0,}" Sep 5 23:54:53.531922 containerd[1481]: time="2025-09-05T23:54:53.531578600Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 23:54:53.531922 containerd[1481]: time="2025-09-05T23:54:53.531644640Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 23:54:53.531922 containerd[1481]: time="2025-09-05T23:54:53.531663680Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:54:53.531922 containerd[1481]: time="2025-09-05T23:54:53.531772080Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:54:53.550453 systemd[1]: Started sshd@21-91.99.146.49:22-139.178.68.195:53882.service - OpenSSH per-connection server daemon (139.178.68.195:53882). Sep 5 23:54:53.556912 systemd[1]: Started cri-containerd-269d8b42d645f79ee023b3cb67d271b872e39d2e93440c41da8104b31ef253b6.scope - libcontainer container 269d8b42d645f79ee023b3cb67d271b872e39d2e93440c41da8104b31ef253b6. Sep 5 23:54:53.586821 containerd[1481]: time="2025-09-05T23:54:53.586778786Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-s5x8w,Uid:ab9569c2-f44b-4ead-9d72-375d4f62c765,Namespace:kube-system,Attempt:0,} returns sandbox id \"269d8b42d645f79ee023b3cb67d271b872e39d2e93440c41da8104b31ef253b6\"" Sep 5 23:54:53.593492 containerd[1481]: time="2025-09-05T23:54:53.593412319Z" level=info msg="CreateContainer within sandbox \"269d8b42d645f79ee023b3cb67d271b872e39d2e93440c41da8104b31ef253b6\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 5 23:54:53.606272 containerd[1481]: time="2025-09-05T23:54:53.605772942Z" level=info msg="CreateContainer within sandbox \"269d8b42d645f79ee023b3cb67d271b872e39d2e93440c41da8104b31ef253b6\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"46c25deea80b62e34e99b12065ab3f812d4c976a9831573353381b0a18ceafe5\"" Sep 5 23:54:53.607212 containerd[1481]: time="2025-09-05T23:54:53.606546344Z" level=info msg="StartContainer for \"46c25deea80b62e34e99b12065ab3f812d4c976a9831573353381b0a18ceafe5\"" Sep 5 23:54:53.640415 systemd[1]: Started cri-containerd-46c25deea80b62e34e99b12065ab3f812d4c976a9831573353381b0a18ceafe5.scope - libcontainer container 46c25deea80b62e34e99b12065ab3f812d4c976a9831573353381b0a18ceafe5. Sep 5 23:54:53.675847 containerd[1481]: time="2025-09-05T23:54:53.675795277Z" level=info msg="StartContainer for \"46c25deea80b62e34e99b12065ab3f812d4c976a9831573353381b0a18ceafe5\" returns successfully" Sep 5 23:54:53.686407 systemd[1]: cri-containerd-46c25deea80b62e34e99b12065ab3f812d4c976a9831573353381b0a18ceafe5.scope: Deactivated successfully. 
Sep 5 23:54:53.721023 containerd[1481]: time="2025-09-05T23:54:53.720910124Z" level=info msg="shim disconnected" id=46c25deea80b62e34e99b12065ab3f812d4c976a9831573353381b0a18ceafe5 namespace=k8s.io Sep 5 23:54:53.721023 containerd[1481]: time="2025-09-05T23:54:53.721022724Z" level=warning msg="cleaning up after shim disconnected" id=46c25deea80b62e34e99b12065ab3f812d4c976a9831573353381b0a18ceafe5 namespace=k8s.io Sep 5 23:54:53.721279 containerd[1481]: time="2025-09-05T23:54:53.721040364Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 5 23:54:53.777972 kubelet[2515]: I0905 23:54:53.777740 2515 setters.go:600] "Node became not ready" node="ci-4081-3-5-n-c970465010" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-05T23:54:53Z","lastTransitionTime":"2025-09-05T23:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Sep 5 23:54:54.435162 containerd[1481]: time="2025-09-05T23:54:54.435102894Z" level=info msg="CreateContainer within sandbox \"269d8b42d645f79ee023b3cb67d271b872e39d2e93440c41da8104b31ef253b6\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 5 23:54:54.457574 containerd[1481]: time="2025-09-05T23:54:54.457518057Z" level=info msg="CreateContainer within sandbox \"269d8b42d645f79ee023b3cb67d271b872e39d2e93440c41da8104b31ef253b6\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"8dffe5ae43d6dfe302c9317e43c039345ac6e19a26b34264863dd04f786a72eb\"" Sep 5 23:54:54.459398 containerd[1481]: time="2025-09-05T23:54:54.458600219Z" level=info msg="StartContainer for \"8dffe5ae43d6dfe302c9317e43c039345ac6e19a26b34264863dd04f786a72eb\"" Sep 5 23:54:54.495729 systemd[1]: Started cri-containerd-8dffe5ae43d6dfe302c9317e43c039345ac6e19a26b34264863dd04f786a72eb.scope - libcontainer container 8dffe5ae43d6dfe302c9317e43c039345ac6e19a26b34264863dd04f786a72eb. Sep 5 23:54:54.530783 containerd[1481]: time="2025-09-05T23:54:54.530690117Z" level=info msg="StartContainer for \"8dffe5ae43d6dfe302c9317e43c039345ac6e19a26b34264863dd04f786a72eb\" returns successfully" Sep 5 23:54:54.540671 systemd[1]: cri-containerd-8dffe5ae43d6dfe302c9317e43c039345ac6e19a26b34264863dd04f786a72eb.scope: Deactivated successfully. Sep 5 23:54:54.545225 sshd[4304]: Accepted publickey for core from 139.178.68.195 port 53882 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 5 23:54:54.549806 sshd[4304]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:54:54.560357 systemd-logind[1459]: New session 22 of user core. Sep 5 23:54:54.565408 systemd[1]: Started session-22.scope - Session 22 of User core. 
Sep 5 23:54:54.591314 containerd[1481]: time="2025-09-05T23:54:54.591147233Z" level=info msg="shim disconnected" id=8dffe5ae43d6dfe302c9317e43c039345ac6e19a26b34264863dd04f786a72eb namespace=k8s.io Sep 5 23:54:54.591612 containerd[1481]: time="2025-09-05T23:54:54.591326713Z" level=warning msg="cleaning up after shim disconnected" id=8dffe5ae43d6dfe302c9317e43c039345ac6e19a26b34264863dd04f786a72eb namespace=k8s.io Sep 5 23:54:54.591612 containerd[1481]: time="2025-09-05T23:54:54.591351713Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 5 23:54:54.922540 kubelet[2515]: E0905 23:54:54.921861 2515 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 5 23:54:55.235539 sshd[4304]: pam_unix(sshd:session): session closed for user core Sep 5 23:54:55.241162 systemd[1]: sshd@21-91.99.146.49:22-139.178.68.195:53882.service: Deactivated successfully. Sep 5 23:54:55.244782 systemd[1]: session-22.scope: Deactivated successfully. Sep 5 23:54:55.246316 systemd-logind[1459]: Session 22 logged out. Waiting for processes to exit. Sep 5 23:54:55.247518 systemd-logind[1459]: Removed session 22. Sep 5 23:54:55.354279 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8dffe5ae43d6dfe302c9317e43c039345ac6e19a26b34264863dd04f786a72eb-rootfs.mount: Deactivated successfully. Sep 5 23:54:55.440954 containerd[1481]: time="2025-09-05T23:54:55.440905699Z" level=info msg="CreateContainer within sandbox \"269d8b42d645f79ee023b3cb67d271b872e39d2e93440c41da8104b31ef253b6\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 5 23:54:55.442651 systemd[1]: Started sshd@22-91.99.146.49:22-139.178.68.195:53894.service - OpenSSH per-connection server daemon (139.178.68.195:53894). Sep 5 23:54:55.467105 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1544421388.mount: Deactivated successfully. Sep 5 23:54:55.474063 containerd[1481]: time="2025-09-05T23:54:55.473989442Z" level=info msg="CreateContainer within sandbox \"269d8b42d645f79ee023b3cb67d271b872e39d2e93440c41da8104b31ef253b6\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"96774f6b00151ff997f876337fb0dc6bb663f1b326a06e91ea623428c4500863\"" Sep 5 23:54:55.477237 containerd[1481]: time="2025-09-05T23:54:55.475807286Z" level=info msg="StartContainer for \"96774f6b00151ff997f876337fb0dc6bb663f1b326a06e91ea623428c4500863\"" Sep 5 23:54:55.508423 systemd[1]: Started cri-containerd-96774f6b00151ff997f876337fb0dc6bb663f1b326a06e91ea623428c4500863.scope - libcontainer container 96774f6b00151ff997f876337fb0dc6bb663f1b326a06e91ea623428c4500863. Sep 5 23:54:55.541421 containerd[1481]: time="2025-09-05T23:54:55.541379371Z" level=info msg="StartContainer for \"96774f6b00151ff997f876337fb0dc6bb663f1b326a06e91ea623428c4500863\" returns successfully" Sep 5 23:54:55.544799 systemd[1]: cri-containerd-96774f6b00151ff997f876337fb0dc6bb663f1b326a06e91ea623428c4500863.scope: Deactivated successfully. 
Sep 5 23:54:55.574005 containerd[1481]: time="2025-09-05T23:54:55.573780393Z" level=info msg="shim disconnected" id=96774f6b00151ff997f876337fb0dc6bb663f1b326a06e91ea623428c4500863 namespace=k8s.io Sep 5 23:54:55.574005 containerd[1481]: time="2025-09-05T23:54:55.573837553Z" level=warning msg="cleaning up after shim disconnected" id=96774f6b00151ff997f876337fb0dc6bb663f1b326a06e91ea623428c4500863 namespace=k8s.io Sep 5 23:54:55.574005 containerd[1481]: time="2025-09-05T23:54:55.573846233Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 5 23:54:56.354289 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-96774f6b00151ff997f876337fb0dc6bb663f1b326a06e91ea623428c4500863-rootfs.mount: Deactivated successfully. Sep 5 23:54:56.449441 containerd[1481]: time="2025-09-05T23:54:56.449311223Z" level=info msg="CreateContainer within sandbox \"269d8b42d645f79ee023b3cb67d271b872e39d2e93440c41da8104b31ef253b6\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 5 23:54:56.465820 containerd[1481]: time="2025-09-05T23:54:56.465610094Z" level=info msg="CreateContainer within sandbox \"269d8b42d645f79ee023b3cb67d271b872e39d2e93440c41da8104b31ef253b6\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"08f0a6cbf568db59d8b2670d65e8fd4c3f489ae294a635eca6b196a7f1e75b2c\"" Sep 5 23:54:56.466657 containerd[1481]: time="2025-09-05T23:54:56.466625176Z" level=info msg="StartContainer for \"08f0a6cbf568db59d8b2670d65e8fd4c3f489ae294a635eca6b196a7f1e75b2c\"" Sep 5 23:54:56.499631 systemd[1]: Started cri-containerd-08f0a6cbf568db59d8b2670d65e8fd4c3f489ae294a635eca6b196a7f1e75b2c.scope - libcontainer container 08f0a6cbf568db59d8b2670d65e8fd4c3f489ae294a635eca6b196a7f1e75b2c. Sep 5 23:54:56.526800 systemd[1]: cri-containerd-08f0a6cbf568db59d8b2670d65e8fd4c3f489ae294a635eca6b196a7f1e75b2c.scope: Deactivated successfully. Sep 5 23:54:56.528735 containerd[1481]: time="2025-09-05T23:54:56.528646534Z" level=info msg="StartContainer for \"08f0a6cbf568db59d8b2670d65e8fd4c3f489ae294a635eca6b196a7f1e75b2c\" returns successfully" Sep 5 23:54:56.543729 sshd[4449]: Accepted publickey for core from 139.178.68.195 port 53894 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 5 23:54:56.545948 sshd[4449]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:54:56.552585 systemd-logind[1459]: New session 23 of user core. Sep 5 23:54:56.556374 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 5 23:54:56.559295 containerd[1481]: time="2025-09-05T23:54:56.559037712Z" level=info msg="shim disconnected" id=08f0a6cbf568db59d8b2670d65e8fd4c3f489ae294a635eca6b196a7f1e75b2c namespace=k8s.io Sep 5 23:54:56.559295 containerd[1481]: time="2025-09-05T23:54:56.559090992Z" level=warning msg="cleaning up after shim disconnected" id=08f0a6cbf568db59d8b2670d65e8fd4c3f489ae294a635eca6b196a7f1e75b2c namespace=k8s.io Sep 5 23:54:56.559295 containerd[1481]: time="2025-09-05T23:54:56.559099233Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 5 23:54:57.354499 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-08f0a6cbf568db59d8b2670d65e8fd4c3f489ae294a635eca6b196a7f1e75b2c-rootfs.mount: Deactivated successfully. 
Sep 5 23:54:57.454392 containerd[1481]: time="2025-09-05T23:54:57.454305056Z" level=info msg="CreateContainer within sandbox \"269d8b42d645f79ee023b3cb67d271b872e39d2e93440c41da8104b31ef253b6\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 5 23:54:57.472749 containerd[1481]: time="2025-09-05T23:54:57.472474930Z" level=info msg="CreateContainer within sandbox \"269d8b42d645f79ee023b3cb67d271b872e39d2e93440c41da8104b31ef253b6\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"7aa0e69923339d6fed7836f0a42106673c9ccc3df3e9372e96b2202faa1745eb\"" Sep 5 23:54:57.474166 containerd[1481]: time="2025-09-05T23:54:57.474135613Z" level=info msg="StartContainer for \"7aa0e69923339d6fed7836f0a42106673c9ccc3df3e9372e96b2202faa1745eb\"" Sep 5 23:54:57.510491 systemd[1]: Started cri-containerd-7aa0e69923339d6fed7836f0a42106673c9ccc3df3e9372e96b2202faa1745eb.scope - libcontainer container 7aa0e69923339d6fed7836f0a42106673c9ccc3df3e9372e96b2202faa1745eb. Sep 5 23:54:57.543083 containerd[1481]: time="2025-09-05T23:54:57.542859384Z" level=info msg="StartContainer for \"7aa0e69923339d6fed7836f0a42106673c9ccc3df3e9372e96b2202faa1745eb\" returns successfully" Sep 5 23:54:57.847236 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Sep 5 23:54:58.480315 kubelet[2515]: I0905 23:54:58.480257 2515 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-s5x8w" podStartSLOduration=5.480238083 podStartE2EDuration="5.480238083s" podCreationTimestamp="2025-09-05 23:54:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 23:54:58.479424641 +0000 UTC m=+208.809776010" watchObservedRunningTime="2025-09-05 23:54:58.480238083 +0000 UTC m=+208.810589452" Sep 5 23:54:59.304825 systemd[1]: run-containerd-runc-k8s.io-7aa0e69923339d6fed7836f0a42106673c9ccc3df3e9372e96b2202faa1745eb-runc.0ZEQSo.mount: Deactivated successfully. Sep 5 23:55:00.892376 systemd-networkd[1377]: lxc_health: Link UP Sep 5 23:55:00.906907 systemd-networkd[1377]: lxc_health: Gained carrier Sep 5 23:55:02.163341 systemd-networkd[1377]: lxc_health: Gained IPv6LL Sep 5 23:55:08.267941 sshd[4449]: pam_unix(sshd:session): session closed for user core Sep 5 23:55:08.275757 systemd[1]: sshd@22-91.99.146.49:22-139.178.68.195:53894.service: Deactivated successfully. Sep 5 23:55:08.278946 systemd[1]: session-23.scope: Deactivated successfully. Sep 5 23:55:08.283149 systemd-logind[1459]: Session 23 logged out. Waiting for processes to exit. Sep 5 23:55:08.284901 systemd-logind[1459]: Removed session 23. 
Sep 5 23:55:29.810631 containerd[1481]: time="2025-09-05T23:55:29.810528052Z" level=info msg="StopPodSandbox for \"94fa0d62ee559fe77a0206f17b71f60c8ecfb4925d8515ad7a1bb10f9f9e2e3b\""
Sep 5 23:55:29.811717 containerd[1481]: time="2025-09-05T23:55:29.811484974Z" level=info msg="TearDown network for sandbox \"94fa0d62ee559fe77a0206f17b71f60c8ecfb4925d8515ad7a1bb10f9f9e2e3b\" successfully"
Sep 5 23:55:29.811717 containerd[1481]: time="2025-09-05T23:55:29.811514134Z" level=info msg="StopPodSandbox for \"94fa0d62ee559fe77a0206f17b71f60c8ecfb4925d8515ad7a1bb10f9f9e2e3b\" returns successfully"
Sep 5 23:55:29.812743 containerd[1481]: time="2025-09-05T23:55:29.812066495Z" level=info msg="RemovePodSandbox for \"94fa0d62ee559fe77a0206f17b71f60c8ecfb4925d8515ad7a1bb10f9f9e2e3b\""
Sep 5 23:55:29.812743 containerd[1481]: time="2025-09-05T23:55:29.812111735Z" level=info msg="Forcibly stopping sandbox \"94fa0d62ee559fe77a0206f17b71f60c8ecfb4925d8515ad7a1bb10f9f9e2e3b\""
Sep 5 23:55:29.812743 containerd[1481]: time="2025-09-05T23:55:29.812172055Z" level=info msg="TearDown network for sandbox \"94fa0d62ee559fe77a0206f17b71f60c8ecfb4925d8515ad7a1bb10f9f9e2e3b\" successfully"
Sep 5 23:55:29.817963 containerd[1481]: time="2025-09-05T23:55:29.817368024Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"94fa0d62ee559fe77a0206f17b71f60c8ecfb4925d8515ad7a1bb10f9f9e2e3b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Sep 5 23:55:29.817963 containerd[1481]: time="2025-09-05T23:55:29.817532505Z" level=info msg="RemovePodSandbox \"94fa0d62ee559fe77a0206f17b71f60c8ecfb4925d8515ad7a1bb10f9f9e2e3b\" returns successfully"
Sep 5 23:55:29.818304 containerd[1481]: time="2025-09-05T23:55:29.818272306Z" level=info msg="StopPodSandbox for \"bc88078acab9915345648d8a24df1ef2dc9a212b6958b7012f3306ed51b7bef7\""
Sep 5 23:55:29.818400 containerd[1481]: time="2025-09-05T23:55:29.818376506Z" level=info msg="TearDown network for sandbox \"bc88078acab9915345648d8a24df1ef2dc9a212b6958b7012f3306ed51b7bef7\" successfully"
Sep 5 23:55:29.818430 containerd[1481]: time="2025-09-05T23:55:29.818400426Z" level=info msg="StopPodSandbox for \"bc88078acab9915345648d8a24df1ef2dc9a212b6958b7012f3306ed51b7bef7\" returns successfully"
Sep 5 23:55:29.818979 containerd[1481]: time="2025-09-05T23:55:29.818921147Z" level=info msg="RemovePodSandbox for \"bc88078acab9915345648d8a24df1ef2dc9a212b6958b7012f3306ed51b7bef7\""
Sep 5 23:55:29.819059 containerd[1481]: time="2025-09-05T23:55:29.818987427Z" level=info msg="Forcibly stopping sandbox \"bc88078acab9915345648d8a24df1ef2dc9a212b6958b7012f3306ed51b7bef7\""
Sep 5 23:55:29.819093 containerd[1481]: time="2025-09-05T23:55:29.819053307Z" level=info msg="TearDown network for sandbox \"bc88078acab9915345648d8a24df1ef2dc9a212b6958b7012f3306ed51b7bef7\" successfully"
Sep 5 23:55:29.823069 containerd[1481]: time="2025-09-05T23:55:29.823000114Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bc88078acab9915345648d8a24df1ef2dc9a212b6958b7012f3306ed51b7bef7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Sep 5 23:55:29.823069 containerd[1481]: time="2025-09-05T23:55:29.823075675Z" level=info msg="RemovePodSandbox \"bc88078acab9915345648d8a24df1ef2dc9a212b6958b7012f3306ed51b7bef7\" returns successfully"
Sep 5 23:55:39.934292 systemd[1]: cri-containerd-6aa6fdc0c1ae9133fae8df42f43ab36f0f619b10aea6abc007bbf21ba96121b5.scope: Deactivated successfully.
Sep 5 23:55:39.935462 systemd[1]: cri-containerd-6aa6fdc0c1ae9133fae8df42f43ab36f0f619b10aea6abc007bbf21ba96121b5.scope: Consumed 6.677s CPU time, 20.3M memory peak, 0B memory swap peak.
Sep 5 23:55:39.961506 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6aa6fdc0c1ae9133fae8df42f43ab36f0f619b10aea6abc007bbf21ba96121b5-rootfs.mount: Deactivated successfully.
Sep 5 23:55:39.969053 containerd[1481]: time="2025-09-05T23:55:39.968823851Z" level=info msg="shim disconnected" id=6aa6fdc0c1ae9133fae8df42f43ab36f0f619b10aea6abc007bbf21ba96121b5 namespace=k8s.io
Sep 5 23:55:39.969053 containerd[1481]: time="2025-09-05T23:55:39.968876931Z" level=warning msg="cleaning up after shim disconnected" id=6aa6fdc0c1ae9133fae8df42f43ab36f0f619b10aea6abc007bbf21ba96121b5 namespace=k8s.io
Sep 5 23:55:39.969053 containerd[1481]: time="2025-09-05T23:55:39.968884651Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 5 23:55:40.374691 kubelet[2515]: E0905 23:55:40.374124 2515 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:35262->10.0.0.2:2379: read: connection timed out"
Sep 5 23:55:40.381336 systemd[1]: cri-containerd-681b2c9739729385921b4b3b20e6f9919a7eaa1e321124d9734d2395b9a27b82.scope: Deactivated successfully.
Sep 5 23:55:40.382762 systemd[1]: cri-containerd-681b2c9739729385921b4b3b20e6f9919a7eaa1e321124d9734d2395b9a27b82.scope: Consumed 2.140s CPU time, 16.1M memory peak, 0B memory swap peak.
Sep 5 23:55:40.420750 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-681b2c9739729385921b4b3b20e6f9919a7eaa1e321124d9734d2395b9a27b82-rootfs.mount: Deactivated successfully.
Sep 5 23:55:40.424283 containerd[1481]: time="2025-09-05T23:55:40.424216781Z" level=info msg="shim disconnected" id=681b2c9739729385921b4b3b20e6f9919a7eaa1e321124d9734d2395b9a27b82 namespace=k8s.io
Sep 5 23:55:40.424283 containerd[1481]: time="2025-09-05T23:55:40.424273621Z" level=warning msg="cleaning up after shim disconnected" id=681b2c9739729385921b4b3b20e6f9919a7eaa1e321124d9734d2395b9a27b82 namespace=k8s.io
Sep 5 23:55:40.424283 containerd[1481]: time="2025-09-05T23:55:40.424282981Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 5 23:55:40.566159 kubelet[2515]: I0905 23:55:40.566122 2515 scope.go:117] "RemoveContainer" containerID="6aa6fdc0c1ae9133fae8df42f43ab36f0f619b10aea6abc007bbf21ba96121b5"
Sep 5 23:55:40.568867 containerd[1481]: time="2025-09-05T23:55:40.568740918Z" level=info msg="CreateContainer within sandbox \"edc482bc996ba98821af19d571ce546f3d7bca448803e9cbabf555483fdde929\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Sep 5 23:55:40.569101 kubelet[2515]: I0905 23:55:40.569003 2515 scope.go:117] "RemoveContainer" containerID="681b2c9739729385921b4b3b20e6f9919a7eaa1e321124d9734d2395b9a27b82"
Sep 5 23:55:40.571471 containerd[1481]: time="2025-09-05T23:55:40.571348843Z" level=info msg="CreateContainer within sandbox \"bb5b162f858f450792ae88e016f5234ae98471fcab2e11294c0145741b242a6e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Sep 5 23:55:40.596557 containerd[1481]: time="2025-09-05T23:55:40.596433688Z" level=info msg="CreateContainer within sandbox \"edc482bc996ba98821af19d571ce546f3d7bca448803e9cbabf555483fdde929\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"cbb1f9f2087f9ad01455659b99da4fa70629724c916ab00d2280e82b241ade1a\""
Sep 5 23:55:40.597263 containerd[1481]: time="2025-09-05T23:55:40.596952648Z" level=info msg="StartContainer for \"cbb1f9f2087f9ad01455659b99da4fa70629724c916ab00d2280e82b241ade1a\""
Sep 5 23:55:40.603438 containerd[1481]: time="2025-09-05T23:55:40.603259580Z" level=info msg="CreateContainer within sandbox \"bb5b162f858f450792ae88e016f5234ae98471fcab2e11294c0145741b242a6e\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"9f63b07d7aa2004c465aa7cadd48f518c0864d47277930243be4ccce07c39b09\""
Sep 5 23:55:40.604079 containerd[1481]: time="2025-09-05T23:55:40.604024781Z" level=info msg="StartContainer for \"9f63b07d7aa2004c465aa7cadd48f518c0864d47277930243be4ccce07c39b09\""
Sep 5 23:55:40.632417 systemd[1]: Started cri-containerd-cbb1f9f2087f9ad01455659b99da4fa70629724c916ab00d2280e82b241ade1a.scope - libcontainer container cbb1f9f2087f9ad01455659b99da4fa70629724c916ab00d2280e82b241ade1a.
Sep 5 23:55:40.642459 systemd[1]: Started cri-containerd-9f63b07d7aa2004c465aa7cadd48f518c0864d47277930243be4ccce07c39b09.scope - libcontainer container 9f63b07d7aa2004c465aa7cadd48f518c0864d47277930243be4ccce07c39b09.
Sep 5 23:55:40.679799 containerd[1481]: time="2025-09-05T23:55:40.679679436Z" level=info msg="StartContainer for \"cbb1f9f2087f9ad01455659b99da4fa70629724c916ab00d2280e82b241ade1a\" returns successfully"
Sep 5 23:55:40.691284 containerd[1481]: time="2025-09-05T23:55:40.691081936Z" level=info msg="StartContainer for \"9f63b07d7aa2004c465aa7cadd48f518c0864d47277930243be4ccce07c39b09\" returns successfully"
Sep 5 23:55:44.214518 kubelet[2515]: E0905 23:55:44.214070 2515 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:35074->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4081-3-5-n-c970465010.1862882d871e63a1 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4081-3-5-n-c970465010,UID:853f5fc5f0ac2a688cacc3d70fe16159,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4081-3-5-n-c970465010,},FirstTimestamp:2025-09-05 23:55:33.749736353 +0000 UTC m=+244.080087762,LastTimestamp:2025-09-05 23:55:33.749736353 +0000 UTC m=+244.080087762,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-5-n-c970465010,}"