Sep 6 00:17:16.879413 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Sep 6 00:17:16.879439 kernel: Linux version 6.6.103-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Sep 5 22:30:47 -00 2025 Sep 6 00:17:16.879449 kernel: KASLR enabled Sep 6 00:17:16.879454 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II Sep 6 00:17:16.879461 kernel: efi: SMBIOS 3.0=0x139ed0000 MEMATTR=0x1390c1018 ACPI 2.0=0x136760018 RNG=0x13676e918 MEMRESERVE=0x136b43d18 Sep 6 00:17:16.879466 kernel: random: crng init done Sep 6 00:17:16.879473 kernel: ACPI: Early table checksum verification disabled Sep 6 00:17:16.879479 kernel: ACPI: RSDP 0x0000000136760018 000024 (v02 BOCHS ) Sep 6 00:17:16.879485 kernel: ACPI: XSDT 0x000000013676FE98 00006C (v01 BOCHS BXPC 00000001 01000013) Sep 6 00:17:16.879493 kernel: ACPI: FACP 0x000000013676FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Sep 6 00:17:16.879499 kernel: ACPI: DSDT 0x0000000136767518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 6 00:17:16.879505 kernel: ACPI: APIC 0x000000013676FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001) Sep 6 00:17:16.879510 kernel: ACPI: PPTT 0x000000013676FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 6 00:17:16.879517 kernel: ACPI: GTDT 0x000000013676D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 6 00:17:16.879524 kernel: ACPI: MCFG 0x000000013676FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 6 00:17:16.879533 kernel: ACPI: SPCR 0x000000013676E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 6 00:17:16.879539 kernel: ACPI: DBG2 0x000000013676E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Sep 6 00:17:16.879545 kernel: ACPI: IORT 0x000000013676E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Sep 6 00:17:16.879552 kernel: ACPI: BGRT 0x000000013676E798 000038 (v01 INTEL EDK2 00000002 01000013) Sep 6 00:17:16.879558 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600 Sep 6 00:17:16.879564 kernel: NUMA: Failed to initialise from firmware Sep 6 00:17:16.879571 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff] Sep 6 00:17:16.879577 kernel: NUMA: NODE_DATA [mem 0x13966f800-0x139674fff] Sep 6 00:17:16.880488 kernel: Zone ranges: Sep 6 00:17:16.880502 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff] Sep 6 00:17:16.880514 kernel: DMA32 empty Sep 6 00:17:16.880521 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff] Sep 6 00:17:16.880527 kernel: Movable zone start for each node Sep 6 00:17:16.880534 kernel: Early memory node ranges Sep 6 00:17:16.880540 kernel: node 0: [mem 0x0000000040000000-0x000000013676ffff] Sep 6 00:17:16.880547 kernel: node 0: [mem 0x0000000136770000-0x0000000136b3ffff] Sep 6 00:17:16.880553 kernel: node 0: [mem 0x0000000136b40000-0x0000000139e1ffff] Sep 6 00:17:16.880560 kernel: node 0: [mem 0x0000000139e20000-0x0000000139eaffff] Sep 6 00:17:16.880566 kernel: node 0: [mem 0x0000000139eb0000-0x0000000139ebffff] Sep 6 00:17:16.880572 kernel: node 0: [mem 0x0000000139ec0000-0x0000000139fdffff] Sep 6 00:17:16.880579 kernel: node 0: [mem 0x0000000139fe0000-0x0000000139ffffff] Sep 6 00:17:16.880662 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff] Sep 6 00:17:16.880674 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges Sep 6 00:17:16.880681 kernel: psci: probing for conduit method from ACPI. 
Sep 6 00:17:16.880687 kernel: psci: PSCIv1.1 detected in firmware. Sep 6 00:17:16.880696 kernel: psci: Using standard PSCI v0.2 function IDs Sep 6 00:17:16.880703 kernel: psci: Trusted OS migration not required Sep 6 00:17:16.880710 kernel: psci: SMC Calling Convention v1.1 Sep 6 00:17:16.880718 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Sep 6 00:17:16.880725 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976 Sep 6 00:17:16.880732 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096 Sep 6 00:17:16.880739 kernel: pcpu-alloc: [0] 0 [0] 1 Sep 6 00:17:16.880746 kernel: Detected PIPT I-cache on CPU0 Sep 6 00:17:16.880752 kernel: CPU features: detected: GIC system register CPU interface Sep 6 00:17:16.880759 kernel: CPU features: detected: Hardware dirty bit management Sep 6 00:17:16.880766 kernel: CPU features: detected: Spectre-v4 Sep 6 00:17:16.880773 kernel: CPU features: detected: Spectre-BHB Sep 6 00:17:16.880779 kernel: CPU features: kernel page table isolation forced ON by KASLR Sep 6 00:17:16.880788 kernel: CPU features: detected: Kernel page table isolation (KPTI) Sep 6 00:17:16.880794 kernel: CPU features: detected: ARM erratum 1418040 Sep 6 00:17:16.880801 kernel: CPU features: detected: SSBS not fully self-synchronizing Sep 6 00:17:16.880808 kernel: alternatives: applying boot alternatives Sep 6 00:17:16.880816 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=ac831c89fe9ee7829b7371dadfb138f8d0e2b31ae3a5a920e0eba13bbab016c3 Sep 6 00:17:16.880823 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Sep 6 00:17:16.880830 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Sep 6 00:17:16.880837 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 6 00:17:16.880844 kernel: Fallback order for Node 0: 0 Sep 6 00:17:16.880851 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1008000 Sep 6 00:17:16.880857 kernel: Policy zone: Normal Sep 6 00:17:16.880866 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 6 00:17:16.880872 kernel: software IO TLB: area num 2. Sep 6 00:17:16.880884 kernel: software IO TLB: mapped [mem 0x00000000fbfff000-0x00000000fffff000] (64MB) Sep 6 00:17:16.880895 kernel: Memory: 3882808K/4096000K available (10304K kernel code, 2186K rwdata, 8108K rodata, 39424K init, 897K bss, 213192K reserved, 0K cma-reserved) Sep 6 00:17:16.880904 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Sep 6 00:17:16.880912 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 6 00:17:16.880920 kernel: rcu: RCU event tracing is enabled. Sep 6 00:17:16.880927 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Sep 6 00:17:16.880934 kernel: Trampoline variant of Tasks RCU enabled. Sep 6 00:17:16.880941 kernel: Tracing variant of Tasks RCU enabled. Sep 6 00:17:16.880948 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Sep 6 00:17:16.880956 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Sep 6 00:17:16.880963 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Sep 6 00:17:16.880969 kernel: GICv3: 256 SPIs implemented Sep 6 00:17:16.880976 kernel: GICv3: 0 Extended SPIs implemented Sep 6 00:17:16.880983 kernel: Root IRQ handler: gic_handle_irq Sep 6 00:17:16.880989 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Sep 6 00:17:16.880997 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Sep 6 00:17:16.881003 kernel: ITS [mem 0x08080000-0x0809ffff] Sep 6 00:17:16.881010 kernel: ITS@0x0000000008080000: allocated 8192 Devices @1000c0000 (indirect, esz 8, psz 64K, shr 1) Sep 6 00:17:16.881017 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @1000d0000 (flat, esz 8, psz 64K, shr 1) Sep 6 00:17:16.881024 kernel: GICv3: using LPI property table @0x00000001000e0000 Sep 6 00:17:16.881030 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000001000f0000 Sep 6 00:17:16.881039 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Sep 6 00:17:16.881046 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 6 00:17:16.881052 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Sep 6 00:17:16.881059 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Sep 6 00:17:16.881066 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Sep 6 00:17:16.881073 kernel: Console: colour dummy device 80x25 Sep 6 00:17:16.881080 kernel: ACPI: Core revision 20230628 Sep 6 00:17:16.881087 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Sep 6 00:17:16.881094 kernel: pid_max: default: 32768 minimum: 301 Sep 6 00:17:16.881101 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Sep 6 00:17:16.881110 kernel: landlock: Up and running. Sep 6 00:17:16.881117 kernel: SELinux: Initializing. Sep 6 00:17:16.881124 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 6 00:17:16.881131 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 6 00:17:16.881138 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Sep 6 00:17:16.881145 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Sep 6 00:17:16.881152 kernel: rcu: Hierarchical SRCU implementation. Sep 6 00:17:16.881159 kernel: rcu: Max phase no-delay instances is 400. Sep 6 00:17:16.881166 kernel: Platform MSI: ITS@0x8080000 domain created Sep 6 00:17:16.881175 kernel: PCI/MSI: ITS@0x8080000 domain created Sep 6 00:17:16.881182 kernel: Remapping and enabling EFI services. Sep 6 00:17:16.881189 kernel: smp: Bringing up secondary CPUs ... Sep 6 00:17:16.881196 kernel: Detected PIPT I-cache on CPU1 Sep 6 00:17:16.881203 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Sep 6 00:17:16.881210 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100100000 Sep 6 00:17:16.881217 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 6 00:17:16.881225 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Sep 6 00:17:16.881231 kernel: smp: Brought up 1 node, 2 CPUs Sep 6 00:17:16.881238 kernel: SMP: Total of 2 processors activated. 
Sep 6 00:17:16.881247 kernel: CPU features: detected: 32-bit EL0 Support Sep 6 00:17:16.881254 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Sep 6 00:17:16.881281 kernel: CPU features: detected: Common not Private translations Sep 6 00:17:16.881291 kernel: CPU features: detected: CRC32 instructions Sep 6 00:17:16.881298 kernel: CPU features: detected: Enhanced Virtualization Traps Sep 6 00:17:16.881306 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Sep 6 00:17:16.881313 kernel: CPU features: detected: LSE atomic instructions Sep 6 00:17:16.881320 kernel: CPU features: detected: Privileged Access Never Sep 6 00:17:16.881327 kernel: CPU features: detected: RAS Extension Support Sep 6 00:17:16.881337 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Sep 6 00:17:16.881344 kernel: CPU: All CPU(s) started at EL1 Sep 6 00:17:16.881351 kernel: alternatives: applying system-wide alternatives Sep 6 00:17:16.881359 kernel: devtmpfs: initialized Sep 6 00:17:16.881366 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 6 00:17:16.881374 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Sep 6 00:17:16.881381 kernel: pinctrl core: initialized pinctrl subsystem Sep 6 00:17:16.881390 kernel: SMBIOS 3.0.0 present. Sep 6 00:17:16.881397 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017 Sep 6 00:17:16.881404 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 6 00:17:16.881412 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Sep 6 00:17:16.881420 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Sep 6 00:17:16.881428 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Sep 6 00:17:16.881435 kernel: audit: initializing netlink subsys (disabled) Sep 6 00:17:16.881442 kernel: audit: type=2000 audit(0.014:1): state=initialized audit_enabled=0 res=1 Sep 6 00:17:16.881450 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 6 00:17:16.881459 kernel: cpuidle: using governor menu Sep 6 00:17:16.881467 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Sep 6 00:17:16.881474 kernel: ASID allocator initialised with 32768 entries Sep 6 00:17:16.881482 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 6 00:17:16.881490 kernel: Serial: AMBA PL011 UART driver Sep 6 00:17:16.881497 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Sep 6 00:17:16.881505 kernel: Modules: 0 pages in range for non-PLT usage Sep 6 00:17:16.881512 kernel: Modules: 509008 pages in range for PLT usage Sep 6 00:17:16.881520 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Sep 6 00:17:16.881529 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Sep 6 00:17:16.881536 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Sep 6 00:17:16.881544 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Sep 6 00:17:16.881551 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Sep 6 00:17:16.881558 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Sep 6 00:17:16.881565 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Sep 6 00:17:16.881573 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Sep 6 00:17:16.881580 kernel: ACPI: Added _OSI(Module Device) Sep 6 00:17:16.881625 kernel: ACPI: Added _OSI(Processor Device) Sep 6 00:17:16.881635 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 6 00:17:16.881643 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Sep 6 00:17:16.881650 kernel: ACPI: Interpreter enabled Sep 6 00:17:16.881657 kernel: ACPI: Using GIC for interrupt routing Sep 6 00:17:16.881665 kernel: ACPI: MCFG table detected, 1 entries Sep 6 00:17:16.881672 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Sep 6 00:17:16.881679 kernel: printk: console [ttyAMA0] enabled Sep 6 00:17:16.881687 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Sep 6 00:17:16.881878 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Sep 6 00:17:16.881960 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Sep 6 00:17:16.882027 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Sep 6 00:17:16.882091 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Sep 6 00:17:16.882155 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Sep 6 00:17:16.882164 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Sep 6 00:17:16.882172 kernel: PCI host bridge to bus 0000:00 Sep 6 00:17:16.882244 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Sep 6 00:17:16.882320 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Sep 6 00:17:16.882381 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Sep 6 00:17:16.882440 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Sep 6 00:17:16.882523 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 Sep 6 00:17:16.884449 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000 Sep 6 00:17:16.884571 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x11289000-0x11289fff] Sep 6 00:17:16.884675 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000600000-0x8000603fff 64bit pref] Sep 6 00:17:16.884754 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 Sep 6 00:17:16.884823 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x11288000-0x11288fff] Sep 6 00:17:16.884905 kernel: pci 0000:00:02.1: [1b36:000c] 
type 01 class 0x060400 Sep 6 00:17:16.884972 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x11287000-0x11287fff] Sep 6 00:17:16.885046 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 Sep 6 00:17:16.885116 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x11286000-0x11286fff] Sep 6 00:17:16.885190 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 Sep 6 00:17:16.885257 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x11285000-0x11285fff] Sep 6 00:17:16.885358 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 Sep 6 00:17:16.885433 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x11284000-0x11284fff] Sep 6 00:17:16.885508 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 Sep 6 00:17:16.885579 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x11283000-0x11283fff] Sep 6 00:17:16.887799 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 Sep 6 00:17:16.887888 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x11282000-0x11282fff] Sep 6 00:17:16.887966 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 Sep 6 00:17:16.888031 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x11281000-0x11281fff] Sep 6 00:17:16.888104 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400 Sep 6 00:17:16.888171 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x11280000-0x11280fff] Sep 6 00:17:16.888260 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002 Sep 6 00:17:16.888359 kernel: pci 0000:00:04.0: reg 0x10: [io 0x0000-0x0007] Sep 6 00:17:16.888444 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000 Sep 6 00:17:16.888528 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x11000000-0x11000fff] Sep 6 00:17:16.889707 kernel: pci 0000:01:00.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] Sep 6 00:17:16.889797 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref] Sep 6 00:17:16.889883 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330 Sep 6 00:17:16.889953 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x10e00000-0x10e03fff 64bit] Sep 6 00:17:16.890463 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000 Sep 6 00:17:16.890711 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x10c00000-0x10c00fff] Sep 6 00:17:16.890799 kernel: pci 0000:03:00.0: reg 0x20: [mem 0x8000100000-0x8000103fff 64bit pref] Sep 6 00:17:16.890879 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00 Sep 6 00:17:16.890948 kernel: pci 0000:04:00.0: reg 0x20: [mem 0x8000200000-0x8000203fff 64bit pref] Sep 6 00:17:16.891034 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00 Sep 6 00:17:16.891103 kernel: pci 0000:05:00.0: reg 0x20: [mem 0x8000300000-0x8000303fff 64bit pref] Sep 6 00:17:16.891179 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000 Sep 6 00:17:16.891251 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x10600000-0x10600fff] Sep 6 00:17:16.891371 kernel: pci 0000:06:00.0: reg 0x20: [mem 0x8000400000-0x8000403fff 64bit pref] Sep 6 00:17:16.891453 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000 Sep 6 00:17:16.891527 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x10400000-0x10400fff] Sep 6 00:17:16.892069 kernel: pci 0000:07:00.0: reg 0x20: [mem 0x8000500000-0x8000503fff 64bit pref] Sep 6 00:17:16.892154 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref] Sep 6 00:17:16.892227 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 Sep 6 00:17:16.892312 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000 Sep 6 00:17:16.892382 kernel: pci 
0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000 Sep 6 00:17:16.892460 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 Sep 6 00:17:16.892527 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 Sep 6 00:17:16.894746 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000 Sep 6 00:17:16.894853 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Sep 6 00:17:16.894921 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000 Sep 6 00:17:16.894986 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000 Sep 6 00:17:16.895056 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Sep 6 00:17:16.895132 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000 Sep 6 00:17:16.895207 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 Sep 6 00:17:16.895307 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000 Sep 6 00:17:16.895378 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000 Sep 6 00:17:16.895444 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x000fffff] to [bus 05] add_size 200000 add_align 100000 Sep 6 00:17:16.895515 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Sep 6 00:17:16.895581 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000 Sep 6 00:17:16.897878 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000 Sep 6 00:17:16.897965 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Sep 6 00:17:16.898033 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000 Sep 6 00:17:16.898102 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000 Sep 6 00:17:16.898174 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Sep 6 00:17:16.898241 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000 Sep 6 00:17:16.898366 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000 Sep 6 00:17:16.898445 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Sep 6 00:17:16.898513 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000 Sep 6 00:17:16.898596 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000 Sep 6 00:17:16.898678 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x10000000-0x101fffff] Sep 6 00:17:16.898751 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x8000000000-0x80001fffff 64bit pref] Sep 6 00:17:16.898821 kernel: pci 0000:00:02.1: BAR 14: assigned [mem 0x10200000-0x103fffff] Sep 6 00:17:16.898889 kernel: pci 0000:00:02.1: BAR 15: 
assigned [mem 0x8000200000-0x80003fffff 64bit pref] Sep 6 00:17:16.898960 kernel: pci 0000:00:02.2: BAR 14: assigned [mem 0x10400000-0x105fffff] Sep 6 00:17:16.899027 kernel: pci 0000:00:02.2: BAR 15: assigned [mem 0x8000400000-0x80005fffff 64bit pref] Sep 6 00:17:16.899101 kernel: pci 0000:00:02.3: BAR 14: assigned [mem 0x10600000-0x107fffff] Sep 6 00:17:16.899169 kernel: pci 0000:00:02.3: BAR 15: assigned [mem 0x8000600000-0x80007fffff 64bit pref] Sep 6 00:17:16.899238 kernel: pci 0000:00:02.4: BAR 14: assigned [mem 0x10800000-0x109fffff] Sep 6 00:17:16.899337 kernel: pci 0000:00:02.4: BAR 15: assigned [mem 0x8000800000-0x80009fffff 64bit pref] Sep 6 00:17:16.899417 kernel: pci 0000:00:02.5: BAR 14: assigned [mem 0x10a00000-0x10bfffff] Sep 6 00:17:16.899484 kernel: pci 0000:00:02.5: BAR 15: assigned [mem 0x8000a00000-0x8000bfffff 64bit pref] Sep 6 00:17:16.899557 kernel: pci 0000:00:02.6: BAR 14: assigned [mem 0x10c00000-0x10dfffff] Sep 6 00:17:16.900505 kernel: pci 0000:00:02.6: BAR 15: assigned [mem 0x8000c00000-0x8000dfffff 64bit pref] Sep 6 00:17:16.901857 kernel: pci 0000:00:02.7: BAR 14: assigned [mem 0x10e00000-0x10ffffff] Sep 6 00:17:16.901976 kernel: pci 0000:00:02.7: BAR 15: assigned [mem 0x8000e00000-0x8000ffffff 64bit pref] Sep 6 00:17:16.902049 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x11000000-0x111fffff] Sep 6 00:17:16.902115 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x8001000000-0x80011fffff 64bit pref] Sep 6 00:17:16.902186 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8001200000-0x8001203fff 64bit pref] Sep 6 00:17:16.902273 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x11200000-0x11200fff] Sep 6 00:17:16.902349 kernel: pci 0000:00:02.0: BAR 0: assigned [mem 0x11201000-0x11201fff] Sep 6 00:17:16.902417 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff] Sep 6 00:17:16.902486 kernel: pci 0000:00:02.1: BAR 0: assigned [mem 0x11202000-0x11202fff] Sep 6 00:17:16.902552 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff] Sep 6 00:17:16.902631 kernel: pci 0000:00:02.2: BAR 0: assigned [mem 0x11203000-0x11203fff] Sep 6 00:17:16.902700 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff] Sep 6 00:17:16.902767 kernel: pci 0000:00:02.3: BAR 0: assigned [mem 0x11204000-0x11204fff] Sep 6 00:17:16.902837 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff] Sep 6 00:17:16.902906 kernel: pci 0000:00:02.4: BAR 0: assigned [mem 0x11205000-0x11205fff] Sep 6 00:17:16.902972 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff] Sep 6 00:17:16.903042 kernel: pci 0000:00:02.5: BAR 0: assigned [mem 0x11206000-0x11206fff] Sep 6 00:17:16.903108 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff] Sep 6 00:17:16.903175 kernel: pci 0000:00:02.6: BAR 0: assigned [mem 0x11207000-0x11207fff] Sep 6 00:17:16.903242 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff] Sep 6 00:17:16.903321 kernel: pci 0000:00:02.7: BAR 0: assigned [mem 0x11208000-0x11208fff] Sep 6 00:17:16.903393 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff] Sep 6 00:17:16.903467 kernel: pci 0000:00:03.0: BAR 0: assigned [mem 0x11209000-0x11209fff] Sep 6 00:17:16.903533 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x9000-0x9fff] Sep 6 00:17:16.904757 kernel: pci 0000:00:04.0: BAR 0: assigned [io 0xa000-0xa007] Sep 6 00:17:16.904864 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x10000000-0x1007ffff pref] Sep 6 00:17:16.904939 kernel: pci 0000:01:00.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] Sep 6 00:17:16.905012 
kernel: pci 0000:01:00.0: BAR 1: assigned [mem 0x10080000-0x10080fff] Sep 6 00:17:16.905082 kernel: pci 0000:00:02.0: PCI bridge to [bus 01] Sep 6 00:17:16.905159 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff] Sep 6 00:17:16.905226 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff] Sep 6 00:17:16.905337 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref] Sep 6 00:17:16.905418 kernel: pci 0000:02:00.0: BAR 0: assigned [mem 0x10200000-0x10203fff 64bit] Sep 6 00:17:16.905526 kernel: pci 0000:00:02.1: PCI bridge to [bus 02] Sep 6 00:17:16.907142 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff] Sep 6 00:17:16.907246 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff] Sep 6 00:17:16.907380 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref] Sep 6 00:17:16.907474 kernel: pci 0000:03:00.0: BAR 4: assigned [mem 0x8000400000-0x8000403fff 64bit pref] Sep 6 00:17:16.907547 kernel: pci 0000:03:00.0: BAR 1: assigned [mem 0x10400000-0x10400fff] Sep 6 00:17:16.907645 kernel: pci 0000:00:02.2: PCI bridge to [bus 03] Sep 6 00:17:16.907845 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff] Sep 6 00:17:16.907933 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff] Sep 6 00:17:16.908001 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref] Sep 6 00:17:16.908226 kernel: pci 0000:04:00.0: BAR 4: assigned [mem 0x8000600000-0x8000603fff 64bit pref] Sep 6 00:17:16.908335 kernel: pci 0000:00:02.3: PCI bridge to [bus 04] Sep 6 00:17:16.908408 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff] Sep 6 00:17:16.908475 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff] Sep 6 00:17:16.908542 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref] Sep 6 00:17:16.908663 kernel: pci 0000:05:00.0: BAR 4: assigned [mem 0x8000800000-0x8000803fff 64bit pref] Sep 6 00:17:16.908758 kernel: pci 0000:00:02.4: PCI bridge to [bus 05] Sep 6 00:17:16.908829 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff] Sep 6 00:17:16.908895 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff] Sep 6 00:17:16.908961 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref] Sep 6 00:17:16.909036 kernel: pci 0000:06:00.0: BAR 4: assigned [mem 0x8000a00000-0x8000a03fff 64bit pref] Sep 6 00:17:16.909108 kernel: pci 0000:06:00.0: BAR 1: assigned [mem 0x10a00000-0x10a00fff] Sep 6 00:17:16.909183 kernel: pci 0000:00:02.5: PCI bridge to [bus 06] Sep 6 00:17:16.909249 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff] Sep 6 00:17:16.909338 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff] Sep 6 00:17:16.909417 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref] Sep 6 00:17:16.909495 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x10c00000-0x10c7ffff pref] Sep 6 00:17:16.909566 kernel: pci 0000:07:00.0: BAR 4: assigned [mem 0x8000c00000-0x8000c03fff 64bit pref] Sep 6 00:17:16.909679 kernel: pci 0000:07:00.0: BAR 1: assigned [mem 0x10c80000-0x10c80fff] Sep 6 00:17:16.909752 kernel: pci 0000:00:02.6: PCI bridge to [bus 07] Sep 6 00:17:16.909817 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff] Sep 6 00:17:16.909882 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff] Sep 6 00:17:16.909957 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref] Sep 6 00:17:16.910030 kernel: pci 0000:00:02.7: 
PCI bridge to [bus 08] Sep 6 00:17:16.910096 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff] Sep 6 00:17:16.910160 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff] Sep 6 00:17:16.910224 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref] Sep 6 00:17:16.910350 kernel: pci 0000:00:03.0: PCI bridge to [bus 09] Sep 6 00:17:16.910422 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff] Sep 6 00:17:16.910486 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff] Sep 6 00:17:16.910555 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref] Sep 6 00:17:16.910842 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Sep 6 00:17:16.910909 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Sep 6 00:17:16.910967 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Sep 6 00:17:16.911037 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff] Sep 6 00:17:16.911097 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff] Sep 6 00:17:16.911156 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref] Sep 6 00:17:16.911238 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff] Sep 6 00:17:16.911317 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff] Sep 6 00:17:16.911393 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref] Sep 6 00:17:16.911478 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff] Sep 6 00:17:16.911549 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff] Sep 6 00:17:16.911656 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref] Sep 6 00:17:16.911732 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff] Sep 6 00:17:16.911791 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff] Sep 6 00:17:16.911853 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref] Sep 6 00:17:16.911936 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff] Sep 6 00:17:16.912002 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff] Sep 6 00:17:16.912061 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref] Sep 6 00:17:16.912129 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff] Sep 6 00:17:16.912193 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff] Sep 6 00:17:16.912253 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref] Sep 6 00:17:16.912379 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff] Sep 6 00:17:16.912442 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff] Sep 6 00:17:16.912507 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref] Sep 6 00:17:16.912575 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff] Sep 6 00:17:16.912658 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff] Sep 6 00:17:16.912720 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref] Sep 6 00:17:16.912796 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff] Sep 6 00:17:16.912856 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff] Sep 6 00:17:16.912917 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref] Sep 6 00:17:16.912929 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Sep 6 00:17:16.912937 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Sep 6 00:17:16.912945 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Sep 6 
00:17:16.912953 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Sep 6 00:17:16.912962 kernel: iommu: Default domain type: Translated Sep 6 00:17:16.912969 kernel: iommu: DMA domain TLB invalidation policy: strict mode Sep 6 00:17:16.912977 kernel: efivars: Registered efivars operations Sep 6 00:17:16.912985 kernel: vgaarb: loaded Sep 6 00:17:16.912993 kernel: clocksource: Switched to clocksource arch_sys_counter Sep 6 00:17:16.913002 kernel: VFS: Disk quotas dquot_6.6.0 Sep 6 00:17:16.913010 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 6 00:17:16.913018 kernel: pnp: PnP ACPI init Sep 6 00:17:16.913096 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Sep 6 00:17:16.913107 kernel: pnp: PnP ACPI: found 1 devices Sep 6 00:17:16.913115 kernel: NET: Registered PF_INET protocol family Sep 6 00:17:16.913123 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Sep 6 00:17:16.913131 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Sep 6 00:17:16.913142 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 6 00:17:16.913150 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Sep 6 00:17:16.913157 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Sep 6 00:17:16.913166 kernel: TCP: Hash tables configured (established 32768 bind 32768) Sep 6 00:17:16.913173 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 6 00:17:16.913181 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 6 00:17:16.913189 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 6 00:17:16.913279 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002) Sep 6 00:17:16.913292 kernel: PCI: CLS 0 bytes, default 64 Sep 6 00:17:16.913302 kernel: kvm [1]: HYP mode not available Sep 6 00:17:16.913310 kernel: Initialise system trusted keyrings Sep 6 00:17:16.913318 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Sep 6 00:17:16.913326 kernel: Key type asymmetric registered Sep 6 00:17:16.913334 kernel: Asymmetric key parser 'x509' registered Sep 6 00:17:16.913341 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Sep 6 00:17:16.913349 kernel: io scheduler mq-deadline registered Sep 6 00:17:16.913357 kernel: io scheduler kyber registered Sep 6 00:17:16.913364 kernel: io scheduler bfq registered Sep 6 00:17:16.913374 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Sep 6 00:17:16.913483 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50 Sep 6 00:17:16.913556 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50 Sep 6 00:17:16.913710 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 6 00:17:16.913785 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51 Sep 6 00:17:16.913851 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51 Sep 6 00:17:16.913922 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 6 00:17:16.914010 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 52 Sep 6 00:17:16.914081 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52 Sep 6 00:17:16.914146 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- 
LLActRep+ Sep 6 00:17:16.914214 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53 Sep 6 00:17:16.914294 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53 Sep 6 00:17:16.914369 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 6 00:17:16.914438 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54 Sep 6 00:17:16.914502 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54 Sep 6 00:17:16.914568 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 6 00:17:16.916706 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55 Sep 6 00:17:16.916795 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55 Sep 6 00:17:16.916870 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 6 00:17:16.916940 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56 Sep 6 00:17:16.917006 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56 Sep 6 00:17:16.917071 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 6 00:17:16.917164 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57 Sep 6 00:17:16.917243 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57 Sep 6 00:17:16.917372 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 6 00:17:16.917386 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38 Sep 6 00:17:16.917458 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58 Sep 6 00:17:16.917527 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58 Sep 6 00:17:16.917620 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 6 00:17:16.917635 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Sep 6 00:17:16.917644 kernel: ACPI: button: Power Button [PWRB] Sep 6 00:17:16.917652 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Sep 6 00:17:16.917729 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002) Sep 6 00:17:16.917804 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002) Sep 6 00:17:16.917815 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 6 00:17:16.917823 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Sep 6 00:17:16.918026 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001) Sep 6 00:17:16.918042 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A Sep 6 00:17:16.918050 kernel: thunder_xcv, ver 1.0 Sep 6 00:17:16.918058 kernel: thunder_bgx, ver 1.0 Sep 6 00:17:16.918071 kernel: nicpf, ver 1.0 Sep 6 00:17:16.918079 kernel: nicvf, ver 1.0 Sep 6 00:17:16.918166 kernel: rtc-efi rtc-efi.0: registered as rtc0 Sep 6 00:17:16.918232 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-06T00:17:16 UTC (1757117836) Sep 6 00:17:16.918243 kernel: hid: raw HID events driver (C) Jiri Kosina Sep 6 00:17:16.918251 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Sep 6 00:17:16.918258 kernel: watchdog: Delayed init of the lockup detector failed: -19 Sep 6 00:17:16.918276 kernel: watchdog: Hard watchdog permanently disabled Sep 6 00:17:16.918287 kernel: NET: 
Registered PF_INET6 protocol family Sep 6 00:17:16.918295 kernel: Segment Routing with IPv6 Sep 6 00:17:16.918303 kernel: In-situ OAM (IOAM) with IPv6 Sep 6 00:17:16.918310 kernel: NET: Registered PF_PACKET protocol family Sep 6 00:17:16.918318 kernel: Key type dns_resolver registered Sep 6 00:17:16.918326 kernel: registered taskstats version 1 Sep 6 00:17:16.918333 kernel: Loading compiled-in X.509 certificates Sep 6 00:17:16.918341 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.103-flatcar: 5b16e1dfa86dac534548885fd675b87757ff9e20' Sep 6 00:17:16.918349 kernel: Key type .fscrypt registered Sep 6 00:17:16.918358 kernel: Key type fscrypt-provisioning registered Sep 6 00:17:16.918366 kernel: ima: No TPM chip found, activating TPM-bypass! Sep 6 00:17:16.918374 kernel: ima: Allocated hash algorithm: sha1 Sep 6 00:17:16.918382 kernel: ima: No architecture policies found Sep 6 00:17:16.918390 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Sep 6 00:17:16.918398 kernel: clk: Disabling unused clocks Sep 6 00:17:16.918405 kernel: Freeing unused kernel memory: 39424K Sep 6 00:17:16.918413 kernel: Run /init as init process Sep 6 00:17:16.918421 kernel: with arguments: Sep 6 00:17:16.918430 kernel: /init Sep 6 00:17:16.918438 kernel: with environment: Sep 6 00:17:16.918445 kernel: HOME=/ Sep 6 00:17:16.918453 kernel: TERM=linux Sep 6 00:17:16.918460 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 6 00:17:16.918471 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 6 00:17:16.918481 systemd[1]: Detected virtualization kvm. Sep 6 00:17:16.918489 systemd[1]: Detected architecture arm64. Sep 6 00:17:16.918499 systemd[1]: Running in initrd. Sep 6 00:17:16.918507 systemd[1]: No hostname configured, using default hostname. Sep 6 00:17:16.918515 systemd[1]: Hostname set to . Sep 6 00:17:16.918524 systemd[1]: Initializing machine ID from VM UUID. Sep 6 00:17:16.918532 systemd[1]: Queued start job for default target initrd.target. Sep 6 00:17:16.918540 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 6 00:17:16.918549 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 6 00:17:16.918558 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 6 00:17:16.918569 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 6 00:17:16.918577 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 6 00:17:16.918657 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 6 00:17:16.918669 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 6 00:17:16.918677 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 6 00:17:16.918686 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 6 00:17:16.918698 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. 
Sep 6 00:17:16.918708 systemd[1]: Reached target paths.target - Path Units. Sep 6 00:17:16.918716 systemd[1]: Reached target slices.target - Slice Units. Sep 6 00:17:16.918724 systemd[1]: Reached target swap.target - Swaps. Sep 6 00:17:16.918732 systemd[1]: Reached target timers.target - Timer Units. Sep 6 00:17:16.918741 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 6 00:17:16.918749 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 6 00:17:16.918757 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 6 00:17:16.918766 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Sep 6 00:17:16.918776 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 6 00:17:16.918784 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 6 00:17:16.918792 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 6 00:17:16.918800 systemd[1]: Reached target sockets.target - Socket Units. Sep 6 00:17:16.918808 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 6 00:17:16.918817 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 6 00:17:16.918825 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 6 00:17:16.918833 systemd[1]: Starting systemd-fsck-usr.service... Sep 6 00:17:16.918841 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 6 00:17:16.918851 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 6 00:17:16.918859 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 6 00:17:16.918868 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 6 00:17:16.918901 systemd-journald[237]: Collecting audit messages is disabled. Sep 6 00:17:16.918924 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 6 00:17:16.918933 systemd[1]: Finished systemd-fsck-usr.service. Sep 6 00:17:16.918942 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 6 00:17:16.918950 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 6 00:17:16.918960 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 6 00:17:16.918968 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 6 00:17:16.918976 kernel: Bridge firewalling registered Sep 6 00:17:16.918985 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 6 00:17:16.918994 systemd-journald[237]: Journal started Sep 6 00:17:16.919013 systemd-journald[237]: Runtime Journal (/run/log/journal/86b757b1460649d88393f9b9c636c949) is 8.0M, max 76.6M, 68.6M free. Sep 6 00:17:16.878930 systemd-modules-load[238]: Inserted module 'overlay' Sep 6 00:17:16.909111 systemd-modules-load[238]: Inserted module 'br_netfilter' Sep 6 00:17:16.922932 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 6 00:17:16.922982 systemd[1]: Started systemd-journald.service - Journal Service. Sep 6 00:17:16.924983 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 6 00:17:16.934882 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Sep 6 00:17:16.939302 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 6 00:17:16.944160 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 6 00:17:16.953233 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 6 00:17:16.956811 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 6 00:17:16.974566 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 6 00:17:16.979628 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 6 00:17:16.987751 dracut-cmdline[267]: dracut-dracut-053 Sep 6 00:17:16.988668 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 6 00:17:16.989242 dracut-cmdline[267]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=ac831c89fe9ee7829b7371dadfb138f8d0e2b31ae3a5a920e0eba13bbab016c3 Sep 6 00:17:17.018715 systemd-resolved[276]: Positive Trust Anchors: Sep 6 00:17:17.018731 systemd-resolved[276]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 6 00:17:17.018763 systemd-resolved[276]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 6 00:17:17.029083 systemd-resolved[276]: Defaulting to hostname 'linux'. Sep 6 00:17:17.031346 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 6 00:17:17.032113 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 6 00:17:17.061684 kernel: SCSI subsystem initialized Sep 6 00:17:17.066637 kernel: Loading iSCSI transport class v2.0-870. Sep 6 00:17:17.075674 kernel: iscsi: registered transport (tcp) Sep 6 00:17:17.091100 kernel: iscsi: registered transport (qla4xxx) Sep 6 00:17:17.091162 kernel: QLogic iSCSI HBA Driver Sep 6 00:17:17.143468 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 6 00:17:17.149832 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 6 00:17:17.170710 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Sep 6 00:17:17.170839 kernel: device-mapper: uevent: version 1.0.3 Sep 6 00:17:17.170866 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Sep 6 00:17:17.221653 kernel: raid6: neonx8 gen() 15354 MB/s Sep 6 00:17:17.238646 kernel: raid6: neonx4 gen() 15432 MB/s Sep 6 00:17:17.255623 kernel: raid6: neonx2 gen() 13012 MB/s Sep 6 00:17:17.272625 kernel: raid6: neonx1 gen() 10357 MB/s Sep 6 00:17:17.289622 kernel: raid6: int64x8 gen() 6906 MB/s Sep 6 00:17:17.306646 kernel: raid6: int64x4 gen() 7161 MB/s Sep 6 00:17:17.323630 kernel: raid6: int64x2 gen() 6077 MB/s Sep 6 00:17:17.340640 kernel: raid6: int64x1 gen() 4984 MB/s Sep 6 00:17:17.340691 kernel: raid6: using algorithm neonx4 gen() 15432 MB/s Sep 6 00:17:17.357639 kernel: raid6: .... xor() 11982 MB/s, rmw enabled Sep 6 00:17:17.357684 kernel: raid6: using neon recovery algorithm Sep 6 00:17:17.362632 kernel: xor: measuring software checksum speed Sep 6 00:17:17.362683 kernel: 8regs : 19716 MB/sec Sep 6 00:17:17.362705 kernel: 32regs : 17344 MB/sec Sep 6 00:17:17.363619 kernel: arm64_neon : 26848 MB/sec Sep 6 00:17:17.363671 kernel: xor: using function: arm64_neon (26848 MB/sec) Sep 6 00:17:17.414627 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 6 00:17:17.429267 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 6 00:17:17.435989 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 6 00:17:17.452250 systemd-udevd[454]: Using default interface naming scheme 'v255'. Sep 6 00:17:17.455800 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 6 00:17:17.464831 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 6 00:17:17.479597 dracut-pre-trigger[455]: rd.md=0: removing MD RAID activation Sep 6 00:17:17.520670 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 6 00:17:17.525872 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 6 00:17:17.578380 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 6 00:17:17.590363 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 6 00:17:17.612685 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 6 00:17:17.613627 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 6 00:17:17.616394 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 6 00:17:17.617150 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 6 00:17:17.626888 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 6 00:17:17.645556 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 6 00:17:17.668629 kernel: scsi host0: Virtio SCSI HBA Sep 6 00:17:17.679623 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5 Sep 6 00:17:17.679710 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Sep 6 00:17:17.707081 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 6 00:17:17.707200 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 6 00:17:17.710427 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 6 00:17:17.711078 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Sep 6 00:17:17.711239 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 6 00:17:17.712313 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 6 00:17:17.720017 kernel: ACPI: bus type USB registered Sep 6 00:17:17.720081 kernel: usbcore: registered new interface driver usbfs Sep 6 00:17:17.720092 kernel: sr 0:0:0:0: Power-on or device reset occurred Sep 6 00:17:17.720719 kernel: usbcore: registered new interface driver hub Sep 6 00:17:17.721607 kernel: usbcore: registered new device driver usb Sep 6 00:17:17.722873 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 6 00:17:17.724610 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray Sep 6 00:17:17.724811 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Sep 6 00:17:17.727602 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0 Sep 6 00:17:17.746072 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 6 00:17:17.750633 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Sep 6 00:17:17.750816 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Sep 6 00:17:17.754719 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Sep 6 00:17:17.754813 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 6 00:17:17.758618 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Sep 6 00:17:17.758841 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Sep 6 00:17:17.761734 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Sep 6 00:17:17.761938 kernel: hub 1-0:1.0: USB hub found Sep 6 00:17:17.762626 kernel: hub 1-0:1.0: 4 ports detected Sep 6 00:17:17.763748 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Sep 6 00:17:17.764772 kernel: hub 2-0:1.0: USB hub found Sep 6 00:17:17.764925 kernel: hub 2-0:1.0: 4 ports detected Sep 6 00:17:17.773897 kernel: sd 0:0:0:1: Power-on or device reset occurred Sep 6 00:17:17.774092 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB) Sep 6 00:17:17.774185 kernel: sd 0:0:0:1: [sda] Write Protect is off Sep 6 00:17:17.774289 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08 Sep 6 00:17:17.774376 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Sep 6 00:17:17.779628 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 6 00:17:17.779734 kernel: GPT:17805311 != 80003071 Sep 6 00:17:17.779747 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 6 00:17:17.780919 kernel: GPT:17805311 != 80003071 Sep 6 00:17:17.780960 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 6 00:17:17.780976 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 6 00:17:17.782965 kernel: sd 0:0:0:1: [sda] Attached SCSI disk Sep 6 00:17:17.788777 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 6 00:17:17.830310 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Sep 6 00:17:17.837612 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (501) Sep 6 00:17:17.842615 kernel: BTRFS: device fsid 045c118e-b098-46f0-884a-43665575c70e devid 1 transid 37 /dev/sda3 scanned by (udev-worker) (511) Sep 6 00:17:17.850305 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. 
Sep 6 00:17:17.858118 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Sep 6 00:17:17.863217 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Sep 6 00:17:17.866214 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Sep 6 00:17:17.878132 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 6 00:17:17.885267 disk-uuid[574]: Primary Header is updated. Sep 6 00:17:17.885267 disk-uuid[574]: Secondary Entries is updated. Sep 6 00:17:17.885267 disk-uuid[574]: Secondary Header is updated. Sep 6 00:17:17.893676 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 6 00:17:17.900706 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 6 00:17:17.904610 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 6 00:17:18.007630 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Sep 6 00:17:18.143111 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1 Sep 6 00:17:18.143196 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Sep 6 00:17:18.143664 kernel: usbcore: registered new interface driver usbhid Sep 6 00:17:18.143685 kernel: usbhid: USB HID core driver Sep 6 00:17:18.251237 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd Sep 6 00:17:18.383454 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2 Sep 6 00:17:18.432648 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0 Sep 6 00:17:18.910043 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 6 00:17:18.910107 disk-uuid[575]: The operation has completed successfully. Sep 6 00:17:18.963826 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 6 00:17:18.963944 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 6 00:17:18.980901 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 6 00:17:18.986091 sh[593]: Success Sep 6 00:17:19.001624 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Sep 6 00:17:19.076516 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 6 00:17:19.080756 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 6 00:17:19.081418 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 6 00:17:19.112608 kernel: BTRFS info (device dm-0): first mount of filesystem 045c118e-b098-46f0-884a-43665575c70e Sep 6 00:17:19.112697 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Sep 6 00:17:19.112721 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Sep 6 00:17:19.112742 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 6 00:17:19.112762 kernel: BTRFS info (device dm-0): using free space tree Sep 6 00:17:19.119631 kernel: BTRFS info (device dm-0): enabling ssd optimizations Sep 6 00:17:19.121769 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 6 00:17:19.124429 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. 
Sep 6 00:17:19.129866 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 6 00:17:19.136842 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 6 00:17:19.146669 kernel: BTRFS info (device sda6): first mount of filesystem 7395d4d5-ecb1-4acb-b5a4-3e846eddb858 Sep 6 00:17:19.146744 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Sep 6 00:17:19.147621 kernel: BTRFS info (device sda6): using free space tree Sep 6 00:17:19.156637 kernel: BTRFS info (device sda6): enabling ssd optimizations Sep 6 00:17:19.156706 kernel: BTRFS info (device sda6): auto enabling async discard Sep 6 00:17:19.168196 kernel: BTRFS info (device sda6): last unmount of filesystem 7395d4d5-ecb1-4acb-b5a4-3e846eddb858 Sep 6 00:17:19.167509 systemd[1]: mnt-oem.mount: Deactivated successfully. Sep 6 00:17:19.175562 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 6 00:17:19.182956 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 6 00:17:19.283399 ignition[688]: Ignition 2.19.0 Sep 6 00:17:19.283409 ignition[688]: Stage: fetch-offline Sep 6 00:17:19.283447 ignition[688]: no configs at "/usr/lib/ignition/base.d" Sep 6 00:17:19.283456 ignition[688]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Sep 6 00:17:19.283662 ignition[688]: parsed url from cmdline: "" Sep 6 00:17:19.286685 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 6 00:17:19.283666 ignition[688]: no config URL provided Sep 6 00:17:19.283670 ignition[688]: reading system config file "/usr/lib/ignition/user.ign" Sep 6 00:17:19.283684 ignition[688]: no config at "/usr/lib/ignition/user.ign" Sep 6 00:17:19.283690 ignition[688]: failed to fetch config: resource requires networking Sep 6 00:17:19.283882 ignition[688]: Ignition finished successfully Sep 6 00:17:19.296151 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 6 00:17:19.303881 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 6 00:17:19.324205 systemd-networkd[781]: lo: Link UP Sep 6 00:17:19.324219 systemd-networkd[781]: lo: Gained carrier Sep 6 00:17:19.325973 systemd-networkd[781]: Enumeration completed Sep 6 00:17:19.326548 systemd-networkd[781]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 6 00:17:19.326552 systemd-networkd[781]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 6 00:17:19.326636 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 6 00:17:19.328013 systemd-networkd[781]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 6 00:17:19.328018 systemd-networkd[781]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 6 00:17:19.328463 systemd[1]: Reached target network.target - Network. Sep 6 00:17:19.329064 systemd-networkd[781]: eth0: Link UP Sep 6 00:17:19.329068 systemd-networkd[781]: eth0: Gained carrier Sep 6 00:17:19.329077 systemd-networkd[781]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Sep 6 00:17:19.333960 systemd-networkd[781]: eth1: Link UP Sep 6 00:17:19.333964 systemd-networkd[781]: eth1: Gained carrier Sep 6 00:17:19.333975 systemd-networkd[781]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 6 00:17:19.337890 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Sep 6 00:17:19.352033 ignition[783]: Ignition 2.19.0 Sep 6 00:17:19.352043 ignition[783]: Stage: fetch Sep 6 00:17:19.352228 ignition[783]: no configs at "/usr/lib/ignition/base.d" Sep 6 00:17:19.352253 ignition[783]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Sep 6 00:17:19.352356 ignition[783]: parsed url from cmdline: "" Sep 6 00:17:19.352360 ignition[783]: no config URL provided Sep 6 00:17:19.352366 ignition[783]: reading system config file "/usr/lib/ignition/user.ign" Sep 6 00:17:19.352373 ignition[783]: no config at "/usr/lib/ignition/user.ign" Sep 6 00:17:19.352394 ignition[783]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1 Sep 6 00:17:19.353163 ignition[783]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable Sep 6 00:17:19.369700 systemd-networkd[781]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1 Sep 6 00:17:19.401699 systemd-networkd[781]: eth0: DHCPv4 address 91.98.90.164/32, gateway 172.31.1.1 acquired from 172.31.1.1 Sep 6 00:17:19.553765 ignition[783]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2 Sep 6 00:17:19.558755 ignition[783]: GET result: OK Sep 6 00:17:19.558843 ignition[783]: parsing config with SHA512: e1f313f497b7a79792f553267c7e721742315bdd93b739a27d1f94b7061619e7e2ea956595936579b98bf8f6f961b496f36feb9553845ce4a1d2a80ad8f1fb8c Sep 6 00:17:19.564121 unknown[783]: fetched base config from "system" Sep 6 00:17:19.565043 ignition[783]: fetch: fetch complete Sep 6 00:17:19.564139 unknown[783]: fetched base config from "system" Sep 6 00:17:19.565049 ignition[783]: fetch: fetch passed Sep 6 00:17:19.564145 unknown[783]: fetched user config from "hetzner" Sep 6 00:17:19.565108 ignition[783]: Ignition finished successfully Sep 6 00:17:19.569201 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Sep 6 00:17:19.577807 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 6 00:17:19.593089 ignition[790]: Ignition 2.19.0 Sep 6 00:17:19.593100 ignition[790]: Stage: kargs Sep 6 00:17:19.593339 ignition[790]: no configs at "/usr/lib/ignition/base.d" Sep 6 00:17:19.593350 ignition[790]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Sep 6 00:17:19.594518 ignition[790]: kargs: kargs passed Sep 6 00:17:19.594607 ignition[790]: Ignition finished successfully Sep 6 00:17:19.597283 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 6 00:17:19.601821 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Sep 6 00:17:19.622002 ignition[797]: Ignition 2.19.0 Sep 6 00:17:19.622684 ignition[797]: Stage: disks Sep 6 00:17:19.622891 ignition[797]: no configs at "/usr/lib/ignition/base.d" Sep 6 00:17:19.622902 ignition[797]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Sep 6 00:17:19.623942 ignition[797]: disks: disks passed Sep 6 00:17:19.623997 ignition[797]: Ignition finished successfully Sep 6 00:17:19.626890 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 6 00:17:19.628572 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. 
Sep 6 00:17:19.629860 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 6 00:17:19.631338 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 6 00:17:19.631937 systemd[1]: Reached target sysinit.target - System Initialization. Sep 6 00:17:19.633302 systemd[1]: Reached target basic.target - Basic System. Sep 6 00:17:19.640875 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 6 00:17:19.656976 systemd-fsck[805]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Sep 6 00:17:19.661025 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 6 00:17:19.666940 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 6 00:17:19.712994 kernel: EXT4-fs (sda9): mounted filesystem 72e55cb0-8368-4871-a3a0-8637412e72e8 r/w with ordered data mode. Quota mode: none. Sep 6 00:17:19.713600 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 6 00:17:19.714701 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 6 00:17:19.720757 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 6 00:17:19.724396 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 6 00:17:19.726280 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Sep 6 00:17:19.728690 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 6 00:17:19.728726 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 6 00:17:19.737134 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 6 00:17:19.740010 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by mount (813) Sep 6 00:17:19.740044 kernel: BTRFS info (device sda6): first mount of filesystem 7395d4d5-ecb1-4acb-b5a4-3e846eddb858 Sep 6 00:17:19.740054 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Sep 6 00:17:19.741164 kernel: BTRFS info (device sda6): using free space tree Sep 6 00:17:19.745860 kernel: BTRFS info (device sda6): enabling ssd optimizations Sep 6 00:17:19.745923 kernel: BTRFS info (device sda6): auto enabling async discard Sep 6 00:17:19.746166 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 6 00:17:19.751273 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 6 00:17:19.799322 coreos-metadata[815]: Sep 06 00:17:19.799 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1 Sep 6 00:17:19.802016 coreos-metadata[815]: Sep 06 00:17:19.801 INFO Fetch successful Sep 6 00:17:19.805413 initrd-setup-root[840]: cut: /sysroot/etc/passwd: No such file or directory Sep 6 00:17:19.806343 coreos-metadata[815]: Sep 06 00:17:19.805 INFO wrote hostname ci-4081-3-5-n-5ce2877658 to /sysroot/etc/hostname Sep 6 00:17:19.807920 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Sep 6 00:17:19.815838 initrd-setup-root[848]: cut: /sysroot/etc/group: No such file or directory Sep 6 00:17:19.821460 initrd-setup-root[855]: cut: /sysroot/etc/shadow: No such file or directory Sep 6 00:17:19.826385 initrd-setup-root[862]: cut: /sysroot/etc/gshadow: No such file or directory Sep 6 00:17:19.927972 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 6 00:17:19.936838 systemd[1]: Starting ignition-mount.service - Ignition (mount)... 
Sep 6 00:17:19.940790 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 6 00:17:19.949605 kernel: BTRFS info (device sda6): last unmount of filesystem 7395d4d5-ecb1-4acb-b5a4-3e846eddb858 Sep 6 00:17:19.972728 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 6 00:17:19.979656 ignition[930]: INFO : Ignition 2.19.0 Sep 6 00:17:19.981615 ignition[930]: INFO : Stage: mount Sep 6 00:17:19.981615 ignition[930]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 6 00:17:19.981615 ignition[930]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Sep 6 00:17:19.984285 ignition[930]: INFO : mount: mount passed Sep 6 00:17:19.984285 ignition[930]: INFO : Ignition finished successfully Sep 6 00:17:19.984800 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 6 00:17:19.995823 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 6 00:17:20.112994 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 6 00:17:20.122920 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 6 00:17:20.133995 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (941) Sep 6 00:17:20.134073 kernel: BTRFS info (device sda6): first mount of filesystem 7395d4d5-ecb1-4acb-b5a4-3e846eddb858 Sep 6 00:17:20.134096 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Sep 6 00:17:20.134883 kernel: BTRFS info (device sda6): using free space tree Sep 6 00:17:20.137615 kernel: BTRFS info (device sda6): enabling ssd optimizations Sep 6 00:17:20.137663 kernel: BTRFS info (device sda6): auto enabling async discard Sep 6 00:17:20.141860 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 6 00:17:20.173166 ignition[958]: INFO : Ignition 2.19.0 Sep 6 00:17:20.173166 ignition[958]: INFO : Stage: files Sep 6 00:17:20.173166 ignition[958]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 6 00:17:20.173166 ignition[958]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Sep 6 00:17:20.177159 ignition[958]: DEBUG : files: compiled without relabeling support, skipping Sep 6 00:17:20.177159 ignition[958]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 6 00:17:20.177159 ignition[958]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 6 00:17:20.180170 ignition[958]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 6 00:17:20.180170 ignition[958]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 6 00:17:20.182615 ignition[958]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 6 00:17:20.181347 unknown[958]: wrote ssh authorized keys file for user: core Sep 6 00:17:20.184207 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Sep 6 00:17:20.184207 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Sep 6 00:17:20.184207 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Sep 6 00:17:20.184207 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Sep 6 00:17:20.355686 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Sep 6 00:17:20.725637 ignition[958]: 
INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Sep 6 00:17:20.726904 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 6 00:17:20.726904 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Sep 6 00:17:20.937035 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Sep 6 00:17:21.112671 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 6 00:17:21.112671 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh" Sep 6 00:17:21.112671 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh" Sep 6 00:17:21.112671 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 6 00:17:21.112671 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 6 00:17:21.112671 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 6 00:17:21.112671 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 6 00:17:21.112671 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 6 00:17:21.112671 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 6 00:17:21.121259 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 6 00:17:21.121259 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 6 00:17:21.121259 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Sep 6 00:17:21.121259 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Sep 6 00:17:21.121259 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Sep 6 00:17:21.121259 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1 Sep 6 00:17:21.214979 systemd-networkd[781]: eth0: Gained IPv6LL Sep 6 00:17:21.215608 systemd-networkd[781]: eth1: Gained IPv6LL Sep 6 00:17:21.369431 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK Sep 6 00:17:21.546986 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Sep 6 00:17:21.546986 ignition[958]: INFO : files: op(d): 
[started] processing unit "containerd.service" Sep 6 00:17:21.550017 ignition[958]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Sep 6 00:17:21.550017 ignition[958]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Sep 6 00:17:21.550017 ignition[958]: INFO : files: op(d): [finished] processing unit "containerd.service" Sep 6 00:17:21.550017 ignition[958]: INFO : files: op(f): [started] processing unit "prepare-helm.service" Sep 6 00:17:21.557910 ignition[958]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 6 00:17:21.557910 ignition[958]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 6 00:17:21.557910 ignition[958]: INFO : files: op(f): [finished] processing unit "prepare-helm.service" Sep 6 00:17:21.557910 ignition[958]: INFO : files: op(11): [started] processing unit "coreos-metadata.service" Sep 6 00:17:21.557910 ignition[958]: INFO : files: op(11): op(12): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Sep 6 00:17:21.557910 ignition[958]: INFO : files: op(11): op(12): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Sep 6 00:17:21.557910 ignition[958]: INFO : files: op(11): [finished] processing unit "coreos-metadata.service" Sep 6 00:17:21.557910 ignition[958]: INFO : files: op(13): [started] setting preset to enabled for "prepare-helm.service" Sep 6 00:17:21.557910 ignition[958]: INFO : files: op(13): [finished] setting preset to enabled for "prepare-helm.service" Sep 6 00:17:21.557910 ignition[958]: INFO : files: createResultFile: createFiles: op(14): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 6 00:17:21.557910 ignition[958]: INFO : files: createResultFile: createFiles: op(14): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 6 00:17:21.557910 ignition[958]: INFO : files: files passed Sep 6 00:17:21.557910 ignition[958]: INFO : Ignition finished successfully Sep 6 00:17:21.557896 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 6 00:17:21.565844 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 6 00:17:21.570867 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 6 00:17:21.572791 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 6 00:17:21.572891 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 6 00:17:21.586925 initrd-setup-root-after-ignition[987]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 6 00:17:21.586925 initrd-setup-root-after-ignition[987]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 6 00:17:21.590685 initrd-setup-root-after-ignition[991]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 6 00:17:21.592938 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 6 00:17:21.593860 systemd[1]: Reached target ignition-complete.target - Ignition Complete. 
Sep 6 00:17:21.599865 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 6 00:17:21.628114 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 6 00:17:21.628320 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 6 00:17:21.630699 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 6 00:17:21.631801 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 6 00:17:21.632872 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 6 00:17:21.637856 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 6 00:17:21.653576 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 6 00:17:21.660853 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 6 00:17:21.675148 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 6 00:17:21.676683 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 6 00:17:21.678157 systemd[1]: Stopped target timers.target - Timer Units. Sep 6 00:17:21.679348 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 6 00:17:21.680116 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 6 00:17:21.682290 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 6 00:17:21.682951 systemd[1]: Stopped target basic.target - Basic System. Sep 6 00:17:21.684816 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 6 00:17:21.686447 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 6 00:17:21.688272 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 6 00:17:21.689605 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 6 00:17:21.691026 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 6 00:17:21.692265 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 6 00:17:21.693514 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 6 00:17:21.694508 systemd[1]: Stopped target swap.target - Swaps. Sep 6 00:17:21.695425 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 6 00:17:21.695555 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 6 00:17:21.696985 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 6 00:17:21.697662 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 6 00:17:21.698754 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 6 00:17:21.698833 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 6 00:17:21.699953 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 6 00:17:21.700077 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 6 00:17:21.701709 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 6 00:17:21.701835 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 6 00:17:21.703358 systemd[1]: ignition-files.service: Deactivated successfully. Sep 6 00:17:21.703463 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 6 00:17:21.704332 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. 
Sep 6 00:17:21.704430 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Sep 6 00:17:21.713974 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 6 00:17:21.719517 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 6 00:17:21.721714 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 6 00:17:21.721903 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 6 00:17:21.723033 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 6 00:17:21.723228 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 6 00:17:21.734368 ignition[1011]: INFO : Ignition 2.19.0 Sep 6 00:17:21.734368 ignition[1011]: INFO : Stage: umount Sep 6 00:17:21.734368 ignition[1011]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 6 00:17:21.734368 ignition[1011]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Sep 6 00:17:21.737948 ignition[1011]: INFO : umount: umount passed Sep 6 00:17:21.737948 ignition[1011]: INFO : Ignition finished successfully Sep 6 00:17:21.738080 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 6 00:17:21.738173 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 6 00:17:21.741451 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 6 00:17:21.741662 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 6 00:17:21.742552 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 6 00:17:21.742630 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 6 00:17:21.744642 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 6 00:17:21.744705 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 6 00:17:21.747679 systemd[1]: ignition-fetch.service: Deactivated successfully. Sep 6 00:17:21.747754 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Sep 6 00:17:21.749983 systemd[1]: Stopped target network.target - Network. Sep 6 00:17:21.752726 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 6 00:17:21.752819 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 6 00:17:21.760389 systemd[1]: Stopped target paths.target - Path Units. Sep 6 00:17:21.765551 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 6 00:17:21.765691 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 6 00:17:21.769644 systemd[1]: Stopped target slices.target - Slice Units. Sep 6 00:17:21.773952 systemd[1]: Stopped target sockets.target - Socket Units. Sep 6 00:17:21.776387 systemd[1]: iscsid.socket: Deactivated successfully. Sep 6 00:17:21.776437 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 6 00:17:21.777238 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 6 00:17:21.777280 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 6 00:17:21.786030 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 6 00:17:21.786111 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 6 00:17:21.787261 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 6 00:17:21.787353 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 6 00:17:21.799481 systemd[1]: Stopping systemd-networkd.service - Network Configuration... 
Sep 6 00:17:21.801118 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 6 00:17:21.803045 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 6 00:17:21.807677 systemd-networkd[781]: eth0: DHCPv6 lease lost Sep 6 00:17:21.808996 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 6 00:17:21.810642 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 6 00:17:21.811726 systemd-networkd[781]: eth1: DHCPv6 lease lost Sep 6 00:17:21.813402 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 6 00:17:21.813534 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 6 00:17:21.815678 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 6 00:17:21.815801 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 6 00:17:21.817509 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 6 00:17:21.817564 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 6 00:17:21.818552 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 6 00:17:21.818631 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 6 00:17:21.827919 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 6 00:17:21.828999 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 6 00:17:21.829115 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 6 00:17:21.831568 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 6 00:17:21.831651 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 6 00:17:21.832405 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 6 00:17:21.832453 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 6 00:17:21.833170 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 6 00:17:21.833258 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 6 00:17:21.834821 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 6 00:17:21.849563 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 6 00:17:21.850866 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 6 00:17:21.853273 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 6 00:17:21.853424 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 6 00:17:21.855503 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 6 00:17:21.855605 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 6 00:17:21.856279 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 6 00:17:21.856316 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 6 00:17:21.857219 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 6 00:17:21.857276 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 6 00:17:21.858835 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 6 00:17:21.858880 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 6 00:17:21.860215 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 6 00:17:21.860262 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Sep 6 00:17:21.872809 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 6 00:17:21.873564 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 6 00:17:21.873674 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 6 00:17:21.877386 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Sep 6 00:17:21.877455 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 6 00:17:21.880770 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 6 00:17:21.880829 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 6 00:17:21.882280 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 6 00:17:21.882321 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 6 00:17:21.885768 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 6 00:17:21.885865 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 6 00:17:21.887286 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 6 00:17:21.898950 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 6 00:17:21.916726 systemd[1]: Switching root. Sep 6 00:17:21.955564 systemd-journald[237]: Journal stopped Sep 6 00:17:22.869655 systemd-journald[237]: Received SIGTERM from PID 1 (systemd). Sep 6 00:17:22.870710 kernel: SELinux: policy capability network_peer_controls=1 Sep 6 00:17:22.870741 kernel: SELinux: policy capability open_perms=1 Sep 6 00:17:22.870751 kernel: SELinux: policy capability extended_socket_class=1 Sep 6 00:17:22.870767 kernel: SELinux: policy capability always_check_network=0 Sep 6 00:17:22.870777 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 6 00:17:22.870787 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 6 00:17:22.870802 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 6 00:17:22.870811 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 6 00:17:22.870821 kernel: audit: type=1403 audit(1757117842.132:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 6 00:17:22.870833 systemd[1]: Successfully loaded SELinux policy in 34.556ms. Sep 6 00:17:22.870858 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.046ms. Sep 6 00:17:22.870870 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 6 00:17:22.870885 systemd[1]: Detected virtualization kvm. Sep 6 00:17:22.870895 systemd[1]: Detected architecture arm64. Sep 6 00:17:22.870906 systemd[1]: Detected first boot. Sep 6 00:17:22.870916 systemd[1]: Hostname set to . Sep 6 00:17:22.870926 systemd[1]: Initializing machine ID from VM UUID. Sep 6 00:17:22.870936 zram_generator::config[1074]: No configuration found. Sep 6 00:17:22.870949 systemd[1]: Populated /etc with preset unit settings. Sep 6 00:17:22.870960 systemd[1]: Queued start job for default target multi-user.target. Sep 6 00:17:22.870970 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Sep 6 00:17:22.870982 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. 
Sep 6 00:17:22.870992 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 6 00:17:22.871002 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 6 00:17:22.871012 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 6 00:17:22.871022 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 6 00:17:22.871035 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 6 00:17:22.871045 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 6 00:17:22.871055 systemd[1]: Created slice user.slice - User and Session Slice. Sep 6 00:17:22.871065 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 6 00:17:22.871076 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 6 00:17:22.871086 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 6 00:17:22.871096 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 6 00:17:22.871108 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 6 00:17:22.871118 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 6 00:17:22.871130 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Sep 6 00:17:22.871141 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 6 00:17:22.871151 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 6 00:17:22.871161 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 6 00:17:22.871172 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 6 00:17:22.871182 systemd[1]: Reached target slices.target - Slice Units. Sep 6 00:17:22.871208 systemd[1]: Reached target swap.target - Swaps. Sep 6 00:17:22.871225 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 6 00:17:22.871235 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 6 00:17:22.871245 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 6 00:17:22.871255 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Sep 6 00:17:22.871266 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 6 00:17:22.871276 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 6 00:17:22.871287 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 6 00:17:22.871297 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 6 00:17:22.871307 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 6 00:17:22.871319 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 6 00:17:22.871333 systemd[1]: Mounting media.mount - External Media Directory... Sep 6 00:17:22.871345 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 6 00:17:22.871358 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 6 00:17:22.871369 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 6 00:17:22.871379 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... 
Sep 6 00:17:22.871391 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 6 00:17:22.871403 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 6 00:17:22.871413 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 6 00:17:22.871424 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 6 00:17:22.871434 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 6 00:17:22.871444 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 6 00:17:22.871454 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 6 00:17:22.871464 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 6 00:17:22.871477 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 6 00:17:22.871487 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Sep 6 00:17:22.871498 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Sep 6 00:17:22.871508 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 6 00:17:22.871518 kernel: loop: module loaded Sep 6 00:17:22.871528 kernel: ACPI: bus type drm_connector registered Sep 6 00:17:22.871537 kernel: fuse: init (API version 7.39) Sep 6 00:17:22.871548 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 6 00:17:22.871560 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 6 00:17:22.871623 systemd-journald[1156]: Collecting audit messages is disabled. Sep 6 00:17:22.871656 systemd-journald[1156]: Journal started Sep 6 00:17:22.871680 systemd-journald[1156]: Runtime Journal (/run/log/journal/86b757b1460649d88393f9b9c636c949) is 8.0M, max 76.6M, 68.6M free. Sep 6 00:17:22.871727 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 6 00:17:22.878876 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 6 00:17:22.884430 systemd[1]: Started systemd-journald.service - Journal Service. Sep 6 00:17:22.883765 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 6 00:17:22.884926 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 6 00:17:22.885725 systemd[1]: Mounted media.mount - External Media Directory. Sep 6 00:17:22.886452 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 6 00:17:22.887447 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 6 00:17:22.890360 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 6 00:17:22.893112 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 6 00:17:22.894092 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 6 00:17:22.894311 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 6 00:17:22.895247 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 6 00:17:22.895394 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 6 00:17:22.896376 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Sep 6 00:17:22.896521 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 6 00:17:22.897711 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 6 00:17:22.897855 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 6 00:17:22.898971 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 6 00:17:22.899114 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 6 00:17:22.900348 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 6 00:17:22.902879 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 6 00:17:22.903978 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 6 00:17:22.906237 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 6 00:17:22.908507 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 6 00:17:22.909574 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 6 00:17:22.922532 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 6 00:17:22.928819 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 6 00:17:22.933745 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 6 00:17:22.934405 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 6 00:17:22.944862 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 6 00:17:22.949180 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 6 00:17:22.950599 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 6 00:17:22.955817 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 6 00:17:22.960296 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 6 00:17:22.964109 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 6 00:17:22.978853 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 6 00:17:22.985648 systemd-journald[1156]: Time spent on flushing to /var/log/journal/86b757b1460649d88393f9b9c636c949 is 48.993ms for 1116 entries. Sep 6 00:17:22.985648 systemd-journald[1156]: System Journal (/var/log/journal/86b757b1460649d88393f9b9c636c949) is 8.0M, max 584.8M, 576.8M free. Sep 6 00:17:23.039856 systemd-journald[1156]: Received client request to flush runtime journal. Sep 6 00:17:22.987385 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 6 00:17:22.990724 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 6 00:17:23.013234 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 6 00:17:23.014746 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 6 00:17:23.036121 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 6 00:17:23.052132 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 6 00:17:23.060718 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. 
Sep 6 00:17:23.063454 systemd-tmpfiles[1209]: ACLs are not supported, ignoring. Sep 6 00:17:23.063811 systemd-tmpfiles[1209]: ACLs are not supported, ignoring. Sep 6 00:17:23.073430 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Sep 6 00:17:23.077179 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 6 00:17:23.080794 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 6 00:17:23.098643 udevadm[1224]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Sep 6 00:17:23.119168 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 6 00:17:23.126937 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 6 00:17:23.142009 systemd-tmpfiles[1231]: ACLs are not supported, ignoring. Sep 6 00:17:23.142350 systemd-tmpfiles[1231]: ACLs are not supported, ignoring. Sep 6 00:17:23.149105 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 6 00:17:23.527257 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 6 00:17:23.537891 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 6 00:17:23.560044 systemd-udevd[1237]: Using default interface naming scheme 'v255'. Sep 6 00:17:23.580268 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 6 00:17:23.596407 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 6 00:17:23.632575 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 6 00:17:23.681508 systemd[1]: Found device dev-ttyAMA0.device - /dev/ttyAMA0. Sep 6 00:17:23.732575 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 6 00:17:23.817948 systemd-networkd[1245]: lo: Link UP Sep 6 00:17:23.818314 systemd-networkd[1245]: lo: Gained carrier Sep 6 00:17:23.821864 systemd-networkd[1245]: Enumeration completed Sep 6 00:17:23.822069 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 6 00:17:23.824913 systemd-networkd[1245]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 6 00:17:23.824920 systemd-networkd[1245]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 6 00:17:23.829771 systemd-networkd[1245]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 6 00:17:23.830925 systemd-networkd[1245]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 6 00:17:23.832397 systemd-networkd[1245]: eth0: Link UP Sep 6 00:17:23.832488 systemd-networkd[1245]: eth0: Gained carrier Sep 6 00:17:23.837794 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 6 00:17:23.840632 systemd-networkd[1245]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Sep 6 00:17:23.852252 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1244) Sep 6 00:17:23.852323 kernel: mousedev: PS/2 mouse device common for all mice Sep 6 00:17:23.852840 systemd-networkd[1245]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 6 00:17:23.853640 systemd-networkd[1245]: eth1: Link UP Sep 6 00:17:23.853646 systemd-networkd[1245]: eth1: Gained carrier Sep 6 00:17:23.853661 systemd-networkd[1245]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 6 00:17:23.869291 systemd[1]: Condition check resulted in dev-vport2p1.device - /dev/vport2p1 being skipped. Sep 6 00:17:23.869977 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. Sep 6 00:17:23.870286 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 6 00:17:23.877750 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 6 00:17:23.882896 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 6 00:17:23.890993 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 6 00:17:23.893688 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 6 00:17:23.893736 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 6 00:17:23.908336 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 6 00:17:23.908785 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 6 00:17:23.910817 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 6 00:17:23.910987 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 6 00:17:23.913042 systemd-networkd[1245]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1 Sep 6 00:17:23.923083 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 6 00:17:23.923793 systemd-networkd[1245]: eth0: DHCPv4 address 91.98.90.164/32, gateway 172.31.1.1 acquired from 172.31.1.1 Sep 6 00:17:23.924005 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 6 00:17:23.955600 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0 Sep 6 00:17:23.955703 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Sep 6 00:17:23.955718 kernel: [drm] features: -context_init Sep 6 00:17:23.955729 kernel: [drm] number of scanouts: 1 Sep 6 00:17:23.955740 kernel: [drm] number of cap sets: 0 Sep 6 00:17:23.959397 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 6 00:17:23.961255 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 6 00:17:23.966609 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0 Sep 6 00:17:23.970010 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Sep 6 00:17:23.978603 kernel: Console: switching to colour frame buffer device 160x50 Sep 6 00:17:23.982432 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Sep 6 00:17:23.987647 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Sep 6 00:17:23.998464 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 6 00:17:23.998853 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 6 00:17:24.005076 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 6 00:17:24.067705 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 6 00:17:24.104399 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Sep 6 00:17:24.114910 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Sep 6 00:17:24.128638 lvm[1309]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 6 00:17:24.153920 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Sep 6 00:17:24.155823 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 6 00:17:24.167905 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Sep 6 00:17:24.172503 lvm[1312]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 6 00:17:24.198209 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Sep 6 00:17:24.200626 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 6 00:17:24.202410 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 6 00:17:24.202466 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 6 00:17:24.204096 systemd[1]: Reached target machines.target - Containers. Sep 6 00:17:24.205790 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Sep 6 00:17:24.211890 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 6 00:17:24.215793 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 6 00:17:24.217860 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 6 00:17:24.221039 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 6 00:17:24.226804 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Sep 6 00:17:24.242126 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 6 00:17:24.245994 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 6 00:17:24.262503 kernel: loop0: detected capacity change from 0 to 203944 Sep 6 00:17:24.267617 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 6 00:17:24.268421 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Sep 6 00:17:24.273440 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. 
Sep 6 00:17:24.290617 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 6 00:17:24.326079 kernel: loop1: detected capacity change from 0 to 8 Sep 6 00:17:24.352781 kernel: loop2: detected capacity change from 0 to 114432 Sep 6 00:17:24.388069 kernel: loop3: detected capacity change from 0 to 114328 Sep 6 00:17:24.426641 kernel: loop4: detected capacity change from 0 to 203944 Sep 6 00:17:24.441624 kernel: loop5: detected capacity change from 0 to 8 Sep 6 00:17:24.442725 kernel: loop6: detected capacity change from 0 to 114432 Sep 6 00:17:24.458639 kernel: loop7: detected capacity change from 0 to 114328 Sep 6 00:17:24.467944 (sd-merge)[1334]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'. Sep 6 00:17:24.468446 (sd-merge)[1334]: Merged extensions into '/usr'. Sep 6 00:17:24.473785 systemd[1]: Reloading requested from client PID 1320 ('systemd-sysext') (unit systemd-sysext.service)... Sep 6 00:17:24.473799 systemd[1]: Reloading... Sep 6 00:17:24.561945 zram_generator::config[1362]: No configuration found. Sep 6 00:17:24.671662 ldconfig[1316]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 6 00:17:24.691715 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 6 00:17:24.751425 systemd[1]: Reloading finished in 277 ms. Sep 6 00:17:24.768517 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 6 00:17:24.769902 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 6 00:17:24.780914 systemd[1]: Starting ensure-sysext.service... Sep 6 00:17:24.785853 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 6 00:17:24.793734 systemd[1]: Reloading requested from client PID 1406 ('systemctl') (unit ensure-sysext.service)... Sep 6 00:17:24.793756 systemd[1]: Reloading... Sep 6 00:17:24.824386 systemd-tmpfiles[1407]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 6 00:17:24.825098 systemd-tmpfiles[1407]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 6 00:17:24.827183 systemd-tmpfiles[1407]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 6 00:17:24.827633 systemd-tmpfiles[1407]: ACLs are not supported, ignoring. Sep 6 00:17:24.827772 systemd-tmpfiles[1407]: ACLs are not supported, ignoring. Sep 6 00:17:24.831990 systemd-tmpfiles[1407]: Detected autofs mount point /boot during canonicalization of boot. Sep 6 00:17:24.832145 systemd-tmpfiles[1407]: Skipping /boot Sep 6 00:17:24.844080 systemd-tmpfiles[1407]: Detected autofs mount point /boot during canonicalization of boot. Sep 6 00:17:24.844664 systemd-tmpfiles[1407]: Skipping /boot Sep 6 00:17:24.890739 zram_generator::config[1435]: No configuration found. Sep 6 00:17:25.007439 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 6 00:17:25.073997 systemd[1]: Reloading finished in 279 ms. Sep 6 00:17:25.098299 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
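[Editor's note, illustrative aside] The sd-merge lines above show systemd-sysext overlaying four extension images (containerd-flatcar, docker-flatcar, kubernetes, oem-hetzner) onto /usr. A hedged sketch for inspecting the same state on a running host; it assumes the systemd-sysext CLI is present and that images live in the conventional search directories, neither of which is shown explicitly in the log:

    import subprocess

    # Show which hierarchies systemd-sysext has merged and the images backing them.
    subprocess.run(["systemd-sysext", "status"], check=True)

    # Extension images are normally picked up from these directories.
    for d in ("/etc/extensions", "/run/extensions", "/var/lib/extensions"):
        subprocess.run(["ls", "-l", d], check=False)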
Sep 6 00:17:25.114540 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 6 00:17:25.117854 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 6 00:17:25.124539 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 6 00:17:25.129899 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 6 00:17:25.144240 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 6 00:17:25.156925 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 6 00:17:25.170846 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 6 00:17:25.175830 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 6 00:17:25.187020 augenrules[1503]: No rules Sep 6 00:17:25.187520 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 6 00:17:25.190910 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 6 00:17:25.192340 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 6 00:17:25.194886 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 6 00:17:25.198830 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 6 00:17:25.199175 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 6 00:17:25.201290 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 6 00:17:25.201460 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 6 00:17:25.208267 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 6 00:17:25.210856 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 6 00:17:25.217807 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 6 00:17:25.226278 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 6 00:17:25.239051 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 6 00:17:25.245392 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 6 00:17:25.248906 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 6 00:17:25.260884 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 6 00:17:25.267200 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 6 00:17:25.271225 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 6 00:17:25.271406 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 6 00:17:25.275377 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 6 00:17:25.275554 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 6 00:17:25.276928 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 6 00:17:25.277853 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 6 00:17:25.281293 systemd[1]: Finished systemd-update-done.service - Update is Completed. 
Sep 6 00:17:25.287441 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 6 00:17:25.289039 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 6 00:17:25.289389 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 6 00:17:25.294299 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 6 00:17:25.301310 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 6 00:17:25.307058 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 6 00:17:25.313114 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 6 00:17:25.321312 systemd-resolved[1485]: Positive Trust Anchors: Sep 6 00:17:25.321335 systemd-resolved[1485]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 6 00:17:25.321368 systemd-resolved[1485]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 6 00:17:25.324810 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 6 00:17:25.327551 systemd-resolved[1485]: Using system hostname 'ci-4081-3-5-n-5ce2877658'. Sep 6 00:17:25.335944 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 6 00:17:25.337056 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 6 00:17:25.337252 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 6 00:17:25.338232 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 6 00:17:25.339705 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 6 00:17:25.339915 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 6 00:17:25.340943 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 6 00:17:25.341106 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 6 00:17:25.342299 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 6 00:17:25.342461 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 6 00:17:25.343856 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 6 00:17:25.344107 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 6 00:17:25.350523 systemd[1]: Finished ensure-sysext.service. Sep 6 00:17:25.352773 systemd[1]: Reached target network.target - Network. Sep 6 00:17:25.353340 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
Sep 6 00:17:25.354035 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 6 00:17:25.354092 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 6 00:17:25.358781 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 6 00:17:25.418102 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 6 00:17:25.421130 systemd[1]: Reached target sysinit.target - System Initialization. Sep 6 00:17:25.422009 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 6 00:17:25.422787 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 6 00:17:25.423570 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 6 00:17:25.424299 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 6 00:17:25.424337 systemd[1]: Reached target paths.target - Path Units. Sep 6 00:17:25.424947 systemd[1]: Reached target time-set.target - System Time Set. Sep 6 00:17:25.425838 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 6 00:17:25.426534 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 6 00:17:25.427336 systemd[1]: Reached target timers.target - Timer Units. Sep 6 00:17:25.428877 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 6 00:17:25.431043 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 6 00:17:25.433252 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 6 00:17:25.437291 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 6 00:17:25.438816 systemd[1]: Reached target sockets.target - Socket Units. Sep 6 00:17:25.440000 systemd[1]: Reached target basic.target - Basic System. Sep 6 00:17:25.441659 systemd[1]: System is tainted: cgroupsv1 Sep 6 00:17:25.441749 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 6 00:17:25.441801 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 6 00:17:25.444117 systemd[1]: Starting containerd.service - containerd container runtime... Sep 6 00:17:25.851587 systemd-resolved[1485]: Clock change detected. Flushing caches. Sep 6 00:17:25.851697 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Sep 6 00:17:25.851895 systemd-timesyncd[1552]: Contacted time server 185.252.140.126:123 (0.flatcar.pool.ntp.org). Sep 6 00:17:25.851950 systemd-timesyncd[1552]: Initial clock synchronization to Sat 2025-09-06 00:17:25.851545 UTC. Sep 6 00:17:25.855590 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 6 00:17:25.859757 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 6 00:17:25.871655 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 6 00:17:25.873611 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 6 00:17:25.881658 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... 
Sep 6 00:17:25.893552 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 6 00:17:25.901399 jq[1560]: false Sep 6 00:17:25.904625 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. Sep 6 00:17:25.904918 extend-filesystems[1563]: Found loop4 Sep 6 00:17:25.904918 extend-filesystems[1563]: Found loop5 Sep 6 00:17:25.904918 extend-filesystems[1563]: Found loop6 Sep 6 00:17:25.904918 extend-filesystems[1563]: Found loop7 Sep 6 00:17:25.904918 extend-filesystems[1563]: Found sda Sep 6 00:17:25.904918 extend-filesystems[1563]: Found sda1 Sep 6 00:17:25.904918 extend-filesystems[1563]: Found sda2 Sep 6 00:17:25.904918 extend-filesystems[1563]: Found sda3 Sep 6 00:17:25.904918 extend-filesystems[1563]: Found usr Sep 6 00:17:25.904918 extend-filesystems[1563]: Found sda4 Sep 6 00:17:25.904918 extend-filesystems[1563]: Found sda6 Sep 6 00:17:25.904918 extend-filesystems[1563]: Found sda7 Sep 6 00:17:25.904918 extend-filesystems[1563]: Found sda9 Sep 6 00:17:25.904918 extend-filesystems[1563]: Checking size of /dev/sda9 Sep 6 00:17:25.907615 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 6 00:17:25.947753 coreos-metadata[1557]: Sep 06 00:17:25.925 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Sep 6 00:17:25.947753 coreos-metadata[1557]: Sep 06 00:17:25.927 INFO Fetch successful Sep 6 00:17:25.947753 coreos-metadata[1557]: Sep 06 00:17:25.930 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Sep 6 00:17:25.947753 coreos-metadata[1557]: Sep 06 00:17:25.930 INFO Fetch successful Sep 6 00:17:25.926162 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 6 00:17:25.944044 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 6 00:17:25.951980 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 6 00:17:25.960340 systemd[1]: Starting update-engine.service - Update Engine... Sep 6 00:17:25.963916 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 6 00:17:25.966956 dbus-daemon[1558]: [system] SELinux support is enabled Sep 6 00:17:25.969369 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 6 00:17:25.991840 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 6 00:17:25.997222 extend-filesystems[1563]: Resized partition /dev/sda9 Sep 6 00:17:25.992166 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 6 00:17:26.000306 extend-filesystems[1595]: resize2fs 1.47.1 (20-May-2024) Sep 6 00:17:26.020748 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Sep 6 00:17:25.993654 systemd[1]: motdgen.service: Deactivated successfully. Sep 6 00:17:26.020899 jq[1587]: true Sep 6 00:17:26.000888 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 6 00:17:26.014036 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 6 00:17:26.034869 update_engine[1585]: I20250906 00:17:26.031204 1585 main.cc:92] Flatcar Update Engine starting Sep 6 00:17:26.014292 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 6 00:17:26.050554 update_engine[1585]: I20250906 00:17:26.042290 1585 update_check_scheduler.cc:74] Next update check in 2m59s Sep 6 00:17:26.069342 systemd-logind[1579]: New seat seat0. 
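[Editor's note, illustrative aside] coreos-metadata above fetches instance data from the link-local metadata service. A minimal standard-library sketch of the same two requests (the URLs are exactly those in the log; the surrounding code is illustrative only):

    from urllib.request import urlopen

    BASE = "http://169.254.169.254/hetzner/v1"

    # Same endpoints coreos-metadata fetches in the log above.
    for path in ("/metadata", "/metadata/private-networks"):
        with urlopen(BASE + path, timeout=5) as resp:
            print(f"--- {path} ({resp.status})")
            print(resp.read().decode())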
Sep 6 00:17:26.072505 (ntainerd)[1598]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 6 00:17:26.073131 jq[1605]: true Sep 6 00:17:26.072778 systemd-logind[1579]: Watching system buttons on /dev/input/event0 (Power Button) Sep 6 00:17:26.072795 systemd-logind[1579]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard) Sep 6 00:17:26.073913 systemd[1]: Started systemd-logind.service - User Login Management. Sep 6 00:17:26.099165 systemd-networkd[1245]: eth1: Gained IPv6LL Sep 6 00:17:26.107324 tar[1597]: linux-arm64/helm Sep 6 00:17:26.105561 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 6 00:17:26.105605 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 6 00:17:26.109291 dbus-daemon[1558]: [system] Successfully activated service 'org.freedesktop.systemd1' Sep 6 00:17:26.109642 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 6 00:17:26.109670 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 6 00:17:26.110576 systemd[1]: Started update-engine.service - Update Engine. Sep 6 00:17:26.115593 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 6 00:17:26.117655 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 6 00:17:26.146696 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 6 00:17:26.148872 systemd[1]: Reached target network-online.target - Network is Online. Sep 6 00:17:26.166451 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Sep 6 00:17:26.177119 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 6 00:17:26.185846 extend-filesystems[1595]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Sep 6 00:17:26.185846 extend-filesystems[1595]: old_desc_blocks = 1, new_desc_blocks = 5 Sep 6 00:17:26.185846 extend-filesystems[1595]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. Sep 6 00:17:26.191612 extend-filesystems[1563]: Resized filesystem in /dev/sda9 Sep 6 00:17:26.191612 extend-filesystems[1563]: Found sr0 Sep 6 00:17:26.201470 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1249) Sep 6 00:17:26.216087 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 6 00:17:26.223303 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 6 00:17:26.225958 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 6 00:17:26.228310 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Sep 6 00:17:26.230590 systemd-networkd[1245]: eth0: Gained IPv6LL Sep 6 00:17:26.235157 bash[1644]: Updated "/home/core/.ssh/authorized_keys" Sep 6 00:17:26.248267 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 6 00:17:26.269736 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 6 00:17:26.274728 systemd[1]: Starting sshkeys.service... 
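[Editor's note, illustrative aside] To put the extend-filesystems numbers above in perspective: the root filesystem grows from 1617920 to 9393147 blocks of 4 KiB, i.e. from roughly 6.2 GiB to roughly 35.8 GiB. A quick arithmetic check:

    BLOCK = 4096                      # "(4k) blocks" per the resize2fs output above
    GIB = 1024 ** 3

    for label, blocks in (("before", 1617920), ("after", 9393147)):
        print(f"{label}: {blocks * BLOCK / GIB:.1f} GiB")
    # before: 6.2 GiB
    # after: 35.8 GiB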
Sep 6 00:17:26.325613 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Sep 6 00:17:26.336773 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Sep 6 00:17:26.347258 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 6 00:17:26.411839 locksmithd[1621]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 6 00:17:26.423457 coreos-metadata[1662]: Sep 06 00:17:26.421 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Sep 6 00:17:26.425131 coreos-metadata[1662]: Sep 06 00:17:26.424 INFO Fetch successful Sep 6 00:17:26.427895 unknown[1662]: wrote ssh authorized keys file for user: core Sep 6 00:17:26.436445 containerd[1598]: time="2025-09-06T00:17:26.433488040Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Sep 6 00:17:26.462727 update-ssh-keys[1676]: Updated "/home/core/.ssh/authorized_keys" Sep 6 00:17:26.465939 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Sep 6 00:17:26.479823 systemd[1]: Finished sshkeys.service. Sep 6 00:17:26.505221 containerd[1598]: time="2025-09-06T00:17:26.504230720Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 6 00:17:26.515674 containerd[1598]: time="2025-09-06T00:17:26.515185360Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.103-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 6 00:17:26.515674 containerd[1598]: time="2025-09-06T00:17:26.515231400Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 6 00:17:26.515674 containerd[1598]: time="2025-09-06T00:17:26.515252800Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 6 00:17:26.516648 containerd[1598]: time="2025-09-06T00:17:26.516616160Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Sep 6 00:17:26.518172 containerd[1598]: time="2025-09-06T00:17:26.517093720Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Sep 6 00:17:26.518172 containerd[1598]: time="2025-09-06T00:17:26.517198880Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 6 00:17:26.518172 containerd[1598]: time="2025-09-06T00:17:26.517214640Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 6 00:17:26.518172 containerd[1598]: time="2025-09-06T00:17:26.517494520Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 6 00:17:26.518172 containerd[1598]: time="2025-09-06T00:17:26.517514720Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Sep 6 00:17:26.518172 containerd[1598]: time="2025-09-06T00:17:26.517527880Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Sep 6 00:17:26.518172 containerd[1598]: time="2025-09-06T00:17:26.517537800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 6 00:17:26.518172 containerd[1598]: time="2025-09-06T00:17:26.517610880Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 6 00:17:26.518172 containerd[1598]: time="2025-09-06T00:17:26.517801760Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 6 00:17:26.518172 containerd[1598]: time="2025-09-06T00:17:26.517928560Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 6 00:17:26.518172 containerd[1598]: time="2025-09-06T00:17:26.517943120Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 6 00:17:26.518472 containerd[1598]: time="2025-09-06T00:17:26.518082760Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 6 00:17:26.518472 containerd[1598]: time="2025-09-06T00:17:26.518129360Z" level=info msg="metadata content store policy set" policy=shared Sep 6 00:17:26.524624 containerd[1598]: time="2025-09-06T00:17:26.524459160Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 6 00:17:26.524624 containerd[1598]: time="2025-09-06T00:17:26.524530640Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 6 00:17:26.524624 containerd[1598]: time="2025-09-06T00:17:26.524549240Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Sep 6 00:17:26.524624 containerd[1598]: time="2025-09-06T00:17:26.524565520Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Sep 6 00:17:26.524624 containerd[1598]: time="2025-09-06T00:17:26.524582400Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 6 00:17:26.526461 containerd[1598]: time="2025-09-06T00:17:26.525032040Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 6 00:17:26.526461 containerd[1598]: time="2025-09-06T00:17:26.525378120Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 6 00:17:26.526763 containerd[1598]: time="2025-09-06T00:17:26.526715280Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Sep 6 00:17:26.527780 containerd[1598]: time="2025-09-06T00:17:26.527757680Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Sep 6 00:17:26.527860 containerd[1598]: time="2025-09-06T00:17:26.527846680Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." 
type=io.containerd.sandbox.controller.v1 Sep 6 00:17:26.527916 containerd[1598]: time="2025-09-06T00:17:26.527904360Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 6 00:17:26.528042 containerd[1598]: time="2025-09-06T00:17:26.527978800Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 6 00:17:26.530117 containerd[1598]: time="2025-09-06T00:17:26.528494600Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 6 00:17:26.530117 containerd[1598]: time="2025-09-06T00:17:26.528520760Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 6 00:17:26.530117 containerd[1598]: time="2025-09-06T00:17:26.528537920Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 6 00:17:26.530117 containerd[1598]: time="2025-09-06T00:17:26.528551000Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 6 00:17:26.530117 containerd[1598]: time="2025-09-06T00:17:26.528562880Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 6 00:17:26.530117 containerd[1598]: time="2025-09-06T00:17:26.528579760Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 6 00:17:26.530117 containerd[1598]: time="2025-09-06T00:17:26.528601160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 6 00:17:26.530117 containerd[1598]: time="2025-09-06T00:17:26.528615280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 6 00:17:26.530117 containerd[1598]: time="2025-09-06T00:17:26.528629280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 6 00:17:26.530117 containerd[1598]: time="2025-09-06T00:17:26.528643000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 6 00:17:26.530117 containerd[1598]: time="2025-09-06T00:17:26.528664920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 6 00:17:26.530117 containerd[1598]: time="2025-09-06T00:17:26.528678960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 6 00:17:26.530117 containerd[1598]: time="2025-09-06T00:17:26.528691080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 6 00:17:26.530117 containerd[1598]: time="2025-09-06T00:17:26.528704320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 6 00:17:26.530418 containerd[1598]: time="2025-09-06T00:17:26.528717480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Sep 6 00:17:26.530418 containerd[1598]: time="2025-09-06T00:17:26.528732320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Sep 6 00:17:26.530418 containerd[1598]: time="2025-09-06T00:17:26.528744000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." 
type=io.containerd.grpc.v1 Sep 6 00:17:26.530418 containerd[1598]: time="2025-09-06T00:17:26.528756080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Sep 6 00:17:26.530418 containerd[1598]: time="2025-09-06T00:17:26.528772320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 6 00:17:26.530418 containerd[1598]: time="2025-09-06T00:17:26.528793680Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 6 00:17:26.530418 containerd[1598]: time="2025-09-06T00:17:26.528819200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 6 00:17:26.530418 containerd[1598]: time="2025-09-06T00:17:26.528831560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 6 00:17:26.530418 containerd[1598]: time="2025-09-06T00:17:26.528850680Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 6 00:17:26.530418 containerd[1598]: time="2025-09-06T00:17:26.528967400Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 6 00:17:26.530418 containerd[1598]: time="2025-09-06T00:17:26.529000640Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 6 00:17:26.530418 containerd[1598]: time="2025-09-06T00:17:26.529015760Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 6 00:17:26.530418 containerd[1598]: time="2025-09-06T00:17:26.529032160Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 6 00:17:26.530683 containerd[1598]: time="2025-09-06T00:17:26.529042240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 6 00:17:26.530683 containerd[1598]: time="2025-09-06T00:17:26.529136280Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 6 00:17:26.530683 containerd[1598]: time="2025-09-06T00:17:26.529146720Z" level=info msg="NRI interface is disabled by configuration." Sep 6 00:17:26.530683 containerd[1598]: time="2025-09-06T00:17:26.529157880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Sep 6 00:17:26.535581 containerd[1598]: time="2025-09-06T00:17:26.534542360Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 6 00:17:26.535581 containerd[1598]: time="2025-09-06T00:17:26.534642960Z" level=info msg="Connect containerd service" Sep 6 00:17:26.535581 containerd[1598]: time="2025-09-06T00:17:26.534759040Z" level=info msg="using legacy CRI server" Sep 6 00:17:26.535581 containerd[1598]: time="2025-09-06T00:17:26.534766680Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 6 00:17:26.535581 containerd[1598]: time="2025-09-06T00:17:26.534881680Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 6 00:17:26.537718 containerd[1598]: time="2025-09-06T00:17:26.537678920Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 6 00:17:26.539873 
containerd[1598]: time="2025-09-06T00:17:26.539687640Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 6 00:17:26.539873 containerd[1598]: time="2025-09-06T00:17:26.539752760Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 6 00:17:26.539999 containerd[1598]: time="2025-09-06T00:17:26.539913480Z" level=info msg="Start subscribing containerd event" Sep 6 00:17:26.539999 containerd[1598]: time="2025-09-06T00:17:26.539957280Z" level=info msg="Start recovering state" Sep 6 00:17:26.540093 containerd[1598]: time="2025-09-06T00:17:26.540040200Z" level=info msg="Start event monitor" Sep 6 00:17:26.540093 containerd[1598]: time="2025-09-06T00:17:26.540061480Z" level=info msg="Start snapshots syncer" Sep 6 00:17:26.540093 containerd[1598]: time="2025-09-06T00:17:26.540071920Z" level=info msg="Start cni network conf syncer for default" Sep 6 00:17:26.540093 containerd[1598]: time="2025-09-06T00:17:26.540078840Z" level=info msg="Start streaming server" Sep 6 00:17:26.540326 systemd[1]: Started containerd.service - containerd container runtime. Sep 6 00:17:26.541508 containerd[1598]: time="2025-09-06T00:17:26.541463960Z" level=info msg="containerd successfully booted in 0.112606s" Sep 6 00:17:27.151062 tar[1597]: linux-arm64/LICENSE Sep 6 00:17:27.152834 tar[1597]: linux-arm64/README.md Sep 6 00:17:27.172487 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 6 00:17:27.292746 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 6 00:17:27.295746 (kubelet)[1697]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 6 00:17:27.811705 kubelet[1697]: E0906 00:17:27.811645 1697 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 6 00:17:27.816694 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 6 00:17:27.816948 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 6 00:17:27.886752 sshd_keygen[1593]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 6 00:17:27.914794 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 6 00:17:27.922148 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 6 00:17:27.940705 systemd[1]: issuegen.service: Deactivated successfully. Sep 6 00:17:27.941034 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 6 00:17:27.955338 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 6 00:17:27.968254 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 6 00:17:27.975994 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 6 00:17:27.978958 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Sep 6 00:17:27.980182 systemd[1]: Reached target getty.target - Login Prompts. Sep 6 00:17:27.980843 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 6 00:17:27.981634 systemd[1]: Startup finished in 6.226s (kernel) + 5.479s (userspace) = 11.706s. Sep 6 00:17:38.067562 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 6 00:17:38.081817 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
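[Editor's note, illustrative aside] The CNI warning containerd logs during init above is expected at this point: its CRI config sets NetworkPluginConfDir to /etc/cni/net.d, which is still empty until a CNI plugin (typically deployed later through Kubernetes) drops a config there. A hedged sketch that mirrors the check the loader is failing; the extension list is the conventional one and an assumption, not something shown in the log:

    from pathlib import Path

    conf_dir = Path("/etc/cni/net.d")        # NetworkPluginConfDir from the CRI config above
    exts = (".conf", ".conflist", ".json")   # extensions a CNI loader conventionally accepts

    configs = sorted(p for p in conf_dir.glob("*") if p.suffix in exts) if conf_dir.is_dir() else []
    if not configs:
        print(f"no network config found in {conf_dir}")   # same condition as the log warning
    else:
        for p in configs:
            print("found CNI config:", p)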
Sep 6 00:17:38.207681 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 6 00:17:38.212491 (kubelet)[1742]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 6 00:17:38.275941 kubelet[1742]: E0906 00:17:38.275888 1742 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 6 00:17:38.281663 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 6 00:17:38.281837 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 6 00:17:48.532484 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 6 00:17:48.540839 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 6 00:17:48.672849 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 6 00:17:48.673373 (kubelet)[1762]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 6 00:17:48.731258 kubelet[1762]: E0906 00:17:48.729181 1762 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 6 00:17:48.732680 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 6 00:17:48.732932 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 6 00:17:58.983200 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Sep 6 00:17:58.990730 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 6 00:17:59.114895 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 6 00:17:59.130213 (kubelet)[1782]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 6 00:17:59.180760 kubelet[1782]: E0906 00:17:59.180673 1782 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 6 00:17:59.185647 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 6 00:17:59.186056 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 6 00:18:04.895331 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 6 00:18:04.909961 systemd[1]: Started sshd@0-91.98.90.164:22-139.178.68.195:60144.service - OpenSSH per-connection server daemon (139.178.68.195:60144). Sep 6 00:18:05.914577 sshd[1790]: Accepted publickey for core from 139.178.68.195 port 60144 ssh2: RSA SHA256:jxc91lYC6jGmo2vsfpcbx31/qXJlPFNhK53iVaWpnSg Sep 6 00:18:05.917332 sshd[1790]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 6 00:18:05.931820 systemd[1]: Created slice user-500.slice - User Slice of UID 500. 
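[Editor's note, illustrative aside] The kubelet crash loop above (restart counters 1 through 4 so far) repeats the same missing-file error: /var/lib/kubelet/config.yaml does not exist yet. On a node like this that file is normally written later by kubeadm during init or join, so the unit keeps restarting until provisioning reaches that step. A small sketch of the check being failed, purely illustrative and not how the kubelet itself is implemented:

    from pathlib import Path

    cfg = Path("/var/lib/kubelet/config.yaml")   # path from the run.go error above
    if cfg.is_file():
        print(f"kubelet config present ({cfg.stat().st_size} bytes); the unit can start")
    else:
        # Matches the failure mode in the log: the file simply is not there yet.
        print(f"{cfg}: no such file or directory - waiting for kubeadm to write it")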
Sep 6 00:18:05.941869 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 6 00:18:05.947580 systemd-logind[1579]: New session 1 of user core. Sep 6 00:18:05.967659 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 6 00:18:05.985034 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 6 00:18:06.001196 (systemd)[1796]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:18:06.117601 systemd[1796]: Queued start job for default target default.target. Sep 6 00:18:06.118029 systemd[1796]: Created slice app.slice - User Application Slice. Sep 6 00:18:06.118048 systemd[1796]: Reached target paths.target - Paths. Sep 6 00:18:06.118059 systemd[1796]: Reached target timers.target - Timers. Sep 6 00:18:06.125690 systemd[1796]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 6 00:18:06.136677 systemd[1796]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 6 00:18:06.136891 systemd[1796]: Reached target sockets.target - Sockets. Sep 6 00:18:06.136909 systemd[1796]: Reached target basic.target - Basic System. Sep 6 00:18:06.136956 systemd[1796]: Reached target default.target - Main User Target. Sep 6 00:18:06.136983 systemd[1796]: Startup finished in 126ms. Sep 6 00:18:06.137591 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 6 00:18:06.142826 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 6 00:18:06.843883 systemd[1]: Started sshd@1-91.98.90.164:22-139.178.68.195:60146.service - OpenSSH per-connection server daemon (139.178.68.195:60146). Sep 6 00:18:07.837097 sshd[1808]: Accepted publickey for core from 139.178.68.195 port 60146 ssh2: RSA SHA256:jxc91lYC6jGmo2vsfpcbx31/qXJlPFNhK53iVaWpnSg Sep 6 00:18:07.839489 sshd[1808]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 6 00:18:07.846510 systemd-logind[1579]: New session 2 of user core. Sep 6 00:18:07.851919 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 6 00:18:08.528945 sshd[1808]: pam_unix(sshd:session): session closed for user core Sep 6 00:18:08.533852 systemd[1]: sshd@1-91.98.90.164:22-139.178.68.195:60146.service: Deactivated successfully. Sep 6 00:18:08.537470 systemd-logind[1579]: Session 2 logged out. Waiting for processes to exit. Sep 6 00:18:08.538382 systemd[1]: session-2.scope: Deactivated successfully. Sep 6 00:18:08.540873 systemd-logind[1579]: Removed session 2. Sep 6 00:18:08.701971 systemd[1]: Started sshd@2-91.98.90.164:22-139.178.68.195:60156.service - OpenSSH per-connection server daemon (139.178.68.195:60156). Sep 6 00:18:09.425808 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Sep 6 00:18:09.434775 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 6 00:18:09.558718 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 6 00:18:09.571158 (kubelet)[1830]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 6 00:18:09.615890 kubelet[1830]: E0906 00:18:09.615826 1830 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 6 00:18:09.617901 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 6 00:18:09.618043 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 6 00:18:09.702628 sshd[1816]: Accepted publickey for core from 139.178.68.195 port 60156 ssh2: RSA SHA256:jxc91lYC6jGmo2vsfpcbx31/qXJlPFNhK53iVaWpnSg Sep 6 00:18:09.704807 sshd[1816]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 6 00:18:09.710885 systemd-logind[1579]: New session 3 of user core. Sep 6 00:18:09.716946 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 6 00:18:10.389798 sshd[1816]: pam_unix(sshd:session): session closed for user core Sep 6 00:18:10.395865 systemd-logind[1579]: Session 3 logged out. Waiting for processes to exit. Sep 6 00:18:10.396379 systemd[1]: sshd@2-91.98.90.164:22-139.178.68.195:60156.service: Deactivated successfully. Sep 6 00:18:10.398822 systemd[1]: session-3.scope: Deactivated successfully. Sep 6 00:18:10.401946 systemd-logind[1579]: Removed session 3. Sep 6 00:18:10.573833 systemd[1]: Started sshd@3-91.98.90.164:22-139.178.68.195:51054.service - OpenSSH per-connection server daemon (139.178.68.195:51054). Sep 6 00:18:11.277518 update_engine[1585]: I20250906 00:18:11.276685 1585 update_attempter.cc:509] Updating boot flags... Sep 6 00:18:11.330451 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1855) Sep 6 00:18:11.403563 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1858) Sep 6 00:18:11.575378 sshd[1844]: Accepted publickey for core from 139.178.68.195 port 51054 ssh2: RSA SHA256:jxc91lYC6jGmo2vsfpcbx31/qXJlPFNhK53iVaWpnSg Sep 6 00:18:11.577885 sshd[1844]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 6 00:18:11.582738 systemd-logind[1579]: New session 4 of user core. Sep 6 00:18:11.588935 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 6 00:18:12.271826 sshd[1844]: pam_unix(sshd:session): session closed for user core Sep 6 00:18:12.277242 systemd[1]: sshd@3-91.98.90.164:22-139.178.68.195:51054.service: Deactivated successfully. Sep 6 00:18:12.281941 systemd[1]: session-4.scope: Deactivated successfully. Sep 6 00:18:12.282933 systemd-logind[1579]: Session 4 logged out. Waiting for processes to exit. Sep 6 00:18:12.284162 systemd-logind[1579]: Removed session 4. Sep 6 00:18:12.463700 systemd[1]: Started sshd@4-91.98.90.164:22-139.178.68.195:51060.service - OpenSSH per-connection server daemon (139.178.68.195:51060). Sep 6 00:18:13.520919 sshd[1870]: Accepted publickey for core from 139.178.68.195 port 51060 ssh2: RSA SHA256:jxc91lYC6jGmo2vsfpcbx31/qXJlPFNhK53iVaWpnSg Sep 6 00:18:13.522951 sshd[1870]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 6 00:18:13.527734 systemd-logind[1579]: New session 5 of user core. 
Sep 6 00:18:13.536085 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 6 00:18:14.090807 sudo[1874]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 6 00:18:14.091695 sudo[1874]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 6 00:18:14.110832 sudo[1874]: pam_unix(sudo:session): session closed for user root Sep 6 00:18:14.283949 sshd[1870]: pam_unix(sshd:session): session closed for user core Sep 6 00:18:14.291292 systemd-logind[1579]: Session 5 logged out. Waiting for processes to exit. Sep 6 00:18:14.292207 systemd[1]: sshd@4-91.98.90.164:22-139.178.68.195:51060.service: Deactivated successfully. Sep 6 00:18:14.295879 systemd[1]: session-5.scope: Deactivated successfully. Sep 6 00:18:14.297189 systemd-logind[1579]: Removed session 5. Sep 6 00:18:14.468907 systemd[1]: Started sshd@5-91.98.90.164:22-139.178.68.195:51068.service - OpenSSH per-connection server daemon (139.178.68.195:51068). Sep 6 00:18:15.522997 sshd[1879]: Accepted publickey for core from 139.178.68.195 port 51068 ssh2: RSA SHA256:jxc91lYC6jGmo2vsfpcbx31/qXJlPFNhK53iVaWpnSg Sep 6 00:18:15.526992 sshd[1879]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 6 00:18:15.534909 systemd-logind[1579]: New session 6 of user core. Sep 6 00:18:15.546624 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 6 00:18:16.082106 sudo[1884]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 6 00:18:16.082590 sudo[1884]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 6 00:18:16.087791 sudo[1884]: pam_unix(sudo:session): session closed for user root Sep 6 00:18:16.093752 sudo[1883]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Sep 6 00:18:16.094046 sudo[1883]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 6 00:18:16.113861 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Sep 6 00:18:16.116567 auditctl[1887]: No rules Sep 6 00:18:16.117362 systemd[1]: audit-rules.service: Deactivated successfully. Sep 6 00:18:16.117729 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Sep 6 00:18:16.122773 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 6 00:18:16.159641 augenrules[1906]: No rules Sep 6 00:18:16.160965 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 6 00:18:16.162833 sudo[1883]: pam_unix(sudo:session): session closed for user root Sep 6 00:18:16.336616 sshd[1879]: pam_unix(sshd:session): session closed for user core Sep 6 00:18:16.342118 systemd-logind[1579]: Session 6 logged out. Waiting for processes to exit. Sep 6 00:18:16.342366 systemd[1]: sshd@5-91.98.90.164:22-139.178.68.195:51068.service: Deactivated successfully. Sep 6 00:18:16.347059 systemd[1]: session-6.scope: Deactivated successfully. Sep 6 00:18:16.348635 systemd-logind[1579]: Removed session 6. Sep 6 00:18:16.497828 systemd[1]: Started sshd@6-91.98.90.164:22-139.178.68.195:51076.service - OpenSSH per-connection server daemon (139.178.68.195:51076). 
Sep 6 00:18:17.492306 sshd[1915]: Accepted publickey for core from 139.178.68.195 port 51076 ssh2: RSA SHA256:jxc91lYC6jGmo2vsfpcbx31/qXJlPFNhK53iVaWpnSg Sep 6 00:18:17.495096 sshd[1915]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 6 00:18:17.501074 systemd-logind[1579]: New session 7 of user core. Sep 6 00:18:17.505903 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 6 00:18:18.027090 sudo[1919]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 6 00:18:18.027807 sudo[1919]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 6 00:18:18.351113 (dockerd)[1934]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 6 00:18:18.351293 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 6 00:18:18.596283 dockerd[1934]: time="2025-09-06T00:18:18.595602427Z" level=info msg="Starting up" Sep 6 00:18:18.674894 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport191596449-merged.mount: Deactivated successfully. Sep 6 00:18:18.694147 systemd[1]: var-lib-docker-metacopy\x2dcheck2562686206-merged.mount: Deactivated successfully. Sep 6 00:18:18.706136 dockerd[1934]: time="2025-09-06T00:18:18.706081667Z" level=info msg="Loading containers: start." Sep 6 00:18:18.822475 kernel: Initializing XFRM netlink socket Sep 6 00:18:18.909511 systemd-networkd[1245]: docker0: Link UP Sep 6 00:18:18.936027 dockerd[1934]: time="2025-09-06T00:18:18.935880397Z" level=info msg="Loading containers: done." Sep 6 00:18:18.954488 dockerd[1934]: time="2025-09-06T00:18:18.954377519Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 6 00:18:18.954754 dockerd[1934]: time="2025-09-06T00:18:18.954540004Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Sep 6 00:18:18.954754 dockerd[1934]: time="2025-09-06T00:18:18.954672127Z" level=info msg="Daemon has completed initialization" Sep 6 00:18:18.994172 dockerd[1934]: time="2025-09-06T00:18:18.993999838Z" level=info msg="API listen on /run/docker.sock" Sep 6 00:18:18.994766 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 6 00:18:19.675946 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Sep 6 00:18:19.694813 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 6 00:18:19.817800 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 6 00:18:19.834123 (kubelet)[2084]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 6 00:18:19.875147 kubelet[2084]: E0906 00:18:19.875084 2084 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 6 00:18:19.878521 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 6 00:18:19.879047 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
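[Editor's note, illustrative aside] Once dockerd reports "API listen on /run/docker.sock" above, the engine can be queried over that socket. A hedged sketch that shells out to curl (curl being installed is an assumption); the reported version should match the 26.1.0 in the daemon log:

    import json
    import subprocess

    # Query the Docker Engine API over the unix socket named in the daemon log.
    out = subprocess.run(
        ["curl", "-s", "--unix-socket", "/run/docker.sock", "http://localhost/version"],
        check=True, capture_output=True, text=True,
    ).stdout

    info = json.loads(out)
    print(info["Version"], info["Os"], info["Arch"])   # e.g. 26.1.0 linux arm64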
Sep 6 00:18:20.032740 containerd[1598]: time="2025-09-06T00:18:20.031868815Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\"" Sep 6 00:18:20.733542 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3487145397.mount: Deactivated successfully. Sep 6 00:18:23.097452 containerd[1598]: time="2025-09-06T00:18:23.095850460Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 6 00:18:23.097452 containerd[1598]: time="2025-09-06T00:18:23.097368131Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.12: active requests=0, bytes read=25652533" Sep 6 00:18:23.098070 containerd[1598]: time="2025-09-06T00:18:23.098033864Z" level=info msg="ImageCreate event name:\"sha256:25d00c9505e8a4a7a6c827030f878b50e58bbf63322e01a7d92807bcb4db6b3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 6 00:18:23.101085 containerd[1598]: time="2025-09-06T00:18:23.101034206Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e9011c3bee8c06ecabd7816e119dca4e448c92f7a78acd891de3d2db1dc6c234\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 6 00:18:23.102601 containerd[1598]: time="2025-09-06T00:18:23.102551597Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.12\" with image id \"sha256:25d00c9505e8a4a7a6c827030f878b50e58bbf63322e01a7d92807bcb4db6b3d\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e9011c3bee8c06ecabd7816e119dca4e448c92f7a78acd891de3d2db1dc6c234\", size \"25649241\" in 3.0706375s" Sep 6 00:18:23.102601 containerd[1598]: time="2025-09-06T00:18:23.102594718Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\" returns image reference \"sha256:25d00c9505e8a4a7a6c827030f878b50e58bbf63322e01a7d92807bcb4db6b3d\"" Sep 6 00:18:23.105806 containerd[1598]: time="2025-09-06T00:18:23.105720941Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\"" Sep 6 00:18:25.284961 containerd[1598]: time="2025-09-06T00:18:25.284891934Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 6 00:18:25.286672 containerd[1598]: time="2025-09-06T00:18:25.286625685Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.12: active requests=0, bytes read=22460329" Sep 6 00:18:25.289007 containerd[1598]: time="2025-09-06T00:18:25.287415339Z" level=info msg="ImageCreate event name:\"sha256:04df324666956d4cb57096c0edff6bfe1d75e71fb8f508dec8818f2842f821e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 6 00:18:25.291093 containerd[1598]: time="2025-09-06T00:18:25.291038324Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d2862f94d87320267fddbd55db26556a267aa802e51d6b60f25786b4c428afc8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 6 00:18:25.292407 containerd[1598]: time="2025-09-06T00:18:25.292364548Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.12\" with image id \"sha256:04df324666956d4cb57096c0edff6bfe1d75e71fb8f508dec8818f2842f821e1\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d2862f94d87320267fddbd55db26556a267aa802e51d6b60f25786b4c428afc8\", size \"23997423\" in 2.186357761s" Sep 6 
00:18:25.292621 containerd[1598]: time="2025-09-06T00:18:25.292598232Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\" returns image reference \"sha256:04df324666956d4cb57096c0edff6bfe1d75e71fb8f508dec8818f2842f821e1\"" Sep 6 00:18:25.293352 containerd[1598]: time="2025-09-06T00:18:25.293193043Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\"" Sep 6 00:18:26.900456 containerd[1598]: time="2025-09-06T00:18:26.899100663Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 6 00:18:26.900928 containerd[1598]: time="2025-09-06T00:18:26.900833412Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.12: active requests=0, bytes read=17125923" Sep 6 00:18:26.901240 containerd[1598]: time="2025-09-06T00:18:26.901207018Z" level=info msg="ImageCreate event name:\"sha256:00b0619122c2d4fd3b5e102e9850d8c732e08a386b9c172c409b3a5cd552e07d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 6 00:18:26.906914 containerd[1598]: time="2025-09-06T00:18:26.906850314Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:152943b7e30244f4415fd0a5860a2dccd91660fe983d30a28a10edb0cc8f6756\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 6 00:18:26.908257 containerd[1598]: time="2025-09-06T00:18:26.908208616Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.12\" with image id \"sha256:00b0619122c2d4fd3b5e102e9850d8c732e08a386b9c172c409b3a5cd552e07d\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:152943b7e30244f4415fd0a5860a2dccd91660fe983d30a28a10edb0cc8f6756\", size \"18663035\" in 1.614737368s" Sep 6 00:18:26.908464 containerd[1598]: time="2025-09-06T00:18:26.908444700Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\" returns image reference \"sha256:00b0619122c2d4fd3b5e102e9850d8c732e08a386b9c172c409b3a5cd552e07d\"" Sep 6 00:18:26.909392 containerd[1598]: time="2025-09-06T00:18:26.909344716Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\"" Sep 6 00:18:28.305181 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2913323902.mount: Deactivated successfully. 
Sep 6 00:18:28.642837 containerd[1598]: time="2025-09-06T00:18:28.642682088Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 6 00:18:28.644454 containerd[1598]: time="2025-09-06T00:18:28.644339472Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.12: active requests=0, bytes read=26916121" Sep 6 00:18:28.646115 containerd[1598]: time="2025-09-06T00:18:28.646047338Z" level=info msg="ImageCreate event name:\"sha256:25c7652bd0d893b147dce9135dc6a68c37da76f9a20dceec1d520782031b2f36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 6 00:18:28.650909 containerd[1598]: time="2025-09-06T00:18:28.649690072Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:90aa6b5f4065937521ff8438bc705317485d0be3f8b00a07145e697d92cc2cc6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 6 00:18:28.650909 containerd[1598]: time="2025-09-06T00:18:28.650701087Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.12\" with image id \"sha256:25c7652bd0d893b147dce9135dc6a68c37da76f9a20dceec1d520782031b2f36\", repo tag \"registry.k8s.io/kube-proxy:v1.31.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:90aa6b5f4065937521ff8438bc705317485d0be3f8b00a07145e697d92cc2cc6\", size \"26915114\" in 1.74131329s" Sep 6 00:18:28.650909 containerd[1598]: time="2025-09-06T00:18:28.650743847Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\" returns image reference \"sha256:25c7652bd0d893b147dce9135dc6a68c37da76f9a20dceec1d520782031b2f36\"" Sep 6 00:18:28.651780 containerd[1598]: time="2025-09-06T00:18:28.651748742Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 6 00:18:29.260603 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount874938163.mount: Deactivated successfully. Sep 6 00:18:29.925974 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Sep 6 00:18:29.936745 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 6 00:18:30.136684 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 6 00:18:30.141524 (kubelet)[2225]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 6 00:18:30.185393 kubelet[2225]: E0906 00:18:30.184608 2225 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 6 00:18:30.190027 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 6 00:18:30.190152 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Sep 6 00:18:30.441534 containerd[1598]: time="2025-09-06T00:18:30.441326461Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 6 00:18:30.443157 containerd[1598]: time="2025-09-06T00:18:30.442864201Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951714" Sep 6 00:18:30.444787 containerd[1598]: time="2025-09-06T00:18:30.444289980Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 6 00:18:30.447831 containerd[1598]: time="2025-09-06T00:18:30.447787665Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 6 00:18:30.449439 containerd[1598]: time="2025-09-06T00:18:30.449372926Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.797479782s" Sep 6 00:18:30.449522 containerd[1598]: time="2025-09-06T00:18:30.449441127Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Sep 6 00:18:30.449962 containerd[1598]: time="2025-09-06T00:18:30.449923893Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 6 00:18:31.006407 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount425104825.mount: Deactivated successfully. 
Sep 6 00:18:31.014288 containerd[1598]: time="2025-09-06T00:18:31.014235349Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 6 00:18:31.015276 containerd[1598]: time="2025-09-06T00:18:31.015233681Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268723" Sep 6 00:18:31.016912 containerd[1598]: time="2025-09-06T00:18:31.016543257Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 6 00:18:31.019389 containerd[1598]: time="2025-09-06T00:18:31.019348771Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 6 00:18:31.020308 containerd[1598]: time="2025-09-06T00:18:31.020277023Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 570.31329ms" Sep 6 00:18:31.020433 containerd[1598]: time="2025-09-06T00:18:31.020402424Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Sep 6 00:18:31.023416 containerd[1598]: time="2025-09-06T00:18:31.021456877Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Sep 6 00:18:31.659076 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2391568630.mount: Deactivated successfully. Sep 6 00:18:34.775478 containerd[1598]: time="2025-09-06T00:18:34.775150828Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 6 00:18:34.779455 containerd[1598]: time="2025-09-06T00:18:34.779386270Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66537235" Sep 6 00:18:34.782395 containerd[1598]: time="2025-09-06T00:18:34.781786815Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 6 00:18:34.787636 containerd[1598]: time="2025-09-06T00:18:34.787580193Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 6 00:18:34.789736 containerd[1598]: time="2025-09-06T00:18:34.789677814Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 3.768181376s" Sep 6 00:18:34.789891 containerd[1598]: time="2025-09-06T00:18:34.789870096Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Sep 6 00:18:39.914278 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 6 00:18:39.925349 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 6 00:18:39.962702 systemd[1]: Reloading requested from client PID 2322 ('systemctl') (unit session-7.scope)... Sep 6 00:18:39.962727 systemd[1]: Reloading... Sep 6 00:18:40.085445 zram_generator::config[2368]: No configuration found. Sep 6 00:18:40.193899 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 6 00:18:40.263585 systemd[1]: Reloading finished in 300 ms. Sep 6 00:18:40.307942 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 6 00:18:40.308170 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 6 00:18:40.308639 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 6 00:18:40.313962 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 6 00:18:40.452810 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 6 00:18:40.463245 (kubelet)[2422]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 6 00:18:40.505926 kubelet[2422]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 6 00:18:40.506294 kubelet[2422]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 6 00:18:40.506344 kubelet[2422]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 6 00:18:40.506719 kubelet[2422]: I0906 00:18:40.506680 2422 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 6 00:18:41.627225 kubelet[2422]: I0906 00:18:41.627157 2422 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 6 00:18:41.627225 kubelet[2422]: I0906 00:18:41.627204 2422 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 6 00:18:41.627892 kubelet[2422]: I0906 00:18:41.627757 2422 server.go:934] "Client rotation is on, will bootstrap in background" Sep 6 00:18:41.656408 kubelet[2422]: E0906 00:18:41.656343 2422 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://91.98.90.164:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 91.98.90.164:6443: connect: connection refused" logger="UnhandledError" Sep 6 00:18:41.657698 kubelet[2422]: I0906 00:18:41.657571 2422 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 6 00:18:41.668260 kubelet[2422]: E0906 00:18:41.668216 2422 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 6 00:18:41.668260 kubelet[2422]: I0906 00:18:41.668259 2422 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 6 00:18:41.674065 kubelet[2422]: I0906 00:18:41.674020 2422 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 6 00:18:41.675912 kubelet[2422]: I0906 00:18:41.675870 2422 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 6 00:18:41.676089 kubelet[2422]: I0906 00:18:41.676045 2422 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 6 00:18:41.676285 kubelet[2422]: I0906 00:18:41.676079 2422 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-5-n-5ce2877658","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Sep 6 00:18:41.676285 kubelet[2422]: I0906 00:18:41.676284 2422 topology_manager.go:138] "Creating topology manager with none policy" Sep 6 00:18:41.676397 kubelet[2422]: I0906 00:18:41.676295 2422 container_manager_linux.go:300] "Creating device plugin manager" Sep 6 00:18:41.676643 kubelet[2422]: I0906 00:18:41.676604 2422 state_mem.go:36] "Initialized new in-memory state store" Sep 6 00:18:41.679582 kubelet[2422]: I0906 00:18:41.679504 2422 kubelet.go:408] "Attempting to sync node with API server" Sep 6 00:18:41.679582 kubelet[2422]: I0906 00:18:41.679570 2422 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 6 00:18:41.681021 kubelet[2422]: I0906 00:18:41.679603 2422 kubelet.go:314] "Adding apiserver pod source" Sep 6 00:18:41.681021 kubelet[2422]: I0906 00:18:41.679700 2422 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 6 00:18:41.690061 kubelet[2422]: W0906 00:18:41.689912 2422 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://91.98.90.164:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 91.98.90.164:6443: connect: connection refused Sep 6 00:18:41.690269 kubelet[2422]: E0906 00:18:41.690248 2422 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
\"https://91.98.90.164:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 91.98.90.164:6443: connect: connection refused" logger="UnhandledError" Sep 6 00:18:41.690610 kubelet[2422]: I0906 00:18:41.690590 2422 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 6 00:18:41.691678 kubelet[2422]: I0906 00:18:41.691660 2422 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 6 00:18:41.691878 kubelet[2422]: W0906 00:18:41.691865 2422 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 6 00:18:41.692508 kubelet[2422]: W0906 00:18:41.692446 2422 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://91.98.90.164:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-5-n-5ce2877658&limit=500&resourceVersion=0": dial tcp 91.98.90.164:6443: connect: connection refused Sep 6 00:18:41.692508 kubelet[2422]: E0906 00:18:41.692524 2422 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://91.98.90.164:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-5-n-5ce2877658&limit=500&resourceVersion=0\": dial tcp 91.98.90.164:6443: connect: connection refused" logger="UnhandledError" Sep 6 00:18:41.693240 kubelet[2422]: I0906 00:18:41.693222 2422 server.go:1274] "Started kubelet" Sep 6 00:18:41.697047 kubelet[2422]: I0906 00:18:41.697016 2422 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 6 00:18:41.700565 kubelet[2422]: I0906 00:18:41.698372 2422 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 6 00:18:41.704352 kubelet[2422]: I0906 00:18:41.699110 2422 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 6 00:18:41.704663 kubelet[2422]: I0906 00:18:41.704637 2422 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 6 00:18:41.704728 kubelet[2422]: I0906 00:18:41.700738 2422 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 6 00:18:41.704728 kubelet[2422]: I0906 00:18:41.700714 2422 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 6 00:18:41.704872 kubelet[2422]: E0906 00:18:41.700963 2422 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-5-n-5ce2877658\" not found" Sep 6 00:18:41.704872 kubelet[2422]: I0906 00:18:41.704844 2422 reconciler.go:26] "Reconciler: start to sync state" Sep 6 00:18:41.704998 kubelet[2422]: W0906 00:18:41.704957 2422 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://91.98.90.164:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 91.98.90.164:6443: connect: connection refused Sep 6 00:18:41.705049 kubelet[2422]: E0906 00:18:41.705016 2422 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://91.98.90.164:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 91.98.90.164:6443: connect: connection refused" logger="UnhandledError" Sep 6 00:18:41.705114 kubelet[2422]: E0906 00:18:41.705083 2422 controller.go:145] "Failed to ensure lease exists, 
will retry" err="Get \"https://91.98.90.164:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-5-n-5ce2877658?timeout=10s\": dial tcp 91.98.90.164:6443: connect: connection refused" interval="200ms" Sep 6 00:18:41.708078 kubelet[2422]: I0906 00:18:41.708049 2422 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 6 00:18:41.708554 kubelet[2422]: I0906 00:18:41.708498 2422 server.go:449] "Adding debug handlers to kubelet server" Sep 6 00:18:41.711460 kubelet[2422]: E0906 00:18:41.709866 2422 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://91.98.90.164:6443/api/v1/namespaces/default/events\": dial tcp 91.98.90.164:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-5-n-5ce2877658.18628970aefee4ab default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-5-n-5ce2877658,UID:ci-4081-3-5-n-5ce2877658,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-5-n-5ce2877658,},FirstTimestamp:2025-09-06 00:18:41.693197483 +0000 UTC m=+1.224553267,LastTimestamp:2025-09-06 00:18:41.693197483 +0000 UTC m=+1.224553267,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-5-n-5ce2877658,}" Sep 6 00:18:41.712041 kubelet[2422]: I0906 00:18:41.712017 2422 factory.go:221] Registration of the systemd container factory successfully Sep 6 00:18:41.712416 kubelet[2422]: I0906 00:18:41.712346 2422 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 6 00:18:41.713198 kubelet[2422]: E0906 00:18:41.713177 2422 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 6 00:18:41.716392 kubelet[2422]: I0906 00:18:41.716358 2422 factory.go:221] Registration of the containerd container factory successfully Sep 6 00:18:41.736622 kubelet[2422]: I0906 00:18:41.736573 2422 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 6 00:18:41.738719 kubelet[2422]: I0906 00:18:41.738685 2422 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 6 00:18:41.739138 kubelet[2422]: I0906 00:18:41.739094 2422 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 6 00:18:41.739235 kubelet[2422]: I0906 00:18:41.739224 2422 kubelet.go:2321] "Starting kubelet main sync loop" Sep 6 00:18:41.739549 kubelet[2422]: E0906 00:18:41.739316 2422 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 6 00:18:41.741263 kubelet[2422]: W0906 00:18:41.741234 2422 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://91.98.90.164:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 91.98.90.164:6443: connect: connection refused Sep 6 00:18:41.741640 kubelet[2422]: E0906 00:18:41.741403 2422 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://91.98.90.164:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 91.98.90.164:6443: connect: connection refused" logger="UnhandledError" Sep 6 00:18:41.746476 kubelet[2422]: I0906 00:18:41.746095 2422 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 6 00:18:41.746476 kubelet[2422]: I0906 00:18:41.746123 2422 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 6 00:18:41.746476 kubelet[2422]: I0906 00:18:41.746142 2422 state_mem.go:36] "Initialized new in-memory state store" Sep 6 00:18:41.749818 kubelet[2422]: I0906 00:18:41.749772 2422 policy_none.go:49] "None policy: Start" Sep 6 00:18:41.751103 kubelet[2422]: I0906 00:18:41.750758 2422 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 6 00:18:41.751103 kubelet[2422]: I0906 00:18:41.750786 2422 state_mem.go:35] "Initializing new in-memory state store" Sep 6 00:18:41.757926 kubelet[2422]: I0906 00:18:41.756954 2422 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 6 00:18:41.757926 kubelet[2422]: I0906 00:18:41.757185 2422 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 6 00:18:41.757926 kubelet[2422]: I0906 00:18:41.757197 2422 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 6 00:18:41.759197 kubelet[2422]: I0906 00:18:41.759178 2422 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 6 00:18:41.760719 kubelet[2422]: E0906 00:18:41.760685 2422 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-5-n-5ce2877658\" not found" Sep 6 00:18:41.863385 kubelet[2422]: I0906 00:18:41.863318 2422 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-5-n-5ce2877658" Sep 6 00:18:41.864087 kubelet[2422]: E0906 00:18:41.864050 2422 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://91.98.90.164:6443/api/v1/nodes\": dial tcp 91.98.90.164:6443: connect: connection refused" node="ci-4081-3-5-n-5ce2877658" Sep 6 00:18:41.906488 kubelet[2422]: E0906 00:18:41.906257 2422 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://91.98.90.164:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-5-n-5ce2877658?timeout=10s\": dial tcp 91.98.90.164:6443: connect: connection refused" interval="400ms" Sep 6 00:18:42.005990 kubelet[2422]: I0906 
00:18:42.005933 2422 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ed2721973482fb58394bd4ae7f11b81e-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-5-n-5ce2877658\" (UID: \"ed2721973482fb58394bd4ae7f11b81e\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-5ce2877658" Sep 6 00:18:42.006293 kubelet[2422]: I0906 00:18:42.006014 2422 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ed2721973482fb58394bd4ae7f11b81e-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-5-n-5ce2877658\" (UID: \"ed2721973482fb58394bd4ae7f11b81e\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-5ce2877658" Sep 6 00:18:42.006293 kubelet[2422]: I0906 00:18:42.006061 2422 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ed2721973482fb58394bd4ae7f11b81e-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-5-n-5ce2877658\" (UID: \"ed2721973482fb58394bd4ae7f11b81e\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-5ce2877658" Sep 6 00:18:42.006293 kubelet[2422]: I0906 00:18:42.006104 2422 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b0fa5a88cf200ff135267ae69c96e1de-k8s-certs\") pod \"kube-apiserver-ci-4081-3-5-n-5ce2877658\" (UID: \"b0fa5a88cf200ff135267ae69c96e1de\") " pod="kube-system/kube-apiserver-ci-4081-3-5-n-5ce2877658" Sep 6 00:18:42.006293 kubelet[2422]: I0906 00:18:42.006138 2422 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ed2721973482fb58394bd4ae7f11b81e-ca-certs\") pod \"kube-controller-manager-ci-4081-3-5-n-5ce2877658\" (UID: \"ed2721973482fb58394bd4ae7f11b81e\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-5ce2877658" Sep 6 00:18:42.006293 kubelet[2422]: I0906 00:18:42.006171 2422 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ed2721973482fb58394bd4ae7f11b81e-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-5-n-5ce2877658\" (UID: \"ed2721973482fb58394bd4ae7f11b81e\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-5ce2877658" Sep 6 00:18:42.006454 kubelet[2422]: I0906 00:18:42.006202 2422 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/737b4f9e4cd7327927e671c73f688ad9-kubeconfig\") pod \"kube-scheduler-ci-4081-3-5-n-5ce2877658\" (UID: \"737b4f9e4cd7327927e671c73f688ad9\") " pod="kube-system/kube-scheduler-ci-4081-3-5-n-5ce2877658" Sep 6 00:18:42.006454 kubelet[2422]: I0906 00:18:42.006232 2422 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b0fa5a88cf200ff135267ae69c96e1de-ca-certs\") pod \"kube-apiserver-ci-4081-3-5-n-5ce2877658\" (UID: \"b0fa5a88cf200ff135267ae69c96e1de\") " pod="kube-system/kube-apiserver-ci-4081-3-5-n-5ce2877658" Sep 6 00:18:42.006454 kubelet[2422]: I0906 00:18:42.006265 2422 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b0fa5a88cf200ff135267ae69c96e1de-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-5-n-5ce2877658\" (UID: \"b0fa5a88cf200ff135267ae69c96e1de\") " pod="kube-system/kube-apiserver-ci-4081-3-5-n-5ce2877658" Sep 6 00:18:42.067996 kubelet[2422]: I0906 00:18:42.067918 2422 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-5-n-5ce2877658" Sep 6 00:18:42.068556 kubelet[2422]: E0906 00:18:42.068416 2422 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://91.98.90.164:6443/api/v1/nodes\": dial tcp 91.98.90.164:6443: connect: connection refused" node="ci-4081-3-5-n-5ce2877658" Sep 6 00:18:42.153328 containerd[1598]: time="2025-09-06T00:18:42.153191407Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-5-n-5ce2877658,Uid:b0fa5a88cf200ff135267ae69c96e1de,Namespace:kube-system,Attempt:0,}" Sep 6 00:18:42.158369 containerd[1598]: time="2025-09-06T00:18:42.158226117Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-5-n-5ce2877658,Uid:ed2721973482fb58394bd4ae7f11b81e,Namespace:kube-system,Attempt:0,}" Sep 6 00:18:42.163621 containerd[1598]: time="2025-09-06T00:18:42.163409588Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-5-n-5ce2877658,Uid:737b4f9e4cd7327927e671c73f688ad9,Namespace:kube-system,Attempt:0,}" Sep 6 00:18:42.307306 kubelet[2422]: E0906 00:18:42.307219 2422 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://91.98.90.164:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-5-n-5ce2877658?timeout=10s\": dial tcp 91.98.90.164:6443: connect: connection refused" interval="800ms" Sep 6 00:18:42.472692 kubelet[2422]: I0906 00:18:42.472390 2422 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-5-n-5ce2877658" Sep 6 00:18:42.473149 kubelet[2422]: E0906 00:18:42.473116 2422 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://91.98.90.164:6443/api/v1/nodes\": dial tcp 91.98.90.164:6443: connect: connection refused" node="ci-4081-3-5-n-5ce2877658" Sep 6 00:18:42.623104 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1891489645.mount: Deactivated successfully. 
Sep 6 00:18:42.633975 containerd[1598]: time="2025-09-06T00:18:42.632661804Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 6 00:18:42.634890 containerd[1598]: time="2025-09-06T00:18:42.634847537Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269193" Sep 6 00:18:42.641590 containerd[1598]: time="2025-09-06T00:18:42.641526577Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 6 00:18:42.644326 containerd[1598]: time="2025-09-06T00:18:42.644283834Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 6 00:18:42.644645 containerd[1598]: time="2025-09-06T00:18:42.644615396Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 6 00:18:42.648059 containerd[1598]: time="2025-09-06T00:18:42.648002176Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 484.470947ms" Sep 6 00:18:42.649170 containerd[1598]: time="2025-09-06T00:18:42.649130383Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 6 00:18:42.651019 containerd[1598]: time="2025-09-06T00:18:42.650977314Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 492.412915ms" Sep 6 00:18:42.653407 containerd[1598]: time="2025-09-06T00:18:42.653361608Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 6 00:18:42.654182 containerd[1598]: time="2025-09-06T00:18:42.654153413Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 6 00:18:42.663260 containerd[1598]: time="2025-09-06T00:18:42.663067706Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 509.763259ms" Sep 6 00:18:42.778668 containerd[1598]: time="2025-09-06T00:18:42.778185997Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:18:42.778668 containerd[1598]: time="2025-09-06T00:18:42.778249638Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:18:42.778668 containerd[1598]: time="2025-09-06T00:18:42.778276118Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:18:42.779785 containerd[1598]: time="2025-09-06T00:18:42.779408885Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:18:42.786872 containerd[1598]: time="2025-09-06T00:18:42.786688248Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:18:42.786872 containerd[1598]: time="2025-09-06T00:18:42.786844089Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:18:42.787046 containerd[1598]: time="2025-09-06T00:18:42.786912850Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:18:42.787287 containerd[1598]: time="2025-09-06T00:18:42.787075531Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:18:42.788156 containerd[1598]: time="2025-09-06T00:18:42.788084897Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:18:42.788561 containerd[1598]: time="2025-09-06T00:18:42.788415059Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:18:42.788705 containerd[1598]: time="2025-09-06T00:18:42.788571339Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:18:42.789137 containerd[1598]: time="2025-09-06T00:18:42.788924222Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:18:42.871600 containerd[1598]: time="2025-09-06T00:18:42.871278876Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-5-n-5ce2877658,Uid:b0fa5a88cf200ff135267ae69c96e1de,Namespace:kube-system,Attempt:0,} returns sandbox id \"c631398e03c2c7c1f2f3ea6e1665ee85d3d1373a1f7ce2eeb94550a43a4d0675\"" Sep 6 00:18:42.874134 containerd[1598]: time="2025-09-06T00:18:42.874073253Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-5-n-5ce2877658,Uid:ed2721973482fb58394bd4ae7f11b81e,Namespace:kube-system,Attempt:0,} returns sandbox id \"31cd8394c1922d54d4c0813512d490fafc285ee248d8665fab15b68fab4591ff\"" Sep 6 00:18:42.881487 containerd[1598]: time="2025-09-06T00:18:42.881086055Z" level=info msg="CreateContainer within sandbox \"c631398e03c2c7c1f2f3ea6e1665ee85d3d1373a1f7ce2eeb94550a43a4d0675\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 6 00:18:42.881487 containerd[1598]: time="2025-09-06T00:18:42.881365056Z" level=info msg="CreateContainer within sandbox \"31cd8394c1922d54d4c0813512d490fafc285ee248d8665fab15b68fab4591ff\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 6 00:18:42.886820 containerd[1598]: time="2025-09-06T00:18:42.886437007Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-5-n-5ce2877658,Uid:737b4f9e4cd7327927e671c73f688ad9,Namespace:kube-system,Attempt:0,} returns sandbox id \"0f55124cca41fef826204a6934c35fdd4a5ba0ecb8efb89f341f5603191cc7c0\"" Sep 6 00:18:42.890566 containerd[1598]: time="2025-09-06T00:18:42.890401711Z" level=info msg="CreateContainer within sandbox \"0f55124cca41fef826204a6934c35fdd4a5ba0ecb8efb89f341f5603191cc7c0\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 6 00:18:42.904941 containerd[1598]: time="2025-09-06T00:18:42.904868197Z" level=info msg="CreateContainer within sandbox \"c631398e03c2c7c1f2f3ea6e1665ee85d3d1373a1f7ce2eeb94550a43a4d0675\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"c27aaabdd7238d193966322da94ae3fb0f51267901ce40dfeb7fcca61e90aa34\"" Sep 6 00:18:42.905651 containerd[1598]: time="2025-09-06T00:18:42.905602322Z" level=info msg="StartContainer for \"c27aaabdd7238d193966322da94ae3fb0f51267901ce40dfeb7fcca61e90aa34\"" Sep 6 00:18:42.911452 containerd[1598]: time="2025-09-06T00:18:42.911295036Z" level=info msg="CreateContainer within sandbox \"31cd8394c1922d54d4c0813512d490fafc285ee248d8665fab15b68fab4591ff\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"4028d9153b2c4dbff120fe638a80aa0d76663a8f147866b9161ec992297c14cb\"" Sep 6 00:18:42.914457 containerd[1598]: time="2025-09-06T00:18:42.913692730Z" level=info msg="StartContainer for \"4028d9153b2c4dbff120fe638a80aa0d76663a8f147866b9161ec992297c14cb\"" Sep 6 00:18:42.916790 containerd[1598]: time="2025-09-06T00:18:42.916741189Z" level=info msg="CreateContainer within sandbox \"0f55124cca41fef826204a6934c35fdd4a5ba0ecb8efb89f341f5603191cc7c0\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e23d03f56a99938ab54a7a48686e39cbe3e0466433f1522f7567b290e4dcbea7\"" Sep 6 00:18:42.917630 containerd[1598]: time="2025-09-06T00:18:42.917499073Z" level=info msg="StartContainer for \"e23d03f56a99938ab54a7a48686e39cbe3e0466433f1522f7567b290e4dcbea7\"" Sep 6 00:18:42.984604 kubelet[2422]: W0906 00:18:42.984470 2422 reflector.go:561] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://91.98.90.164:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 91.98.90.164:6443: connect: connection refused Sep 6 00:18:42.984604 kubelet[2422]: E0906 00:18:42.984612 2422 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://91.98.90.164:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 91.98.90.164:6443: connect: connection refused" logger="UnhandledError" Sep 6 00:18:43.012567 containerd[1598]: time="2025-09-06T00:18:43.011126271Z" level=info msg="StartContainer for \"c27aaabdd7238d193966322da94ae3fb0f51267901ce40dfeb7fcca61e90aa34\" returns successfully" Sep 6 00:18:43.030415 containerd[1598]: time="2025-09-06T00:18:43.028585769Z" level=info msg="StartContainer for \"e23d03f56a99938ab54a7a48686e39cbe3e0466433f1522f7567b290e4dcbea7\" returns successfully" Sep 6 00:18:43.042326 containerd[1598]: time="2025-09-06T00:18:43.042277246Z" level=info msg="StartContainer for \"4028d9153b2c4dbff120fe638a80aa0d76663a8f147866b9161ec992297c14cb\" returns successfully" Sep 6 00:18:43.049721 kubelet[2422]: W0906 00:18:43.049596 2422 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://91.98.90.164:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 91.98.90.164:6443: connect: connection refused Sep 6 00:18:43.049721 kubelet[2422]: E0906 00:18:43.049672 2422 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://91.98.90.164:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 91.98.90.164:6443: connect: connection refused" logger="UnhandledError" Sep 6 00:18:43.278222 kubelet[2422]: I0906 00:18:43.277618 2422 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-5-n-5ce2877658" Sep 6 00:18:45.632648 kubelet[2422]: E0906 00:18:45.632592 2422 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-3-5-n-5ce2877658\" not found" node="ci-4081-3-5-n-5ce2877658" Sep 6 00:18:45.692250 kubelet[2422]: I0906 00:18:45.692198 2422 apiserver.go:52] "Watching apiserver" Sep 6 00:18:45.705724 kubelet[2422]: I0906 00:18:45.705637 2422 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 6 00:18:45.817134 kubelet[2422]: I0906 00:18:45.816939 2422 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081-3-5-n-5ce2877658" Sep 6 00:18:45.817134 kubelet[2422]: E0906 00:18:45.816985 2422 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-4081-3-5-n-5ce2877658\": node \"ci-4081-3-5-n-5ce2877658\" not found" Sep 6 00:18:47.912257 systemd[1]: Reloading requested from client PID 2700 ('systemctl') (unit session-7.scope)... Sep 6 00:18:47.912282 systemd[1]: Reloading... Sep 6 00:18:47.994464 zram_generator::config[2741]: No configuration found. Sep 6 00:18:48.107574 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 6 00:18:48.185362 systemd[1]: Reloading finished in 272 ms. 
Sep 6 00:18:48.218235 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 6 00:18:48.230953 systemd[1]: kubelet.service: Deactivated successfully. Sep 6 00:18:48.231313 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 6 00:18:48.238821 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 6 00:18:48.377689 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 6 00:18:48.390588 (kubelet)[2795]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 6 00:18:48.433895 kubelet[2795]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 6 00:18:48.433895 kubelet[2795]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 6 00:18:48.433895 kubelet[2795]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 6 00:18:48.433895 kubelet[2795]: I0906 00:18:48.433841 2795 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 6 00:18:48.444018 kubelet[2795]: I0906 00:18:48.443859 2795 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 6 00:18:48.444018 kubelet[2795]: I0906 00:18:48.443893 2795 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 6 00:18:48.444649 kubelet[2795]: I0906 00:18:48.444629 2795 server.go:934] "Client rotation is on, will bootstrap in background" Sep 6 00:18:48.446441 kubelet[2795]: I0906 00:18:48.446082 2795 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 6 00:18:48.448894 kubelet[2795]: I0906 00:18:48.448537 2795 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 6 00:18:48.452612 kubelet[2795]: E0906 00:18:48.452557 2795 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 6 00:18:48.452612 kubelet[2795]: I0906 00:18:48.452604 2795 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 6 00:18:48.455255 kubelet[2795]: I0906 00:18:48.455204 2795 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 6 00:18:48.455643 kubelet[2795]: I0906 00:18:48.455624 2795 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 6 00:18:48.455761 kubelet[2795]: I0906 00:18:48.455738 2795 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 6 00:18:48.455931 kubelet[2795]: I0906 00:18:48.455761 2795 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-5-n-5ce2877658","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Sep 6 00:18:48.456001 kubelet[2795]: I0906 00:18:48.455938 2795 topology_manager.go:138] "Creating topology manager with none policy" Sep 6 00:18:48.456001 kubelet[2795]: I0906 00:18:48.455950 2795 container_manager_linux.go:300] "Creating device plugin manager" Sep 6 00:18:48.456001 kubelet[2795]: I0906 00:18:48.455983 2795 state_mem.go:36] "Initialized new in-memory state store" Sep 6 00:18:48.456111 kubelet[2795]: I0906 00:18:48.456078 2795 kubelet.go:408] "Attempting to sync node with API server" Sep 6 00:18:48.456111 kubelet[2795]: I0906 00:18:48.456089 2795 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 6 00:18:48.456111 kubelet[2795]: I0906 00:18:48.456108 2795 kubelet.go:314] "Adding apiserver pod source" Sep 6 00:18:48.458484 kubelet[2795]: I0906 00:18:48.457718 2795 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 6 00:18:48.469433 kubelet[2795]: I0906 00:18:48.467615 2795 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 6 00:18:48.469433 kubelet[2795]: I0906 00:18:48.468144 2795 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 6 00:18:48.472301 kubelet[2795]: I0906 00:18:48.471553 2795 server.go:1274] "Started kubelet" Sep 6 00:18:48.475533 kubelet[2795]: I0906 00:18:48.475497 2795 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 6 00:18:48.476970 kubelet[2795]: I0906 
00:18:48.476295 2795 server.go:449] "Adding debug handlers to kubelet server" Sep 6 00:18:48.477376 kubelet[2795]: I0906 00:18:48.477303 2795 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 6 00:18:48.480022 kubelet[2795]: I0906 00:18:48.479989 2795 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 6 00:18:48.481385 kubelet[2795]: I0906 00:18:48.481331 2795 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 6 00:18:48.489691 kubelet[2795]: I0906 00:18:48.488957 2795 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 6 00:18:48.490510 kubelet[2795]: I0906 00:18:48.490485 2795 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 6 00:18:48.490728 kubelet[2795]: E0906 00:18:48.490706 2795 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-5-n-5ce2877658\" not found" Sep 6 00:18:48.498443 kubelet[2795]: I0906 00:18:48.494326 2795 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 6 00:18:48.498443 kubelet[2795]: I0906 00:18:48.494580 2795 reconciler.go:26] "Reconciler: start to sync state" Sep 6 00:18:48.498443 kubelet[2795]: I0906 00:18:48.497002 2795 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 6 00:18:48.498443 kubelet[2795]: I0906 00:18:48.497845 2795 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 6 00:18:48.498443 kubelet[2795]: I0906 00:18:48.497865 2795 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 6 00:18:48.498443 kubelet[2795]: I0906 00:18:48.497887 2795 kubelet.go:2321] "Starting kubelet main sync loop" Sep 6 00:18:48.498443 kubelet[2795]: E0906 00:18:48.497931 2795 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 6 00:18:48.507449 kubelet[2795]: I0906 00:18:48.505366 2795 factory.go:221] Registration of the systemd container factory successfully Sep 6 00:18:48.507449 kubelet[2795]: I0906 00:18:48.505531 2795 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 6 00:18:48.509622 kubelet[2795]: I0906 00:18:48.508371 2795 factory.go:221] Registration of the containerd container factory successfully Sep 6 00:18:48.533374 kubelet[2795]: E0906 00:18:48.532891 2795 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 6 00:18:48.582359 kubelet[2795]: I0906 00:18:48.582324 2795 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 6 00:18:48.582359 kubelet[2795]: I0906 00:18:48.582346 2795 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 6 00:18:48.582359 kubelet[2795]: I0906 00:18:48.582368 2795 state_mem.go:36] "Initialized new in-memory state store" Sep 6 00:18:48.582765 kubelet[2795]: I0906 00:18:48.582719 2795 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 6 00:18:48.582765 kubelet[2795]: I0906 00:18:48.582741 2795 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 6 00:18:48.582849 kubelet[2795]: I0906 00:18:48.582777 2795 policy_none.go:49] "None policy: Start" Sep 6 00:18:48.584032 kubelet[2795]: I0906 00:18:48.583994 2795 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 6 00:18:48.584095 kubelet[2795]: I0906 00:18:48.584043 2795 state_mem.go:35] "Initializing new in-memory state store" Sep 6 00:18:48.584257 kubelet[2795]: I0906 00:18:48.584231 2795 state_mem.go:75] "Updated machine memory state" Sep 6 00:18:48.586489 kubelet[2795]: I0906 00:18:48.585774 2795 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 6 00:18:48.586489 kubelet[2795]: I0906 00:18:48.585983 2795 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 6 00:18:48.586489 kubelet[2795]: I0906 00:18:48.585998 2795 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 6 00:18:48.586921 kubelet[2795]: I0906 00:18:48.586890 2795 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 6 00:18:48.691314 kubelet[2795]: I0906 00:18:48.691275 2795 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-5-n-5ce2877658" Sep 6 00:18:48.695857 kubelet[2795]: I0906 00:18:48.695533 2795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ed2721973482fb58394bd4ae7f11b81e-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-5-n-5ce2877658\" (UID: \"ed2721973482fb58394bd4ae7f11b81e\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-5ce2877658" Sep 6 00:18:48.695857 kubelet[2795]: I0906 00:18:48.695600 2795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/737b4f9e4cd7327927e671c73f688ad9-kubeconfig\") pod \"kube-scheduler-ci-4081-3-5-n-5ce2877658\" (UID: \"737b4f9e4cd7327927e671c73f688ad9\") " pod="kube-system/kube-scheduler-ci-4081-3-5-n-5ce2877658" Sep 6 00:18:48.695857 kubelet[2795]: I0906 00:18:48.695644 2795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b0fa5a88cf200ff135267ae69c96e1de-k8s-certs\") pod \"kube-apiserver-ci-4081-3-5-n-5ce2877658\" (UID: \"b0fa5a88cf200ff135267ae69c96e1de\") " pod="kube-system/kube-apiserver-ci-4081-3-5-n-5ce2877658" Sep 6 00:18:48.695857 kubelet[2795]: I0906 00:18:48.695679 2795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b0fa5a88cf200ff135267ae69c96e1de-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-5-n-5ce2877658\" (UID: 
\"b0fa5a88cf200ff135267ae69c96e1de\") " pod="kube-system/kube-apiserver-ci-4081-3-5-n-5ce2877658" Sep 6 00:18:48.695857 kubelet[2795]: I0906 00:18:48.695715 2795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ed2721973482fb58394bd4ae7f11b81e-ca-certs\") pod \"kube-controller-manager-ci-4081-3-5-n-5ce2877658\" (UID: \"ed2721973482fb58394bd4ae7f11b81e\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-5ce2877658" Sep 6 00:18:48.696054 kubelet[2795]: I0906 00:18:48.695753 2795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ed2721973482fb58394bd4ae7f11b81e-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-5-n-5ce2877658\" (UID: \"ed2721973482fb58394bd4ae7f11b81e\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-5ce2877658" Sep 6 00:18:48.696054 kubelet[2795]: I0906 00:18:48.695787 2795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ed2721973482fb58394bd4ae7f11b81e-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-5-n-5ce2877658\" (UID: \"ed2721973482fb58394bd4ae7f11b81e\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-5ce2877658" Sep 6 00:18:48.696054 kubelet[2795]: I0906 00:18:48.695822 2795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ed2721973482fb58394bd4ae7f11b81e-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-5-n-5ce2877658\" (UID: \"ed2721973482fb58394bd4ae7f11b81e\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-5ce2877658" Sep 6 00:18:48.697638 kubelet[2795]: I0906 00:18:48.697062 2795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b0fa5a88cf200ff135267ae69c96e1de-ca-certs\") pod \"kube-apiserver-ci-4081-3-5-n-5ce2877658\" (UID: \"b0fa5a88cf200ff135267ae69c96e1de\") " pod="kube-system/kube-apiserver-ci-4081-3-5-n-5ce2877658" Sep 6 00:18:48.703217 kubelet[2795]: I0906 00:18:48.703171 2795 kubelet_node_status.go:111] "Node was previously registered" node="ci-4081-3-5-n-5ce2877658" Sep 6 00:18:48.703346 kubelet[2795]: I0906 00:18:48.703272 2795 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081-3-5-n-5ce2877658" Sep 6 00:18:48.912680 sudo[2827]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 6 00:18:48.912983 sudo[2827]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Sep 6 00:18:49.361328 sudo[2827]: pam_unix(sudo:session): session closed for user root Sep 6 00:18:49.458732 kubelet[2795]: I0906 00:18:49.458674 2795 apiserver.go:52] "Watching apiserver" Sep 6 00:18:49.494740 kubelet[2795]: I0906 00:18:49.494661 2795 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 6 00:18:49.579560 kubelet[2795]: I0906 00:18:49.579306 2795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-5-n-5ce2877658" podStartSLOduration=1.579284122 podStartE2EDuration="1.579284122s" podCreationTimestamp="2025-09-06 00:18:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-09-06 00:18:49.578203798 +0000 UTC m=+1.183850557" watchObservedRunningTime="2025-09-06 00:18:49.579284122 +0000 UTC m=+1.184930921" Sep 6 00:18:49.606025 kubelet[2795]: I0906 00:18:49.605895 2795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-5-n-5ce2877658" podStartSLOduration=1.6058738639999999 podStartE2EDuration="1.605873864s" podCreationTimestamp="2025-09-06 00:18:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 00:18:49.589682362 +0000 UTC m=+1.195329121" watchObservedRunningTime="2025-09-06 00:18:49.605873864 +0000 UTC m=+1.211520623" Sep 6 00:18:49.627995 kubelet[2795]: I0906 00:18:49.627678 2795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-5-n-5ce2877658" podStartSLOduration=1.627660867 podStartE2EDuration="1.627660867s" podCreationTimestamp="2025-09-06 00:18:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 00:18:49.608238953 +0000 UTC m=+1.213885752" watchObservedRunningTime="2025-09-06 00:18:49.627660867 +0000 UTC m=+1.233307626" Sep 6 00:18:51.472774 sudo[1919]: pam_unix(sudo:session): session closed for user root Sep 6 00:18:51.635177 sshd[1915]: pam_unix(sshd:session): session closed for user core Sep 6 00:18:51.640670 systemd[1]: sshd@6-91.98.90.164:22-139.178.68.195:51076.service: Deactivated successfully. Sep 6 00:18:51.646744 systemd[1]: session-7.scope: Deactivated successfully. Sep 6 00:18:51.649713 systemd-logind[1579]: Session 7 logged out. Waiting for processes to exit. Sep 6 00:18:51.652415 systemd-logind[1579]: Removed session 7. Sep 6 00:18:54.253958 kubelet[2795]: I0906 00:18:54.253733 2795 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 6 00:18:54.254673 containerd[1598]: time="2025-09-06T00:18:54.254629909Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Sep 6 00:18:54.256714 kubelet[2795]: I0906 00:18:54.255731 2795 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 6 00:18:54.337547 kubelet[2795]: I0906 00:18:54.337475 2795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c5961490-49aa-46bd-99f6-e26e4af6a67f-kube-proxy\") pod \"kube-proxy-l7nxd\" (UID: \"c5961490-49aa-46bd-99f6-e26e4af6a67f\") " pod="kube-system/kube-proxy-l7nxd" Sep 6 00:18:54.337547 kubelet[2795]: I0906 00:18:54.337544 2795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/957ee464-af96-4828-822e-f95cfbd5e80a-cni-path\") pod \"cilium-lxz69\" (UID: \"957ee464-af96-4828-822e-f95cfbd5e80a\") " pod="kube-system/cilium-lxz69" Sep 6 00:18:54.337547 kubelet[2795]: I0906 00:18:54.337575 2795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/957ee464-af96-4828-822e-f95cfbd5e80a-clustermesh-secrets\") pod \"cilium-lxz69\" (UID: \"957ee464-af96-4828-822e-f95cfbd5e80a\") " pod="kube-system/cilium-lxz69" Sep 6 00:18:54.337547 kubelet[2795]: I0906 00:18:54.337606 2795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/957ee464-af96-4828-822e-f95cfbd5e80a-host-proc-sys-net\") pod \"cilium-lxz69\" (UID: \"957ee464-af96-4828-822e-f95cfbd5e80a\") " pod="kube-system/cilium-lxz69" Sep 6 00:18:54.337547 kubelet[2795]: I0906 00:18:54.337631 2795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/957ee464-af96-4828-822e-f95cfbd5e80a-hubble-tls\") pod \"cilium-lxz69\" (UID: \"957ee464-af96-4828-822e-f95cfbd5e80a\") " pod="kube-system/cilium-lxz69" Sep 6 00:18:54.338640 kubelet[2795]: I0906 00:18:54.337660 2795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c5961490-49aa-46bd-99f6-e26e4af6a67f-lib-modules\") pod \"kube-proxy-l7nxd\" (UID: \"c5961490-49aa-46bd-99f6-e26e4af6a67f\") " pod="kube-system/kube-proxy-l7nxd" Sep 6 00:18:54.338640 kubelet[2795]: I0906 00:18:54.337686 2795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/957ee464-af96-4828-822e-f95cfbd5e80a-cilium-run\") pod \"cilium-lxz69\" (UID: \"957ee464-af96-4828-822e-f95cfbd5e80a\") " pod="kube-system/cilium-lxz69" Sep 6 00:18:54.338640 kubelet[2795]: I0906 00:18:54.337709 2795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/957ee464-af96-4828-822e-f95cfbd5e80a-hostproc\") pod \"cilium-lxz69\" (UID: \"957ee464-af96-4828-822e-f95cfbd5e80a\") " pod="kube-system/cilium-lxz69" Sep 6 00:18:54.338640 kubelet[2795]: I0906 00:18:54.337734 2795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/957ee464-af96-4828-822e-f95cfbd5e80a-cilium-cgroup\") pod \"cilium-lxz69\" (UID: \"957ee464-af96-4828-822e-f95cfbd5e80a\") " pod="kube-system/cilium-lxz69" Sep 6 00:18:54.338640 kubelet[2795]: I0906 00:18:54.337757 
2795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/957ee464-af96-4828-822e-f95cfbd5e80a-host-proc-sys-kernel\") pod \"cilium-lxz69\" (UID: \"957ee464-af96-4828-822e-f95cfbd5e80a\") " pod="kube-system/cilium-lxz69" Sep 6 00:18:54.338640 kubelet[2795]: I0906 00:18:54.337795 2795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g84zx\" (UniqueName: \"kubernetes.io/projected/c5961490-49aa-46bd-99f6-e26e4af6a67f-kube-api-access-g84zx\") pod \"kube-proxy-l7nxd\" (UID: \"c5961490-49aa-46bd-99f6-e26e4af6a67f\") " pod="kube-system/kube-proxy-l7nxd" Sep 6 00:18:54.338855 kubelet[2795]: I0906 00:18:54.337821 2795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/957ee464-af96-4828-822e-f95cfbd5e80a-xtables-lock\") pod \"cilium-lxz69\" (UID: \"957ee464-af96-4828-822e-f95cfbd5e80a\") " pod="kube-system/cilium-lxz69" Sep 6 00:18:54.338855 kubelet[2795]: I0906 00:18:54.337845 2795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/957ee464-af96-4828-822e-f95cfbd5e80a-cilium-config-path\") pod \"cilium-lxz69\" (UID: \"957ee464-af96-4828-822e-f95cfbd5e80a\") " pod="kube-system/cilium-lxz69" Sep 6 00:18:54.338855 kubelet[2795]: I0906 00:18:54.337869 2795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/957ee464-af96-4828-822e-f95cfbd5e80a-bpf-maps\") pod \"cilium-lxz69\" (UID: \"957ee464-af96-4828-822e-f95cfbd5e80a\") " pod="kube-system/cilium-lxz69" Sep 6 00:18:54.338855 kubelet[2795]: I0906 00:18:54.337895 2795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/957ee464-af96-4828-822e-f95cfbd5e80a-etc-cni-netd\") pod \"cilium-lxz69\" (UID: \"957ee464-af96-4828-822e-f95cfbd5e80a\") " pod="kube-system/cilium-lxz69" Sep 6 00:18:54.338855 kubelet[2795]: I0906 00:18:54.337921 2795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c5961490-49aa-46bd-99f6-e26e4af6a67f-xtables-lock\") pod \"kube-proxy-l7nxd\" (UID: \"c5961490-49aa-46bd-99f6-e26e4af6a67f\") " pod="kube-system/kube-proxy-l7nxd" Sep 6 00:18:54.338855 kubelet[2795]: I0906 00:18:54.337945 2795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/957ee464-af96-4828-822e-f95cfbd5e80a-lib-modules\") pod \"cilium-lxz69\" (UID: \"957ee464-af96-4828-822e-f95cfbd5e80a\") " pod="kube-system/cilium-lxz69" Sep 6 00:18:54.339050 kubelet[2795]: I0906 00:18:54.337990 2795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p92h2\" (UniqueName: \"kubernetes.io/projected/957ee464-af96-4828-822e-f95cfbd5e80a-kube-api-access-p92h2\") pod \"cilium-lxz69\" (UID: \"957ee464-af96-4828-822e-f95cfbd5e80a\") " pod="kube-system/cilium-lxz69" Sep 6 00:18:54.463312 kubelet[2795]: E0906 00:18:54.462339 2795 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Sep 6 00:18:54.463312 kubelet[2795]: 
E0906 00:18:54.462370 2795 projected.go:194] Error preparing data for projected volume kube-api-access-p92h2 for pod kube-system/cilium-lxz69: configmap "kube-root-ca.crt" not found Sep 6 00:18:54.463312 kubelet[2795]: E0906 00:18:54.462452 2795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/957ee464-af96-4828-822e-f95cfbd5e80a-kube-api-access-p92h2 podName:957ee464-af96-4828-822e-f95cfbd5e80a nodeName:}" failed. No retries permitted until 2025-09-06 00:18:54.962406284 +0000 UTC m=+6.568053003 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-p92h2" (UniqueName: "kubernetes.io/projected/957ee464-af96-4828-822e-f95cfbd5e80a-kube-api-access-p92h2") pod "cilium-lxz69" (UID: "957ee464-af96-4828-822e-f95cfbd5e80a") : configmap "kube-root-ca.crt" not found Sep 6 00:18:54.463312 kubelet[2795]: E0906 00:18:54.463228 2795 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Sep 6 00:18:54.463312 kubelet[2795]: E0906 00:18:54.463247 2795 projected.go:194] Error preparing data for projected volume kube-api-access-g84zx for pod kube-system/kube-proxy-l7nxd: configmap "kube-root-ca.crt" not found Sep 6 00:18:54.463312 kubelet[2795]: E0906 00:18:54.463290 2795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c5961490-49aa-46bd-99f6-e26e4af6a67f-kube-api-access-g84zx podName:c5961490-49aa-46bd-99f6-e26e4af6a67f nodeName:}" failed. No retries permitted until 2025-09-06 00:18:54.963275526 +0000 UTC m=+6.568922245 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-g84zx" (UniqueName: "kubernetes.io/projected/c5961490-49aa-46bd-99f6-e26e4af6a67f-kube-api-access-g84zx") pod "kube-proxy-l7nxd" (UID: "c5961490-49aa-46bd-99f6-e26e4af6a67f") : configmap "kube-root-ca.crt" not found Sep 6 00:18:55.140187 containerd[1598]: time="2025-09-06T00:18:55.140131255Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-l7nxd,Uid:c5961490-49aa-46bd-99f6-e26e4af6a67f,Namespace:kube-system,Attempt:0,}" Sep 6 00:18:55.171764 containerd[1598]: time="2025-09-06T00:18:55.171145175Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:18:55.171764 containerd[1598]: time="2025-09-06T00:18:55.171226815Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:18:55.171764 containerd[1598]: time="2025-09-06T00:18:55.171270935Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:18:55.173541 containerd[1598]: time="2025-09-06T00:18:55.172273738Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:18:55.176468 containerd[1598]: time="2025-09-06T00:18:55.175641067Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lxz69,Uid:957ee464-af96-4828-822e-f95cfbd5e80a,Namespace:kube-system,Attempt:0,}" Sep 6 00:18:55.216022 containerd[1598]: time="2025-09-06T00:18:55.215229889Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:18:55.216022 containerd[1598]: time="2025-09-06T00:18:55.215318209Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:18:55.216022 containerd[1598]: time="2025-09-06T00:18:55.215334610Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:18:55.216022 containerd[1598]: time="2025-09-06T00:18:55.215511770Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:18:55.240111 containerd[1598]: time="2025-09-06T00:18:55.239978593Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-l7nxd,Uid:c5961490-49aa-46bd-99f6-e26e4af6a67f,Namespace:kube-system,Attempt:0,} returns sandbox id \"05a374c5c851add797e0ee8ff176156f2ade0541656306071a8347281354581a\"" Sep 6 00:18:55.246069 containerd[1598]: time="2025-09-06T00:18:55.245859689Z" level=info msg="CreateContainer within sandbox \"05a374c5c851add797e0ee8ff176156f2ade0541656306071a8347281354581a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 6 00:18:55.263735 containerd[1598]: time="2025-09-06T00:18:55.262863573Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lxz69,Uid:957ee464-af96-4828-822e-f95cfbd5e80a,Namespace:kube-system,Attempt:0,} returns sandbox id \"1d49fe611e6de1eb18bc193b88a91790685289eacfae8c05054382edf28887f1\"" Sep 6 00:18:55.269161 containerd[1598]: time="2025-09-06T00:18:55.269094909Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 6 00:18:55.276780 containerd[1598]: time="2025-09-06T00:18:55.276641129Z" level=info msg="CreateContainer within sandbox \"05a374c5c851add797e0ee8ff176156f2ade0541656306071a8347281354581a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"27dad1382962c87b7320194ae81b3b4fc79d69ef445fb4d1795c5bc84156015a\"" Sep 6 00:18:55.277634 containerd[1598]: time="2025-09-06T00:18:55.277604211Z" level=info msg="StartContainer for \"27dad1382962c87b7320194ae81b3b4fc79d69ef445fb4d1795c5bc84156015a\"" Sep 6 00:18:55.399902 containerd[1598]: time="2025-09-06T00:18:55.399716088Z" level=info msg="StartContainer for \"27dad1382962c87b7320194ae81b3b4fc79d69ef445fb4d1795c5bc84156015a\" returns successfully" Sep 6 00:18:55.450024 kubelet[2795]: I0906 00:18:55.449916 2795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txqn7\" (UniqueName: \"kubernetes.io/projected/fc515cd1-4852-4280-952d-fcf70beef69a-kube-api-access-txqn7\") pod \"cilium-operator-5d85765b45-fz2wn\" (UID: \"fc515cd1-4852-4280-952d-fcf70beef69a\") " pod="kube-system/cilium-operator-5d85765b45-fz2wn" Sep 6 00:18:55.450024 kubelet[2795]: I0906 00:18:55.449966 2795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fc515cd1-4852-4280-952d-fcf70beef69a-cilium-config-path\") pod \"cilium-operator-5d85765b45-fz2wn\" (UID: \"fc515cd1-4852-4280-952d-fcf70beef69a\") " pod="kube-system/cilium-operator-5d85765b45-fz2wn" Sep 6 00:18:55.605133 kubelet[2795]: I0906 00:18:55.603846 2795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-l7nxd" podStartSLOduration=1.603825697 podStartE2EDuration="1.603825697s" podCreationTimestamp="2025-09-06 00:18:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-09-06 00:18:55.591226224 +0000 UTC m=+7.196872983" watchObservedRunningTime="2025-09-06 00:18:55.603825697 +0000 UTC m=+7.209472416" Sep 6 00:18:55.641722 containerd[1598]: time="2025-09-06T00:18:55.641326474Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-fz2wn,Uid:fc515cd1-4852-4280-952d-fcf70beef69a,Namespace:kube-system,Attempt:0,}" Sep 6 00:18:55.667596 containerd[1598]: time="2025-09-06T00:18:55.667145581Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:18:55.667596 containerd[1598]: time="2025-09-06T00:18:55.667508062Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:18:55.667596 containerd[1598]: time="2025-09-06T00:18:55.667558382Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:18:55.667756 containerd[1598]: time="2025-09-06T00:18:55.667693583Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:18:55.714310 containerd[1598]: time="2025-09-06T00:18:55.714253463Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-fz2wn,Uid:fc515cd1-4852-4280-952d-fcf70beef69a,Namespace:kube-system,Attempt:0,} returns sandbox id \"8654e49ce706187b8e79bbf5b270020bf90197153a0a757fb28b67b39fae63b5\"" Sep 6 00:18:59.251816 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount568535658.mount: Deactivated successfully. Sep 6 00:19:00.496639 containerd[1598]: time="2025-09-06T00:19:00.496575508Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 6 00:19:00.499113 containerd[1598]: time="2025-09-06T00:19:00.499046593Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Sep 6 00:19:00.502814 containerd[1598]: time="2025-09-06T00:19:00.502778880Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 6 00:19:00.504607 containerd[1598]: time="2025-09-06T00:19:00.504567323Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 5.235421054s" Sep 6 00:19:00.504822 containerd[1598]: time="2025-09-06T00:19:00.504733523Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Sep 6 00:19:00.506882 containerd[1598]: time="2025-09-06T00:19:00.506834167Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 6 00:19:00.509380 containerd[1598]: time="2025-09-06T00:19:00.509283012Z" 
level=info msg="CreateContainer within sandbox \"1d49fe611e6de1eb18bc193b88a91790685289eacfae8c05054382edf28887f1\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 6 00:19:00.525694 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount883631881.mount: Deactivated successfully. Sep 6 00:19:00.528310 containerd[1598]: time="2025-09-06T00:19:00.528090327Z" level=info msg="CreateContainer within sandbox \"1d49fe611e6de1eb18bc193b88a91790685289eacfae8c05054382edf28887f1\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"6e66f9c3cd295ca10e152dc008a53aa9ae6d1714067fa3fec2aefcebf1d04007\"" Sep 6 00:19:00.531993 containerd[1598]: time="2025-09-06T00:19:00.529068449Z" level=info msg="StartContainer for \"6e66f9c3cd295ca10e152dc008a53aa9ae6d1714067fa3fec2aefcebf1d04007\"" Sep 6 00:19:00.565670 systemd[1]: run-containerd-runc-k8s.io-6e66f9c3cd295ca10e152dc008a53aa9ae6d1714067fa3fec2aefcebf1d04007-runc.ZI0wqo.mount: Deactivated successfully. Sep 6 00:19:00.597837 containerd[1598]: time="2025-09-06T00:19:00.597694178Z" level=info msg="StartContainer for \"6e66f9c3cd295ca10e152dc008a53aa9ae6d1714067fa3fec2aefcebf1d04007\" returns successfully" Sep 6 00:19:00.804683 containerd[1598]: time="2025-09-06T00:19:00.804517166Z" level=info msg="shim disconnected" id=6e66f9c3cd295ca10e152dc008a53aa9ae6d1714067fa3fec2aefcebf1d04007 namespace=k8s.io Sep 6 00:19:00.804683 containerd[1598]: time="2025-09-06T00:19:00.804596447Z" level=warning msg="cleaning up after shim disconnected" id=6e66f9c3cd295ca10e152dc008a53aa9ae6d1714067fa3fec2aefcebf1d04007 namespace=k8s.io Sep 6 00:19:00.804683 containerd[1598]: time="2025-09-06T00:19:00.804610007Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 6 00:19:01.523994 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6e66f9c3cd295ca10e152dc008a53aa9ae6d1714067fa3fec2aefcebf1d04007-rootfs.mount: Deactivated successfully. Sep 6 00:19:01.609130 containerd[1598]: time="2025-09-06T00:19:01.608474405Z" level=info msg="CreateContainer within sandbox \"1d49fe611e6de1eb18bc193b88a91790685289eacfae8c05054382edf28887f1\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 6 00:19:01.660560 containerd[1598]: time="2025-09-06T00:19:01.660510217Z" level=info msg="CreateContainer within sandbox \"1d49fe611e6de1eb18bc193b88a91790685289eacfae8c05054382edf28887f1\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"382f84517302f78fa712ba84e85a19623392cb8afbdf1d5c296971e71701df4a\"" Sep 6 00:19:01.661878 containerd[1598]: time="2025-09-06T00:19:01.661838619Z" level=info msg="StartContainer for \"382f84517302f78fa712ba84e85a19623392cb8afbdf1d5c296971e71701df4a\"" Sep 6 00:19:01.721141 containerd[1598]: time="2025-09-06T00:19:01.721068523Z" level=info msg="StartContainer for \"382f84517302f78fa712ba84e85a19623392cb8afbdf1d5c296971e71701df4a\" returns successfully" Sep 6 00:19:01.735227 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 6 00:19:01.735595 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 6 00:19:01.735663 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Sep 6 00:19:01.742329 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 6 00:19:01.768953 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Sep 6 00:19:01.773578 containerd[1598]: time="2025-09-06T00:19:01.773504376Z" level=info msg="shim disconnected" id=382f84517302f78fa712ba84e85a19623392cb8afbdf1d5c296971e71701df4a namespace=k8s.io Sep 6 00:19:01.773721 containerd[1598]: time="2025-09-06T00:19:01.773575896Z" level=warning msg="cleaning up after shim disconnected" id=382f84517302f78fa712ba84e85a19623392cb8afbdf1d5c296971e71701df4a namespace=k8s.io Sep 6 00:19:01.773721 containerd[1598]: time="2025-09-06T00:19:01.773601256Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 6 00:19:02.351411 containerd[1598]: time="2025-09-06T00:19:02.351298029Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 6 00:19:02.352774 containerd[1598]: time="2025-09-06T00:19:02.352714391Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Sep 6 00:19:02.354045 containerd[1598]: time="2025-09-06T00:19:02.353956592Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 6 00:19:02.358380 containerd[1598]: time="2025-09-06T00:19:02.358270196Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.851384469s" Sep 6 00:19:02.358380 containerd[1598]: time="2025-09-06T00:19:02.358321196Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Sep 6 00:19:02.361868 containerd[1598]: time="2025-09-06T00:19:02.361742999Z" level=info msg="CreateContainer within sandbox \"8654e49ce706187b8e79bbf5b270020bf90197153a0a757fb28b67b39fae63b5\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 6 00:19:02.382856 containerd[1598]: time="2025-09-06T00:19:02.382769059Z" level=info msg="CreateContainer within sandbox \"8654e49ce706187b8e79bbf5b270020bf90197153a0a757fb28b67b39fae63b5\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"dd114369e60b34d3cb6b57c77d0a846bb1c2f055dd068dcd99c002adcf2e1f6a\"" Sep 6 00:19:02.384512 containerd[1598]: time="2025-09-06T00:19:02.384456980Z" level=info msg="StartContainer for \"dd114369e60b34d3cb6b57c77d0a846bb1c2f055dd068dcd99c002adcf2e1f6a\"" Sep 6 00:19:02.434624 containerd[1598]: time="2025-09-06T00:19:02.434513667Z" level=info msg="StartContainer for \"dd114369e60b34d3cb6b57c77d0a846bb1c2f055dd068dcd99c002adcf2e1f6a\" returns successfully" Sep 6 00:19:02.526546 systemd[1]: run-containerd-runc-k8s.io-382f84517302f78fa712ba84e85a19623392cb8afbdf1d5c296971e71701df4a-runc.iuFHCL.mount: Deactivated successfully. Sep 6 00:19:02.526698 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-382f84517302f78fa712ba84e85a19623392cb8afbdf1d5c296971e71701df4a-rootfs.mount: Deactivated successfully. 
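Note: systemd-sysctl.service being stopped and re-run around apply-sysctl-overwrites is normal; that init container adjusts host sysctls for Cilium (likely by dropping an override under /etc/sysctl.d, which re-triggers the unit). A quick check with stock systemd tooling:

    # Confirm the one-shot sysctl unit re-ran and exited cleanly
    systemctl status systemd-sysctl.service --no-pager
    journalctl -u systemd-sysctl.service -n 20 --no-pager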
Sep 6 00:19:02.618710 containerd[1598]: time="2025-09-06T00:19:02.618571720Z" level=info msg="CreateContainer within sandbox \"1d49fe611e6de1eb18bc193b88a91790685289eacfae8c05054382edf28887f1\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 6 00:19:02.661498 containerd[1598]: time="2025-09-06T00:19:02.659405678Z" level=info msg="CreateContainer within sandbox \"1d49fe611e6de1eb18bc193b88a91790685289eacfae8c05054382edf28887f1\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"dc473666165dc0e69c5cf112b4b9ff518e24246d0b6a76a4cde1a5ee01d72cd2\"" Sep 6 00:19:02.661686 containerd[1598]: time="2025-09-06T00:19:02.661645400Z" level=info msg="StartContainer for \"dc473666165dc0e69c5cf112b4b9ff518e24246d0b6a76a4cde1a5ee01d72cd2\"" Sep 6 00:19:02.801774 containerd[1598]: time="2025-09-06T00:19:02.801706291Z" level=info msg="StartContainer for \"dc473666165dc0e69c5cf112b4b9ff518e24246d0b6a76a4cde1a5ee01d72cd2\" returns successfully" Sep 6 00:19:02.898992 containerd[1598]: time="2025-09-06T00:19:02.898711062Z" level=info msg="shim disconnected" id=dc473666165dc0e69c5cf112b4b9ff518e24246d0b6a76a4cde1a5ee01d72cd2 namespace=k8s.io Sep 6 00:19:02.898992 containerd[1598]: time="2025-09-06T00:19:02.898765742Z" level=warning msg="cleaning up after shim disconnected" id=dc473666165dc0e69c5cf112b4b9ff518e24246d0b6a76a4cde1a5ee01d72cd2 namespace=k8s.io Sep 6 00:19:02.898992 containerd[1598]: time="2025-09-06T00:19:02.898774102Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 6 00:19:03.523724 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dc473666165dc0e69c5cf112b4b9ff518e24246d0b6a76a4cde1a5ee01d72cd2-rootfs.mount: Deactivated successfully. Sep 6 00:19:03.624589 containerd[1598]: time="2025-09-06T00:19:03.624004508Z" level=info msg="CreateContainer within sandbox \"1d49fe611e6de1eb18bc193b88a91790685289eacfae8c05054382edf28887f1\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 6 00:19:03.643984 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1099164109.mount: Deactivated successfully. 
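Note: mount-bpf-fs is the Cilium init step that mounts the BPF filesystem on the host. A minimal check that the mount is in place, assuming the default /sys/fs/bpf mount point:

    # Verify bpffs is mounted where the agent expects it
    mountpoint /sys/fs/bpf
    findmnt -t bpf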
Sep 6 00:19:03.648032 containerd[1598]: time="2025-09-06T00:19:03.647595477Z" level=info msg="CreateContainer within sandbox \"1d49fe611e6de1eb18bc193b88a91790685289eacfae8c05054382edf28887f1\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"7868790d6d6be19f5228363992f54f8a8903cf7da84d89412240b9e5acc2d152\"" Sep 6 00:19:03.654623 containerd[1598]: time="2025-09-06T00:19:03.649549784Z" level=info msg="StartContainer for \"7868790d6d6be19f5228363992f54f8a8903cf7da84d89412240b9e5acc2d152\"" Sep 6 00:19:03.656821 kubelet[2795]: I0906 00:19:03.656768 2795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-fz2wn" podStartSLOduration=2.013669089 podStartE2EDuration="8.656751538s" podCreationTimestamp="2025-09-06 00:18:55 +0000 UTC" firstStartedPulling="2025-09-06 00:18:55.716073748 +0000 UTC m=+7.321720507" lastFinishedPulling="2025-09-06 00:19:02.359156197 +0000 UTC m=+13.964802956" observedRunningTime="2025-09-06 00:19:02.729455263 +0000 UTC m=+14.335102022" watchObservedRunningTime="2025-09-06 00:19:03.656751538 +0000 UTC m=+15.262398297" Sep 6 00:19:03.752563 containerd[1598]: time="2025-09-06T00:19:03.752508202Z" level=info msg="StartContainer for \"7868790d6d6be19f5228363992f54f8a8903cf7da84d89412240b9e5acc2d152\" returns successfully" Sep 6 00:19:03.773686 containerd[1598]: time="2025-09-06T00:19:03.773595787Z" level=info msg="shim disconnected" id=7868790d6d6be19f5228363992f54f8a8903cf7da84d89412240b9e5acc2d152 namespace=k8s.io Sep 6 00:19:03.773686 containerd[1598]: time="2025-09-06T00:19:03.773673106Z" level=warning msg="cleaning up after shim disconnected" id=7868790d6d6be19f5228363992f54f8a8903cf7da84d89412240b9e5acc2d152 namespace=k8s.io Sep 6 00:19:03.773686 containerd[1598]: time="2025-09-06T00:19:03.773682426Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 6 00:19:04.522612 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7868790d6d6be19f5228363992f54f8a8903cf7da84d89412240b9e5acc2d152-rootfs.mount: Deactivated successfully. 
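Note: the podStartSLOduration figures printed by pod_startup_latency_tracker are also exported as kubelet Prometheus metrics. A hedged way to pull them through the API server's node proxy (node name from the log; exact metric names vary a little between kubelet versions):

    # Scrape the kubelet metrics endpoint via the API server proxy and filter pod-start latencies
    kubectl get --raw "/api/v1/nodes/ci-4081-3-5-n-5ce2877658/proxy/metrics" | grep -E '^kubelet_pod_start'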
Sep 6 00:19:04.631272 containerd[1598]: time="2025-09-06T00:19:04.631207505Z" level=info msg="CreateContainer within sandbox \"1d49fe611e6de1eb18bc193b88a91790685289eacfae8c05054382edf28887f1\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 6 00:19:04.664524 containerd[1598]: time="2025-09-06T00:19:04.664454298Z" level=info msg="CreateContainer within sandbox \"1d49fe611e6de1eb18bc193b88a91790685289eacfae8c05054382edf28887f1\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"021d2982334ec8519e0a1e415c1689e1026b33aee9c13f27661eec58e4784c7c\"" Sep 6 00:19:04.665473 containerd[1598]: time="2025-09-06T00:19:04.665411092Z" level=info msg="StartContainer for \"021d2982334ec8519e0a1e415c1689e1026b33aee9c13f27661eec58e4784c7c\"" Sep 6 00:19:04.729087 containerd[1598]: time="2025-09-06T00:19:04.729022134Z" level=info msg="StartContainer for \"021d2982334ec8519e0a1e415c1689e1026b33aee9c13f27661eec58e4784c7c\" returns successfully" Sep 6 00:19:04.883051 kubelet[2795]: I0906 00:19:04.882925 2795 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Sep 6 00:19:05.024577 kubelet[2795]: I0906 00:19:05.024372 2795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2cwf8\" (UniqueName: \"kubernetes.io/projected/524383e9-3fc8-4d1c-ba6c-ff210994fb58-kube-api-access-2cwf8\") pod \"coredns-7c65d6cfc9-7kwmv\" (UID: \"524383e9-3fc8-4d1c-ba6c-ff210994fb58\") " pod="kube-system/coredns-7c65d6cfc9-7kwmv" Sep 6 00:19:05.024577 kubelet[2795]: I0906 00:19:05.024444 2795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/993072f2-e5c1-4a0d-909f-a37f366c6223-config-volume\") pod \"coredns-7c65d6cfc9-c54d2\" (UID: \"993072f2-e5c1-4a0d-909f-a37f366c6223\") " pod="kube-system/coredns-7c65d6cfc9-c54d2" Sep 6 00:19:05.024577 kubelet[2795]: I0906 00:19:05.024471 2795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfzvz\" (UniqueName: \"kubernetes.io/projected/993072f2-e5c1-4a0d-909f-a37f366c6223-kube-api-access-jfzvz\") pod \"coredns-7c65d6cfc9-c54d2\" (UID: \"993072f2-e5c1-4a0d-909f-a37f366c6223\") " pod="kube-system/coredns-7c65d6cfc9-c54d2" Sep 6 00:19:05.024577 kubelet[2795]: I0906 00:19:05.024489 2795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/524383e9-3fc8-4d1c-ba6c-ff210994fb58-config-volume\") pod \"coredns-7c65d6cfc9-7kwmv\" (UID: \"524383e9-3fc8-4d1c-ba6c-ff210994fb58\") " pod="kube-system/coredns-7c65d6cfc9-7kwmv" Sep 6 00:19:05.232240 containerd[1598]: time="2025-09-06T00:19:05.232194829Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-7kwmv,Uid:524383e9-3fc8-4d1c-ba6c-ff210994fb58,Namespace:kube-system,Attempt:0,}" Sep 6 00:19:05.237411 containerd[1598]: time="2025-09-06T00:19:05.237093319Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-c54d2,Uid:993072f2-e5c1-4a0d-909f-a37f366c6223,Namespace:kube-system,Attempt:0,}" Sep 6 00:19:05.653315 kubelet[2795]: I0906 00:19:05.652330 2795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-lxz69" podStartSLOduration=6.413049973 podStartE2EDuration="11.652311836s" podCreationTimestamp="2025-09-06 00:18:54 +0000 UTC" firstStartedPulling="2025-09-06 00:18:55.266499302 
+0000 UTC m=+6.872146061" lastFinishedPulling="2025-09-06 00:19:00.505761205 +0000 UTC m=+12.111407924" observedRunningTime="2025-09-06 00:19:05.650296808 +0000 UTC m=+17.255943567" watchObservedRunningTime="2025-09-06 00:19:05.652311836 +0000 UTC m=+17.257958595" Sep 6 00:19:06.892578 systemd-networkd[1245]: cilium_host: Link UP Sep 6 00:19:06.892791 systemd-networkd[1245]: cilium_net: Link UP Sep 6 00:19:06.892915 systemd-networkd[1245]: cilium_net: Gained carrier Sep 6 00:19:06.893020 systemd-networkd[1245]: cilium_host: Gained carrier Sep 6 00:19:07.047592 systemd-networkd[1245]: cilium_vxlan: Link UP Sep 6 00:19:07.047599 systemd-networkd[1245]: cilium_vxlan: Gained carrier Sep 6 00:19:07.090610 systemd-networkd[1245]: cilium_host: Gained IPv6LL Sep 6 00:19:07.336635 kernel: NET: Registered PF_ALG protocol family Sep 6 00:19:07.602636 systemd-networkd[1245]: cilium_net: Gained IPv6LL Sep 6 00:19:08.058080 systemd-networkd[1245]: lxc_health: Link UP Sep 6 00:19:08.064336 systemd-networkd[1245]: lxc_health: Gained carrier Sep 6 00:19:08.178636 systemd-networkd[1245]: cilium_vxlan: Gained IPv6LL Sep 6 00:19:08.310712 systemd-networkd[1245]: lxc88c12ad947d4: Link UP Sep 6 00:19:08.320919 systemd-networkd[1245]: lxcf005516e610b: Link UP Sep 6 00:19:08.327490 kernel: eth0: renamed from tmp05f29 Sep 6 00:19:08.340502 kernel: eth0: renamed from tmp976af Sep 6 00:19:08.341610 systemd-networkd[1245]: lxc88c12ad947d4: Gained carrier Sep 6 00:19:08.350680 systemd-networkd[1245]: lxcf005516e610b: Gained carrier Sep 6 00:19:09.714841 systemd-networkd[1245]: lxc_health: Gained IPv6LL Sep 6 00:19:10.035699 systemd-networkd[1245]: lxc88c12ad947d4: Gained IPv6LL Sep 6 00:19:10.356574 systemd-networkd[1245]: lxcf005516e610b: Gained IPv6LL Sep 6 00:19:12.506493 containerd[1598]: time="2025-09-06T00:19:12.505035670Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:19:12.506493 containerd[1598]: time="2025-09-06T00:19:12.505092509Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:19:12.506493 containerd[1598]: time="2025-09-06T00:19:12.505107589Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:19:12.506493 containerd[1598]: time="2025-09-06T00:19:12.505680506Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:19:12.509959 containerd[1598]: time="2025-09-06T00:19:12.503471078Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:19:12.509959 containerd[1598]: time="2025-09-06T00:19:12.503542237Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:19:12.509959 containerd[1598]: time="2025-09-06T00:19:12.503566197Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:19:12.509959 containerd[1598]: time="2025-09-06T00:19:12.503726596Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:19:12.550006 systemd[1]: run-containerd-runc-k8s.io-05f290969c91ce99b27e5aebae7c241a6698df90c3ab2f130e3fcd993aaf4802-runc.00dWC1.mount: Deactivated successfully. Sep 6 00:19:12.603125 containerd[1598]: time="2025-09-06T00:19:12.603062459Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-7kwmv,Uid:524383e9-3fc8-4d1c-ba6c-ff210994fb58,Namespace:kube-system,Attempt:0,} returns sandbox id \"05f290969c91ce99b27e5aebae7c241a6698df90c3ab2f130e3fcd993aaf4802\"" Sep 6 00:19:12.609341 containerd[1598]: time="2025-09-06T00:19:12.608611271Z" level=info msg="CreateContainer within sandbox \"05f290969c91ce99b27e5aebae7c241a6698df90c3ab2f130e3fcd993aaf4802\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 6 00:19:12.639393 containerd[1598]: time="2025-09-06T00:19:12.639343357Z" level=info msg="CreateContainer within sandbox \"05f290969c91ce99b27e5aebae7c241a6698df90c3ab2f130e3fcd993aaf4802\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"89023536b2fc7c5f3e710309e7717ddb7c4abb3132f5294f654b1783978a9e71\"" Sep 6 00:19:12.642443 containerd[1598]: time="2025-09-06T00:19:12.640307272Z" level=info msg="StartContainer for \"89023536b2fc7c5f3e710309e7717ddb7c4abb3132f5294f654b1783978a9e71\"" Sep 6 00:19:12.667617 containerd[1598]: time="2025-09-06T00:19:12.667577416Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-c54d2,Uid:993072f2-e5c1-4a0d-909f-a37f366c6223,Namespace:kube-system,Attempt:0,} returns sandbox id \"976afc3fb1c87be7b8a0d4dc94dd9e5385d3f7e92a4e05fb88073b5e08d6fffc\"" Sep 6 00:19:12.671796 containerd[1598]: time="2025-09-06T00:19:12.671750955Z" level=info msg="CreateContainer within sandbox \"976afc3fb1c87be7b8a0d4dc94dd9e5385d3f7e92a4e05fb88073b5e08d6fffc\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 6 00:19:12.691634 containerd[1598]: time="2025-09-06T00:19:12.691595775Z" level=info msg="CreateContainer within sandbox \"976afc3fb1c87be7b8a0d4dc94dd9e5385d3f7e92a4e05fb88073b5e08d6fffc\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"737a4a3e933ce4ceb0a18596b0b0c848ffa4e56f001c968727e33de1c0bfbe97\"" Sep 6 00:19:12.692922 containerd[1598]: time="2025-09-06T00:19:12.692818249Z" level=info msg="StartContainer for \"737a4a3e933ce4ceb0a18596b0b0c848ffa4e56f001c968727e33de1c0bfbe97\"" Sep 6 00:19:12.730451 containerd[1598]: time="2025-09-06T00:19:12.730281422Z" level=info msg="StartContainer for \"89023536b2fc7c5f3e710309e7717ddb7c4abb3132f5294f654b1783978a9e71\" returns successfully" Sep 6 00:19:12.758088 containerd[1598]: time="2025-09-06T00:19:12.757970443Z" level=info msg="StartContainer for \"737a4a3e933ce4ceb0a18596b0b0c848ffa4e56f001c968727e33de1c0bfbe97\" returns successfully" Sep 6 00:19:13.693848 kubelet[2795]: I0906 00:19:13.693553 2795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-c54d2" podStartSLOduration=18.69352777 podStartE2EDuration="18.69352777s" podCreationTimestamp="2025-09-06 00:18:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 00:19:13.689927907 +0000 UTC m=+25.295574666" watchObservedRunningTime="2025-09-06 00:19:13.69352777 +0000 UTC m=+25.299174529" Sep 6 00:19:16.873176 kubelet[2795]: I0906 00:19:16.872282 2795 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 6 00:19:16.900458 
kubelet[2795]: I0906 00:19:16.898205 2795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-7kwmv" podStartSLOduration=21.898179834 podStartE2EDuration="21.898179834s" podCreationTimestamp="2025-09-06 00:18:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 00:19:13.743835564 +0000 UTC m=+25.349482323" watchObservedRunningTime="2025-09-06 00:19:16.898179834 +0000 UTC m=+28.503826593" Sep 6 00:20:25.361565 update_engine[1585]: I20250906 00:20:25.361275 1585 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Sep 6 00:20:25.361565 update_engine[1585]: I20250906 00:20:25.361330 1585 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Sep 6 00:20:25.362189 update_engine[1585]: I20250906 00:20:25.361613 1585 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Sep 6 00:20:25.362189 update_engine[1585]: I20250906 00:20:25.361997 1585 omaha_request_params.cc:62] Current group set to lts Sep 6 00:20:25.362258 update_engine[1585]: I20250906 00:20:25.362192 1585 update_attempter.cc:499] Already updated boot flags. Skipping. Sep 6 00:20:25.362258 update_engine[1585]: I20250906 00:20:25.362207 1585 update_attempter.cc:643] Scheduling an action processor start. Sep 6 00:20:25.362258 update_engine[1585]: I20250906 00:20:25.362224 1585 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Sep 6 00:20:25.362348 update_engine[1585]: I20250906 00:20:25.362269 1585 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Sep 6 00:20:25.362348 update_engine[1585]: I20250906 00:20:25.362326 1585 omaha_request_action.cc:271] Posting an Omaha request to disabled Sep 6 00:20:25.362348 update_engine[1585]: I20250906 00:20:25.362335 1585 omaha_request_action.cc:272] Request: Sep 6 00:20:25.362348 update_engine[1585]: Sep 6 00:20:25.362348 update_engine[1585]: Sep 6 00:20:25.362348 update_engine[1585]: Sep 6 00:20:25.362348 update_engine[1585]: Sep 6 00:20:25.362348 update_engine[1585]: Sep 6 00:20:25.362348 update_engine[1585]: Sep 6 00:20:25.362348 update_engine[1585]: Sep 6 00:20:25.362348 update_engine[1585]: Sep 6 00:20:25.362348 update_engine[1585]: I20250906 00:20:25.362342 1585 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 6 00:20:25.363180 locksmithd[1621]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Sep 6 00:20:25.364757 update_engine[1585]: I20250906 00:20:25.364548 1585 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 6 00:20:25.364958 update_engine[1585]: I20250906 00:20:25.364889 1585 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
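Note: the Omaha endpoint in these update_engine lines is the literal string "disabled", so "Could not resolve host: disabled" is expected rather than a network fault; the image was provisioned with its update server switched off. A sketch of where that setting usually lives on Flatcar and how to query the updater (paths and flag are the stock Flatcar ones, assumed unmodified here):

    # The override file, if present, wins over the baked-in default
    grep -H '^SERVER=' /etc/flatcar/update.conf /usr/share/flatcar/update.conf 2>/dev/null
    # Current updater state as reported by update_engine
    update_engine_client -status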
Sep 6 00:20:25.367906 update_engine[1585]: E20250906 00:20:25.367844 1585 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 6 00:20:25.368014 update_engine[1585]: I20250906 00:20:25.367942 1585 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Sep 6 00:20:35.286115 update_engine[1585]: I20250906 00:20:35.285956 1585 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 6 00:20:35.286909 update_engine[1585]: I20250906 00:20:35.286393 1585 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 6 00:20:35.286909 update_engine[1585]: I20250906 00:20:35.286774 1585 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Sep 6 00:20:35.287939 update_engine[1585]: E20250906 00:20:35.287859 1585 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 6 00:20:35.288101 update_engine[1585]: I20250906 00:20:35.288002 1585 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Sep 6 00:20:45.286560 update_engine[1585]: I20250906 00:20:45.286110 1585 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 6 00:20:45.287776 update_engine[1585]: I20250906 00:20:45.287245 1585 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 6 00:20:45.287776 update_engine[1585]: I20250906 00:20:45.287611 1585 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Sep 6 00:20:45.288939 update_engine[1585]: E20250906 00:20:45.288784 1585 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 6 00:20:45.288939 update_engine[1585]: I20250906 00:20:45.288889 1585 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Sep 6 00:20:55.283137 update_engine[1585]: I20250906 00:20:55.283016 1585 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 6 00:20:55.283584 update_engine[1585]: I20250906 00:20:55.283457 1585 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 6 00:20:55.283861 update_engine[1585]: I20250906 00:20:55.283800 1585 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Sep 6 00:20:55.284992 update_engine[1585]: E20250906 00:20:55.284802 1585 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 6 00:20:55.284992 update_engine[1585]: I20250906 00:20:55.284900 1585 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Sep 6 00:20:55.284992 update_engine[1585]: I20250906 00:20:55.284919 1585 omaha_request_action.cc:617] Omaha request response: Sep 6 00:20:55.285208 update_engine[1585]: E20250906 00:20:55.285038 1585 omaha_request_action.cc:636] Omaha request network transfer failed. Sep 6 00:20:55.285208 update_engine[1585]: I20250906 00:20:55.285066 1585 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Sep 6 00:20:55.285208 update_engine[1585]: I20250906 00:20:55.285077 1585 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Sep 6 00:20:55.285208 update_engine[1585]: I20250906 00:20:55.285087 1585 update_attempter.cc:306] Processing Done. Sep 6 00:20:55.285208 update_engine[1585]: E20250906 00:20:55.285108 1585 update_attempter.cc:619] Update failed. 
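Note: once its retries are exhausted the updater aborts the action chain, records the failure, and simply waits for the next scheduled check. To watch this loop as it happens, one could follow the unit's journal (standard Flatcar unit name assumed):

    # Follow update_engine's retry behaviour live
    journalctl -u update-engine.service -f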
Sep 6 00:20:55.285208 update_engine[1585]: I20250906 00:20:55.285118 1585 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Sep 6 00:20:55.285208 update_engine[1585]: I20250906 00:20:55.285128 1585 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Sep 6 00:20:55.285208 update_engine[1585]: I20250906 00:20:55.285138 1585 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Sep 6 00:20:55.285576 update_engine[1585]: I20250906 00:20:55.285455 1585 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Sep 6 00:20:55.285576 update_engine[1585]: I20250906 00:20:55.285506 1585 omaha_request_action.cc:271] Posting an Omaha request to disabled Sep 6 00:20:55.285576 update_engine[1585]: I20250906 00:20:55.285518 1585 omaha_request_action.cc:272] Request: Sep 6 00:20:55.285576 update_engine[1585]: Sep 6 00:20:55.285576 update_engine[1585]: Sep 6 00:20:55.285576 update_engine[1585]: Sep 6 00:20:55.285576 update_engine[1585]: Sep 6 00:20:55.285576 update_engine[1585]: Sep 6 00:20:55.285576 update_engine[1585]: Sep 6 00:20:55.285576 update_engine[1585]: I20250906 00:20:55.285531 1585 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 6 00:20:55.286002 update_engine[1585]: I20250906 00:20:55.285848 1585 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 6 00:20:55.286287 update_engine[1585]: I20250906 00:20:55.286115 1585 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Sep 6 00:20:55.286358 locksmithd[1621]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Sep 6 00:20:55.286948 update_engine[1585]: E20250906 00:20:55.286895 1585 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 6 00:20:55.287003 update_engine[1585]: I20250906 00:20:55.286969 1585 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Sep 6 00:20:55.287003 update_engine[1585]: I20250906 00:20:55.286983 1585 omaha_request_action.cc:617] Omaha request response: Sep 6 00:20:55.287003 update_engine[1585]: I20250906 00:20:55.286995 1585 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Sep 6 00:20:55.287133 update_engine[1585]: I20250906 00:20:55.287005 1585 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Sep 6 00:20:55.287133 update_engine[1585]: I20250906 00:20:55.287014 1585 update_attempter.cc:306] Processing Done. Sep 6 00:20:55.287133 update_engine[1585]: I20250906 00:20:55.287025 1585 update_attempter.cc:310] Error event sent. Sep 6 00:20:55.287133 update_engine[1585]: I20250906 00:20:55.287041 1585 update_check_scheduler.cc:74] Next update check in 45m55s Sep 6 00:20:55.287527 locksmithd[1621]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Sep 6 00:21:14.189842 systemd[1]: Started sshd@7-91.98.90.164:22-139.178.68.195:40424.service - OpenSSH per-connection server daemon (139.178.68.195:40424). Sep 6 00:21:15.193981 sshd[4187]: Accepted publickey for core from 139.178.68.195 port 40424 ssh2: RSA SHA256:jxc91lYC6jGmo2vsfpcbx31/qXJlPFNhK53iVaWpnSg Sep 6 00:21:15.196762 sshd[4187]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 6 00:21:15.205932 systemd-logind[1579]: New session 8 of user core. 
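Note: the sshd lines record a publickey login for the core user, including the key's SHA256 fingerprint. A hedged way to match it against the keys installed on the host (default authorized_keys path for the core user assumed; on Flatcar this file is assembled by update-ssh-keys):

    # Fingerprints of installed keys, to compare with the one sshd logged
    ssh-keygen -lf /home/core/.ssh/authorized_keys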
Sep 6 00:21:15.210694 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 6 00:21:15.975801 sshd[4187]: pam_unix(sshd:session): session closed for user core Sep 6 00:21:15.982433 systemd[1]: sshd@7-91.98.90.164:22-139.178.68.195:40424.service: Deactivated successfully. Sep 6 00:21:15.986798 systemd-logind[1579]: Session 8 logged out. Waiting for processes to exit. Sep 6 00:21:15.987415 systemd[1]: session-8.scope: Deactivated successfully. Sep 6 00:21:15.989971 systemd-logind[1579]: Removed session 8. Sep 6 00:21:21.164799 systemd[1]: Started sshd@8-91.98.90.164:22-139.178.68.195:58594.service - OpenSSH per-connection server daemon (139.178.68.195:58594). Sep 6 00:21:22.220692 sshd[4202]: Accepted publickey for core from 139.178.68.195 port 58594 ssh2: RSA SHA256:jxc91lYC6jGmo2vsfpcbx31/qXJlPFNhK53iVaWpnSg Sep 6 00:21:22.223124 sshd[4202]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 6 00:21:22.232501 systemd-logind[1579]: New session 9 of user core. Sep 6 00:21:22.235778 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 6 00:21:23.051719 sshd[4202]: pam_unix(sshd:session): session closed for user core Sep 6 00:21:23.063694 systemd[1]: sshd@8-91.98.90.164:22-139.178.68.195:58594.service: Deactivated successfully. Sep 6 00:21:23.071965 systemd[1]: session-9.scope: Deactivated successfully. Sep 6 00:21:23.073444 systemd-logind[1579]: Session 9 logged out. Waiting for processes to exit. Sep 6 00:21:23.078479 systemd-logind[1579]: Removed session 9. Sep 6 00:21:28.215843 systemd[1]: Started sshd@9-91.98.90.164:22-139.178.68.195:58602.service - OpenSSH per-connection server daemon (139.178.68.195:58602). Sep 6 00:21:29.209826 sshd[4219]: Accepted publickey for core from 139.178.68.195 port 58602 ssh2: RSA SHA256:jxc91lYC6jGmo2vsfpcbx31/qXJlPFNhK53iVaWpnSg Sep 6 00:21:29.210580 sshd[4219]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 6 00:21:29.221339 systemd-logind[1579]: New session 10 of user core. Sep 6 00:21:29.225863 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 6 00:21:29.989203 sshd[4219]: pam_unix(sshd:session): session closed for user core Sep 6 00:21:29.995808 systemd[1]: sshd@9-91.98.90.164:22-139.178.68.195:58602.service: Deactivated successfully. Sep 6 00:21:30.001990 systemd[1]: session-10.scope: Deactivated successfully. Sep 6 00:21:30.005642 systemd-logind[1579]: Session 10 logged out. Waiting for processes to exit. Sep 6 00:21:30.007170 systemd-logind[1579]: Removed session 10. Sep 6 00:21:30.162040 systemd[1]: Started sshd@10-91.98.90.164:22-139.178.68.195:59464.service - OpenSSH per-connection server daemon (139.178.68.195:59464). Sep 6 00:21:31.160684 sshd[4233]: Accepted publickey for core from 139.178.68.195 port 59464 ssh2: RSA SHA256:jxc91lYC6jGmo2vsfpcbx31/qXJlPFNhK53iVaWpnSg Sep 6 00:21:31.162767 sshd[4233]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 6 00:21:31.169356 systemd-logind[1579]: New session 11 of user core. Sep 6 00:21:31.177180 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 6 00:21:31.995787 sshd[4233]: pam_unix(sshd:session): session closed for user core Sep 6 00:21:32.001625 systemd[1]: sshd@10-91.98.90.164:22-139.178.68.195:59464.service: Deactivated successfully. Sep 6 00:21:32.005384 systemd-logind[1579]: Session 11 logged out. Waiting for processes to exit. Sep 6 00:21:32.006109 systemd[1]: session-11.scope: Deactivated successfully. 
Sep 6 00:21:32.009669 systemd-logind[1579]: Removed session 11. Sep 6 00:21:32.163714 systemd[1]: Started sshd@11-91.98.90.164:22-139.178.68.195:59478.service - OpenSSH per-connection server daemon (139.178.68.195:59478). Sep 6 00:21:33.159234 sshd[4245]: Accepted publickey for core from 139.178.68.195 port 59478 ssh2: RSA SHA256:jxc91lYC6jGmo2vsfpcbx31/qXJlPFNhK53iVaWpnSg Sep 6 00:21:33.161801 sshd[4245]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 6 00:21:33.166502 systemd-logind[1579]: New session 12 of user core. Sep 6 00:21:33.169731 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 6 00:21:33.925146 sshd[4245]: pam_unix(sshd:session): session closed for user core Sep 6 00:21:33.931642 systemd-logind[1579]: Session 12 logged out. Waiting for processes to exit. Sep 6 00:21:33.932376 systemd[1]: sshd@11-91.98.90.164:22-139.178.68.195:59478.service: Deactivated successfully. Sep 6 00:21:33.937106 systemd[1]: session-12.scope: Deactivated successfully. Sep 6 00:21:33.938434 systemd-logind[1579]: Removed session 12. Sep 6 00:21:39.114902 systemd[1]: Started sshd@12-91.98.90.164:22-139.178.68.195:59490.service - OpenSSH per-connection server daemon (139.178.68.195:59490). Sep 6 00:21:40.170055 sshd[4258]: Accepted publickey for core from 139.178.68.195 port 59490 ssh2: RSA SHA256:jxc91lYC6jGmo2vsfpcbx31/qXJlPFNhK53iVaWpnSg Sep 6 00:21:40.172977 sshd[4258]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 6 00:21:40.178362 systemd-logind[1579]: New session 13 of user core. Sep 6 00:21:40.182937 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 6 00:21:40.970438 sshd[4258]: pam_unix(sshd:session): session closed for user core Sep 6 00:21:40.980021 systemd[1]: sshd@12-91.98.90.164:22-139.178.68.195:59490.service: Deactivated successfully. Sep 6 00:21:40.984585 systemd-logind[1579]: Session 13 logged out. Waiting for processes to exit. Sep 6 00:21:40.985324 systemd[1]: session-13.scope: Deactivated successfully. Sep 6 00:21:40.986756 systemd-logind[1579]: Removed session 13. Sep 6 00:21:41.128723 systemd[1]: Started sshd@13-91.98.90.164:22-139.178.68.195:59212.service - OpenSSH per-connection server daemon (139.178.68.195:59212). Sep 6 00:21:42.127649 sshd[4272]: Accepted publickey for core from 139.178.68.195 port 59212 ssh2: RSA SHA256:jxc91lYC6jGmo2vsfpcbx31/qXJlPFNhK53iVaWpnSg Sep 6 00:21:42.130230 sshd[4272]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 6 00:21:42.136268 systemd-logind[1579]: New session 14 of user core. Sep 6 00:21:42.143997 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 6 00:21:42.938802 sshd[4272]: pam_unix(sshd:session): session closed for user core Sep 6 00:21:42.944962 systemd-logind[1579]: Session 14 logged out. Waiting for processes to exit. Sep 6 00:21:42.947594 systemd[1]: sshd@13-91.98.90.164:22-139.178.68.195:59212.service: Deactivated successfully. Sep 6 00:21:42.952894 systemd[1]: session-14.scope: Deactivated successfully. Sep 6 00:21:42.954775 systemd-logind[1579]: Removed session 14. Sep 6 00:21:43.117193 systemd[1]: Started sshd@14-91.98.90.164:22-139.178.68.195:59224.service - OpenSSH per-connection server daemon (139.178.68.195:59224). 
Sep 6 00:21:44.187055 sshd[4283]: Accepted publickey for core from 139.178.68.195 port 59224 ssh2: RSA SHA256:jxc91lYC6jGmo2vsfpcbx31/qXJlPFNhK53iVaWpnSg Sep 6 00:21:44.190352 sshd[4283]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 6 00:21:44.199326 systemd-logind[1579]: New session 15 of user core. Sep 6 00:21:44.201725 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 6 00:21:46.334822 sshd[4283]: pam_unix(sshd:session): session closed for user core Sep 6 00:21:46.340078 systemd[1]: sshd@14-91.98.90.164:22-139.178.68.195:59224.service: Deactivated successfully. Sep 6 00:21:46.345494 systemd[1]: session-15.scope: Deactivated successfully. Sep 6 00:21:46.346548 systemd-logind[1579]: Session 15 logged out. Waiting for processes to exit. Sep 6 00:21:46.347931 systemd-logind[1579]: Removed session 15. Sep 6 00:21:46.502872 systemd[1]: Started sshd@15-91.98.90.164:22-139.178.68.195:59226.service - OpenSSH per-connection server daemon (139.178.68.195:59226). Sep 6 00:21:47.497208 sshd[4302]: Accepted publickey for core from 139.178.68.195 port 59226 ssh2: RSA SHA256:jxc91lYC6jGmo2vsfpcbx31/qXJlPFNhK53iVaWpnSg Sep 6 00:21:47.499753 sshd[4302]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 6 00:21:47.505463 systemd-logind[1579]: New session 16 of user core. Sep 6 00:21:47.507757 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 6 00:21:48.380833 sshd[4302]: pam_unix(sshd:session): session closed for user core Sep 6 00:21:48.386395 systemd[1]: sshd@15-91.98.90.164:22-139.178.68.195:59226.service: Deactivated successfully. Sep 6 00:21:48.390042 systemd-logind[1579]: Session 16 logged out. Waiting for processes to exit. Sep 6 00:21:48.391040 systemd[1]: session-16.scope: Deactivated successfully. Sep 6 00:21:48.393094 systemd-logind[1579]: Removed session 16. Sep 6 00:21:48.548751 systemd[1]: Started sshd@16-91.98.90.164:22-139.178.68.195:59232.service - OpenSSH per-connection server daemon (139.178.68.195:59232). Sep 6 00:21:49.552300 sshd[4316]: Accepted publickey for core from 139.178.68.195 port 59232 ssh2: RSA SHA256:jxc91lYC6jGmo2vsfpcbx31/qXJlPFNhK53iVaWpnSg Sep 6 00:21:49.554355 sshd[4316]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 6 00:21:49.564488 systemd-logind[1579]: New session 17 of user core. Sep 6 00:21:49.568845 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 6 00:21:50.311039 sshd[4316]: pam_unix(sshd:session): session closed for user core Sep 6 00:21:50.318447 systemd[1]: sshd@16-91.98.90.164:22-139.178.68.195:59232.service: Deactivated successfully. Sep 6 00:21:50.319206 systemd-logind[1579]: Session 17 logged out. Waiting for processes to exit. Sep 6 00:21:50.328983 systemd[1]: session-17.scope: Deactivated successfully. Sep 6 00:21:50.332677 systemd-logind[1579]: Removed session 17. Sep 6 00:21:55.480752 systemd[1]: Started sshd@17-91.98.90.164:22-139.178.68.195:46454.service - OpenSSH per-connection server daemon (139.178.68.195:46454). Sep 6 00:21:56.473218 sshd[4333]: Accepted publickey for core from 139.178.68.195 port 46454 ssh2: RSA SHA256:jxc91lYC6jGmo2vsfpcbx31/qXJlPFNhK53iVaWpnSg Sep 6 00:21:56.476358 sshd[4333]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 6 00:21:56.482784 systemd-logind[1579]: New session 18 of user core. Sep 6 00:21:56.491828 systemd[1]: Started session-18.scope - Session 18 of User core. 
Sep 6 00:21:57.230541 sshd[4333]: pam_unix(sshd:session): session closed for user core Sep 6 00:21:57.235937 systemd[1]: sshd@17-91.98.90.164:22-139.178.68.195:46454.service: Deactivated successfully. Sep 6 00:21:57.236098 systemd-logind[1579]: Session 18 logged out. Waiting for processes to exit. Sep 6 00:21:57.239512 systemd[1]: session-18.scope: Deactivated successfully. Sep 6 00:21:57.241798 systemd-logind[1579]: Removed session 18. Sep 6 00:22:02.398800 systemd[1]: Started sshd@18-91.98.90.164:22-139.178.68.195:59442.service - OpenSSH per-connection server daemon (139.178.68.195:59442). Sep 6 00:22:03.396683 sshd[4349]: Accepted publickey for core from 139.178.68.195 port 59442 ssh2: RSA SHA256:jxc91lYC6jGmo2vsfpcbx31/qXJlPFNhK53iVaWpnSg Sep 6 00:22:03.401742 sshd[4349]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 6 00:22:03.414345 systemd-logind[1579]: New session 19 of user core. Sep 6 00:22:03.421763 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 6 00:22:04.158484 sshd[4349]: pam_unix(sshd:session): session closed for user core Sep 6 00:22:04.164561 systemd-logind[1579]: Session 19 logged out. Waiting for processes to exit. Sep 6 00:22:04.165140 systemd[1]: sshd@18-91.98.90.164:22-139.178.68.195:59442.service: Deactivated successfully. Sep 6 00:22:04.169532 systemd[1]: session-19.scope: Deactivated successfully. Sep 6 00:22:04.171468 systemd-logind[1579]: Removed session 19. Sep 6 00:22:04.335891 systemd[1]: Started sshd@19-91.98.90.164:22-139.178.68.195:59450.service - OpenSSH per-connection server daemon (139.178.68.195:59450). Sep 6 00:22:05.386078 sshd[4363]: Accepted publickey for core from 139.178.68.195 port 59450 ssh2: RSA SHA256:jxc91lYC6jGmo2vsfpcbx31/qXJlPFNhK53iVaWpnSg Sep 6 00:22:05.387233 sshd[4363]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 6 00:22:05.391889 systemd-logind[1579]: New session 20 of user core. Sep 6 00:22:05.401906 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 6 00:22:09.257495 containerd[1598]: time="2025-09-06T00:22:09.255820905Z" level=info msg="StopContainer for \"dd114369e60b34d3cb6b57c77d0a846bb1c2f055dd068dcd99c002adcf2e1f6a\" with timeout 30 (s)" Sep 6 00:22:09.257495 containerd[1598]: time="2025-09-06T00:22:09.257049001Z" level=info msg="Stop container \"dd114369e60b34d3cb6b57c77d0a846bb1c2f055dd068dcd99c002adcf2e1f6a\" with signal terminated" Sep 6 00:22:09.270094 systemd[1]: run-containerd-runc-k8s.io-021d2982334ec8519e0a1e415c1689e1026b33aee9c13f27661eec58e4784c7c-runc.GEyvp9.mount: Deactivated successfully. Sep 6 00:22:09.287115 containerd[1598]: time="2025-09-06T00:22:09.287039389Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 6 00:22:09.297177 containerd[1598]: time="2025-09-06T00:22:09.297128480Z" level=info msg="StopContainer for \"021d2982334ec8519e0a1e415c1689e1026b33aee9c13f27661eec58e4784c7c\" with timeout 2 (s)" Sep 6 00:22:09.298185 containerd[1598]: time="2025-09-06T00:22:09.297907210Z" level=info msg="Stop container \"021d2982334ec8519e0a1e415c1689e1026b33aee9c13f27661eec58e4784c7c\" with signal terminated" Sep 6 00:22:09.310404 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dd114369e60b34d3cb6b57c77d0a846bb1c2f055dd068dcd99c002adcf2e1f6a-rootfs.mount: Deactivated successfully. 
Sep 6 00:22:09.313466 containerd[1598]: time="2025-09-06T00:22:09.313185128Z" level=info msg="shim disconnected" id=dd114369e60b34d3cb6b57c77d0a846bb1c2f055dd068dcd99c002adcf2e1f6a namespace=k8s.io Sep 6 00:22:09.313466 containerd[1598]: time="2025-09-06T00:22:09.313286890Z" level=warning msg="cleaning up after shim disconnected" id=dd114369e60b34d3cb6b57c77d0a846bb1c2f055dd068dcd99c002adcf2e1f6a namespace=k8s.io Sep 6 00:22:09.313466 containerd[1598]: time="2025-09-06T00:22:09.313307850Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 6 00:22:09.317026 systemd-networkd[1245]: lxc_health: Link DOWN Sep 6 00:22:09.317032 systemd-networkd[1245]: lxc_health: Lost carrier Sep 6 00:22:09.345226 containerd[1598]: time="2025-09-06T00:22:09.344969741Z" level=info msg="StopContainer for \"dd114369e60b34d3cb6b57c77d0a846bb1c2f055dd068dcd99c002adcf2e1f6a\" returns successfully" Sep 6 00:22:09.345965 containerd[1598]: time="2025-09-06T00:22:09.345760551Z" level=info msg="StopPodSandbox for \"8654e49ce706187b8e79bbf5b270020bf90197153a0a757fb28b67b39fae63b5\"" Sep 6 00:22:09.345965 containerd[1598]: time="2025-09-06T00:22:09.345802551Z" level=info msg="Container to stop \"dd114369e60b34d3cb6b57c77d0a846bb1c2f055dd068dcd99c002adcf2e1f6a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 6 00:22:09.349220 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8654e49ce706187b8e79bbf5b270020bf90197153a0a757fb28b67b39fae63b5-shm.mount: Deactivated successfully. Sep 6 00:22:09.375378 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-021d2982334ec8519e0a1e415c1689e1026b33aee9c13f27661eec58e4784c7c-rootfs.mount: Deactivated successfully. Sep 6 00:22:09.383888 containerd[1598]: time="2025-09-06T00:22:09.383778524Z" level=info msg="shim disconnected" id=021d2982334ec8519e0a1e415c1689e1026b33aee9c13f27661eec58e4784c7c namespace=k8s.io Sep 6 00:22:09.384144 containerd[1598]: time="2025-09-06T00:22:09.383877925Z" level=warning msg="cleaning up after shim disconnected" id=021d2982334ec8519e0a1e415c1689e1026b33aee9c13f27661eec58e4784c7c namespace=k8s.io Sep 6 00:22:09.384144 containerd[1598]: time="2025-09-06T00:22:09.383908566Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 6 00:22:09.398245 containerd[1598]: time="2025-09-06T00:22:09.398184711Z" level=info msg="shim disconnected" id=8654e49ce706187b8e79bbf5b270020bf90197153a0a757fb28b67b39fae63b5 namespace=k8s.io Sep 6 00:22:09.398652 containerd[1598]: time="2025-09-06T00:22:09.398625396Z" level=warning msg="cleaning up after shim disconnected" id=8654e49ce706187b8e79bbf5b270020bf90197153a0a757fb28b67b39fae63b5 namespace=k8s.io Sep 6 00:22:09.398759 containerd[1598]: time="2025-09-06T00:22:09.398743638Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 6 00:22:09.407351 containerd[1598]: time="2025-09-06T00:22:09.407301749Z" level=info msg="StopContainer for \"021d2982334ec8519e0a1e415c1689e1026b33aee9c13f27661eec58e4784c7c\" returns successfully" Sep 6 00:22:09.408266 containerd[1598]: time="2025-09-06T00:22:09.408151360Z" level=info msg="StopPodSandbox for \"1d49fe611e6de1eb18bc193b88a91790685289eacfae8c05054382edf28887f1\"" Sep 6 00:22:09.408266 containerd[1598]: time="2025-09-06T00:22:09.408191000Z" level=info msg="Container to stop \"382f84517302f78fa712ba84e85a19623392cb8afbdf1d5c296971e71701df4a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 6 00:22:09.408266 containerd[1598]: time="2025-09-06T00:22:09.408202881Z" level=info msg="Container 
to stop \"021d2982334ec8519e0a1e415c1689e1026b33aee9c13f27661eec58e4784c7c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 6 00:22:09.408266 containerd[1598]: time="2025-09-06T00:22:09.408212441Z" level=info msg="Container to stop \"dc473666165dc0e69c5cf112b4b9ff518e24246d0b6a76a4cde1a5ee01d72cd2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 6 00:22:09.408266 containerd[1598]: time="2025-09-06T00:22:09.408221761Z" level=info msg="Container to stop \"7868790d6d6be19f5228363992f54f8a8903cf7da84d89412240b9e5acc2d152\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 6 00:22:09.408266 containerd[1598]: time="2025-09-06T00:22:09.408236401Z" level=info msg="Container to stop \"6e66f9c3cd295ca10e152dc008a53aa9ae6d1714067fa3fec2aefcebf1d04007\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 6 00:22:09.421576 containerd[1598]: time="2025-09-06T00:22:09.421518013Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:22:09Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Sep 6 00:22:09.423254 containerd[1598]: time="2025-09-06T00:22:09.423220555Z" level=info msg="TearDown network for sandbox \"8654e49ce706187b8e79bbf5b270020bf90197153a0a757fb28b67b39fae63b5\" successfully" Sep 6 00:22:09.423254 containerd[1598]: time="2025-09-06T00:22:09.423278036Z" level=info msg="StopPodSandbox for \"8654e49ce706187b8e79bbf5b270020bf90197153a0a757fb28b67b39fae63b5\" returns successfully" Sep 6 00:22:09.447516 kubelet[2795]: I0906 00:22:09.445744 2795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fc515cd1-4852-4280-952d-fcf70beef69a-cilium-config-path\") pod \"fc515cd1-4852-4280-952d-fcf70beef69a\" (UID: \"fc515cd1-4852-4280-952d-fcf70beef69a\") " Sep 6 00:22:09.447516 kubelet[2795]: I0906 00:22:09.445789 2795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-txqn7\" (UniqueName: \"kubernetes.io/projected/fc515cd1-4852-4280-952d-fcf70beef69a-kube-api-access-txqn7\") pod \"fc515cd1-4852-4280-952d-fcf70beef69a\" (UID: \"fc515cd1-4852-4280-952d-fcf70beef69a\") " Sep 6 00:22:09.450692 kubelet[2795]: I0906 00:22:09.450630 2795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc515cd1-4852-4280-952d-fcf70beef69a-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "fc515cd1-4852-4280-952d-fcf70beef69a" (UID: "fc515cd1-4852-4280-952d-fcf70beef69a"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 6 00:22:09.455841 containerd[1598]: time="2025-09-06T00:22:09.455617535Z" level=info msg="shim disconnected" id=1d49fe611e6de1eb18bc193b88a91790685289eacfae8c05054382edf28887f1 namespace=k8s.io Sep 6 00:22:09.455841 containerd[1598]: time="2025-09-06T00:22:09.455674416Z" level=warning msg="cleaning up after shim disconnected" id=1d49fe611e6de1eb18bc193b88a91790685289eacfae8c05054382edf28887f1 namespace=k8s.io Sep 6 00:22:09.455841 containerd[1598]: time="2025-09-06T00:22:09.455686296Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 6 00:22:09.456263 kubelet[2795]: I0906 00:22:09.455992 2795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc515cd1-4852-4280-952d-fcf70beef69a-kube-api-access-txqn7" (OuterVolumeSpecName: "kube-api-access-txqn7") pod "fc515cd1-4852-4280-952d-fcf70beef69a" (UID: "fc515cd1-4852-4280-952d-fcf70beef69a"). InnerVolumeSpecName "kube-api-access-txqn7". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 6 00:22:09.472289 containerd[1598]: time="2025-09-06T00:22:09.472146110Z" level=info msg="TearDown network for sandbox \"1d49fe611e6de1eb18bc193b88a91790685289eacfae8c05054382edf28887f1\" successfully" Sep 6 00:22:09.472289 containerd[1598]: time="2025-09-06T00:22:09.472188550Z" level=info msg="StopPodSandbox for \"1d49fe611e6de1eb18bc193b88a91790685289eacfae8c05054382edf28887f1\" returns successfully" Sep 6 00:22:09.546683 kubelet[2795]: I0906 00:22:09.546483 2795 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fc515cd1-4852-4280-952d-fcf70beef69a-cilium-config-path\") on node \"ci-4081-3-5-n-5ce2877658\" DevicePath \"\"" Sep 6 00:22:09.546683 kubelet[2795]: I0906 00:22:09.546537 2795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-txqn7\" (UniqueName: \"kubernetes.io/projected/fc515cd1-4852-4280-952d-fcf70beef69a-kube-api-access-txqn7\") on node \"ci-4081-3-5-n-5ce2877658\" DevicePath \"\"" Sep 6 00:22:09.646846 kubelet[2795]: I0906 00:22:09.646790 2795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p92h2\" (UniqueName: \"kubernetes.io/projected/957ee464-af96-4828-822e-f95cfbd5e80a-kube-api-access-p92h2\") pod \"957ee464-af96-4828-822e-f95cfbd5e80a\" (UID: \"957ee464-af96-4828-822e-f95cfbd5e80a\") " Sep 6 00:22:09.646846 kubelet[2795]: I0906 00:22:09.646855 2795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/957ee464-af96-4828-822e-f95cfbd5e80a-clustermesh-secrets\") pod \"957ee464-af96-4828-822e-f95cfbd5e80a\" (UID: \"957ee464-af96-4828-822e-f95cfbd5e80a\") " Sep 6 00:22:09.647030 kubelet[2795]: I0906 00:22:09.646881 2795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/957ee464-af96-4828-822e-f95cfbd5e80a-host-proc-sys-net\") pod \"957ee464-af96-4828-822e-f95cfbd5e80a\" (UID: \"957ee464-af96-4828-822e-f95cfbd5e80a\") " Sep 6 00:22:09.647030 kubelet[2795]: I0906 00:22:09.646903 2795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/957ee464-af96-4828-822e-f95cfbd5e80a-cilium-run\") pod \"957ee464-af96-4828-822e-f95cfbd5e80a\" (UID: \"957ee464-af96-4828-822e-f95cfbd5e80a\") " Sep 6 00:22:09.647030 kubelet[2795]: I0906 00:22:09.646925 2795 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/957ee464-af96-4828-822e-f95cfbd5e80a-cilium-cgroup\") pod \"957ee464-af96-4828-822e-f95cfbd5e80a\" (UID: \"957ee464-af96-4828-822e-f95cfbd5e80a\") " Sep 6 00:22:09.647030 kubelet[2795]: I0906 00:22:09.646977 2795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/957ee464-af96-4828-822e-f95cfbd5e80a-hubble-tls\") pod \"957ee464-af96-4828-822e-f95cfbd5e80a\" (UID: \"957ee464-af96-4828-822e-f95cfbd5e80a\") " Sep 6 00:22:09.647030 kubelet[2795]: I0906 00:22:09.646999 2795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/957ee464-af96-4828-822e-f95cfbd5e80a-hostproc\") pod \"957ee464-af96-4828-822e-f95cfbd5e80a\" (UID: \"957ee464-af96-4828-822e-f95cfbd5e80a\") " Sep 6 00:22:09.647030 kubelet[2795]: I0906 00:22:09.647021 2795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/957ee464-af96-4828-822e-f95cfbd5e80a-xtables-lock\") pod \"957ee464-af96-4828-822e-f95cfbd5e80a\" (UID: \"957ee464-af96-4828-822e-f95cfbd5e80a\") " Sep 6 00:22:09.647190 kubelet[2795]: I0906 00:22:09.647045 2795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/957ee464-af96-4828-822e-f95cfbd5e80a-etc-cni-netd\") pod \"957ee464-af96-4828-822e-f95cfbd5e80a\" (UID: \"957ee464-af96-4828-822e-f95cfbd5e80a\") " Sep 6 00:22:09.647190 kubelet[2795]: I0906 00:22:09.647065 2795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/957ee464-af96-4828-822e-f95cfbd5e80a-lib-modules\") pod \"957ee464-af96-4828-822e-f95cfbd5e80a\" (UID: \"957ee464-af96-4828-822e-f95cfbd5e80a\") " Sep 6 00:22:09.647190 kubelet[2795]: I0906 00:22:09.647089 2795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/957ee464-af96-4828-822e-f95cfbd5e80a-host-proc-sys-kernel\") pod \"957ee464-af96-4828-822e-f95cfbd5e80a\" (UID: \"957ee464-af96-4828-822e-f95cfbd5e80a\") " Sep 6 00:22:09.647190 kubelet[2795]: I0906 00:22:09.647117 2795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/957ee464-af96-4828-822e-f95cfbd5e80a-cilium-config-path\") pod \"957ee464-af96-4828-822e-f95cfbd5e80a\" (UID: \"957ee464-af96-4828-822e-f95cfbd5e80a\") " Sep 6 00:22:09.647190 kubelet[2795]: I0906 00:22:09.647141 2795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/957ee464-af96-4828-822e-f95cfbd5e80a-cni-path\") pod \"957ee464-af96-4828-822e-f95cfbd5e80a\" (UID: \"957ee464-af96-4828-822e-f95cfbd5e80a\") " Sep 6 00:22:09.647190 kubelet[2795]: I0906 00:22:09.647162 2795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/957ee464-af96-4828-822e-f95cfbd5e80a-bpf-maps\") pod \"957ee464-af96-4828-822e-f95cfbd5e80a\" (UID: \"957ee464-af96-4828-822e-f95cfbd5e80a\") " Sep 6 00:22:09.647315 kubelet[2795]: I0906 00:22:09.647254 2795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/957ee464-af96-4828-822e-f95cfbd5e80a-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "957ee464-af96-4828-822e-f95cfbd5e80a" (UID: "957ee464-af96-4828-822e-f95cfbd5e80a"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:22:09.647999 kubelet[2795]: I0906 00:22:09.647824 2795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/957ee464-af96-4828-822e-f95cfbd5e80a-hostproc" (OuterVolumeSpecName: "hostproc") pod "957ee464-af96-4828-822e-f95cfbd5e80a" (UID: "957ee464-af96-4828-822e-f95cfbd5e80a"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:22:09.650356 kubelet[2795]: I0906 00:22:09.650025 2795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/957ee464-af96-4828-822e-f95cfbd5e80a-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "957ee464-af96-4828-822e-f95cfbd5e80a" (UID: "957ee464-af96-4828-822e-f95cfbd5e80a"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:22:09.650356 kubelet[2795]: I0906 00:22:09.650071 2795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/957ee464-af96-4828-822e-f95cfbd5e80a-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "957ee464-af96-4828-822e-f95cfbd5e80a" (UID: "957ee464-af96-4828-822e-f95cfbd5e80a"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:22:09.650356 kubelet[2795]: I0906 00:22:09.650086 2795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/957ee464-af96-4828-822e-f95cfbd5e80a-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "957ee464-af96-4828-822e-f95cfbd5e80a" (UID: "957ee464-af96-4828-822e-f95cfbd5e80a"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:22:09.650544 kubelet[2795]: I0906 00:22:09.650366 2795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/957ee464-af96-4828-822e-f95cfbd5e80a-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "957ee464-af96-4828-822e-f95cfbd5e80a" (UID: "957ee464-af96-4828-822e-f95cfbd5e80a"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:22:09.650544 kubelet[2795]: I0906 00:22:09.650399 2795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/957ee464-af96-4828-822e-f95cfbd5e80a-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "957ee464-af96-4828-822e-f95cfbd5e80a" (UID: "957ee464-af96-4828-822e-f95cfbd5e80a"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:22:09.650544 kubelet[2795]: I0906 00:22:09.650415 2795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/957ee464-af96-4828-822e-f95cfbd5e80a-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "957ee464-af96-4828-822e-f95cfbd5e80a" (UID: "957ee464-af96-4828-822e-f95cfbd5e80a"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:22:09.650544 kubelet[2795]: I0906 00:22:09.650471 2795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/957ee464-af96-4828-822e-f95cfbd5e80a-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "957ee464-af96-4828-822e-f95cfbd5e80a" (UID: "957ee464-af96-4828-822e-f95cfbd5e80a"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:22:09.652664 kubelet[2795]: I0906 00:22:09.652622 2795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/957ee464-af96-4828-822e-f95cfbd5e80a-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "957ee464-af96-4828-822e-f95cfbd5e80a" (UID: "957ee464-af96-4828-822e-f95cfbd5e80a"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 6 00:22:09.652775 kubelet[2795]: I0906 00:22:09.652695 2795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/957ee464-af96-4828-822e-f95cfbd5e80a-cni-path" (OuterVolumeSpecName: "cni-path") pod "957ee464-af96-4828-822e-f95cfbd5e80a" (UID: "957ee464-af96-4828-822e-f95cfbd5e80a"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:22:09.652803 kubelet[2795]: I0906 00:22:09.652786 2795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/957ee464-af96-4828-822e-f95cfbd5e80a-kube-api-access-p92h2" (OuterVolumeSpecName: "kube-api-access-p92h2") pod "957ee464-af96-4828-822e-f95cfbd5e80a" (UID: "957ee464-af96-4828-822e-f95cfbd5e80a"). InnerVolumeSpecName "kube-api-access-p92h2". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 6 00:22:09.653841 kubelet[2795]: I0906 00:22:09.653725 2795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/957ee464-af96-4828-822e-f95cfbd5e80a-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "957ee464-af96-4828-822e-f95cfbd5e80a" (UID: "957ee464-af96-4828-822e-f95cfbd5e80a"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 6 00:22:09.654809 kubelet[2795]: I0906 00:22:09.654781 2795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/957ee464-af96-4828-822e-f95cfbd5e80a-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "957ee464-af96-4828-822e-f95cfbd5e80a" (UID: "957ee464-af96-4828-822e-f95cfbd5e80a"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 6 00:22:09.748401 kubelet[2795]: I0906 00:22:09.748324 2795 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/957ee464-af96-4828-822e-f95cfbd5e80a-host-proc-sys-kernel\") on node \"ci-4081-3-5-n-5ce2877658\" DevicePath \"\"" Sep 6 00:22:09.748401 kubelet[2795]: I0906 00:22:09.748388 2795 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/957ee464-af96-4828-822e-f95cfbd5e80a-cilium-config-path\") on node \"ci-4081-3-5-n-5ce2877658\" DevicePath \"\"" Sep 6 00:22:09.748645 kubelet[2795]: I0906 00:22:09.748419 2795 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/957ee464-af96-4828-822e-f95cfbd5e80a-cni-path\") on node \"ci-4081-3-5-n-5ce2877658\" DevicePath \"\"" Sep 6 00:22:09.748645 kubelet[2795]: I0906 00:22:09.748465 2795 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/957ee464-af96-4828-822e-f95cfbd5e80a-bpf-maps\") on node \"ci-4081-3-5-n-5ce2877658\" DevicePath \"\"" Sep 6 00:22:09.748645 kubelet[2795]: I0906 00:22:09.748487 2795 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p92h2\" (UniqueName: \"kubernetes.io/projected/957ee464-af96-4828-822e-f95cfbd5e80a-kube-api-access-p92h2\") on node \"ci-4081-3-5-n-5ce2877658\" DevicePath \"\"" Sep 6 00:22:09.748645 kubelet[2795]: I0906 00:22:09.748509 2795 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/957ee464-af96-4828-822e-f95cfbd5e80a-clustermesh-secrets\") on node \"ci-4081-3-5-n-5ce2877658\" DevicePath \"\"" Sep 6 00:22:09.748645 kubelet[2795]: I0906 00:22:09.748530 2795 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/957ee464-af96-4828-822e-f95cfbd5e80a-host-proc-sys-net\") on node \"ci-4081-3-5-n-5ce2877658\" DevicePath \"\"" Sep 6 00:22:09.748645 kubelet[2795]: I0906 00:22:09.748552 2795 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/957ee464-af96-4828-822e-f95cfbd5e80a-cilium-run\") on node \"ci-4081-3-5-n-5ce2877658\" DevicePath \"\"" Sep 6 00:22:09.748645 kubelet[2795]: I0906 00:22:09.748589 2795 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/957ee464-af96-4828-822e-f95cfbd5e80a-cilium-cgroup\") on node \"ci-4081-3-5-n-5ce2877658\" DevicePath \"\"" Sep 6 00:22:09.748645 kubelet[2795]: I0906 00:22:09.748617 2795 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/957ee464-af96-4828-822e-f95cfbd5e80a-hubble-tls\") on node \"ci-4081-3-5-n-5ce2877658\" DevicePath \"\"" Sep 6 00:22:09.748995 kubelet[2795]: I0906 00:22:09.748640 2795 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/957ee464-af96-4828-822e-f95cfbd5e80a-hostproc\") on node \"ci-4081-3-5-n-5ce2877658\" DevicePath \"\"" Sep 6 00:22:09.748995 kubelet[2795]: I0906 00:22:09.748660 2795 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/957ee464-af96-4828-822e-f95cfbd5e80a-xtables-lock\") on node \"ci-4081-3-5-n-5ce2877658\" DevicePath \"\"" Sep 6 00:22:09.748995 kubelet[2795]: I0906 00:22:09.748703 2795 reconciler_common.go:293] 
"Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/957ee464-af96-4828-822e-f95cfbd5e80a-etc-cni-netd\") on node \"ci-4081-3-5-n-5ce2877658\" DevicePath \"\"" Sep 6 00:22:09.748995 kubelet[2795]: I0906 00:22:09.748764 2795 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/957ee464-af96-4828-822e-f95cfbd5e80a-lib-modules\") on node \"ci-4081-3-5-n-5ce2877658\" DevicePath \"\"" Sep 6 00:22:10.181451 kubelet[2795]: I0906 00:22:10.181367 2795 scope.go:117] "RemoveContainer" containerID="021d2982334ec8519e0a1e415c1689e1026b33aee9c13f27661eec58e4784c7c" Sep 6 00:22:10.185981 containerd[1598]: time="2025-09-06T00:22:10.185743533Z" level=info msg="RemoveContainer for \"021d2982334ec8519e0a1e415c1689e1026b33aee9c13f27661eec58e4784c7c\"" Sep 6 00:22:10.193768 containerd[1598]: time="2025-09-06T00:22:10.193608594Z" level=info msg="RemoveContainer for \"021d2982334ec8519e0a1e415c1689e1026b33aee9c13f27661eec58e4784c7c\" returns successfully" Sep 6 00:22:10.201151 kubelet[2795]: I0906 00:22:10.201083 2795 scope.go:117] "RemoveContainer" containerID="7868790d6d6be19f5228363992f54f8a8903cf7da84d89412240b9e5acc2d152" Sep 6 00:22:10.207612 containerd[1598]: time="2025-09-06T00:22:10.207353050Z" level=info msg="RemoveContainer for \"7868790d6d6be19f5228363992f54f8a8903cf7da84d89412240b9e5acc2d152\"" Sep 6 00:22:10.214114 containerd[1598]: time="2025-09-06T00:22:10.213906574Z" level=info msg="RemoveContainer for \"7868790d6d6be19f5228363992f54f8a8903cf7da84d89412240b9e5acc2d152\" returns successfully" Sep 6 00:22:10.214988 kubelet[2795]: I0906 00:22:10.214793 2795 scope.go:117] "RemoveContainer" containerID="dc473666165dc0e69c5cf112b4b9ff518e24246d0b6a76a4cde1a5ee01d72cd2" Sep 6 00:22:10.219736 containerd[1598]: time="2025-09-06T00:22:10.219459845Z" level=info msg="RemoveContainer for \"dc473666165dc0e69c5cf112b4b9ff518e24246d0b6a76a4cde1a5ee01d72cd2\"" Sep 6 00:22:10.228618 containerd[1598]: time="2025-09-06T00:22:10.228568442Z" level=info msg="RemoveContainer for \"dc473666165dc0e69c5cf112b4b9ff518e24246d0b6a76a4cde1a5ee01d72cd2\" returns successfully" Sep 6 00:22:10.229210 kubelet[2795]: I0906 00:22:10.229038 2795 scope.go:117] "RemoveContainer" containerID="382f84517302f78fa712ba84e85a19623392cb8afbdf1d5c296971e71701df4a" Sep 6 00:22:10.231704 containerd[1598]: time="2025-09-06T00:22:10.231249956Z" level=info msg="RemoveContainer for \"382f84517302f78fa712ba84e85a19623392cb8afbdf1d5c296971e71701df4a\"" Sep 6 00:22:10.245826 containerd[1598]: time="2025-09-06T00:22:10.245783822Z" level=info msg="RemoveContainer for \"382f84517302f78fa712ba84e85a19623392cb8afbdf1d5c296971e71701df4a\" returns successfully" Sep 6 00:22:10.246324 kubelet[2795]: I0906 00:22:10.246212 2795 scope.go:117] "RemoveContainer" containerID="6e66f9c3cd295ca10e152dc008a53aa9ae6d1714067fa3fec2aefcebf1d04007" Sep 6 00:22:10.247353 containerd[1598]: time="2025-09-06T00:22:10.247323322Z" level=info msg="RemoveContainer for \"6e66f9c3cd295ca10e152dc008a53aa9ae6d1714067fa3fec2aefcebf1d04007\"" Sep 6 00:22:10.252319 containerd[1598]: time="2025-09-06T00:22:10.252273705Z" level=info msg="RemoveContainer for \"6e66f9c3cd295ca10e152dc008a53aa9ae6d1714067fa3fec2aefcebf1d04007\" returns successfully" Sep 6 00:22:10.252875 kubelet[2795]: I0906 00:22:10.252733 2795 scope.go:117] "RemoveContainer" containerID="021d2982334ec8519e0a1e415c1689e1026b33aee9c13f27661eec58e4784c7c" Sep 6 00:22:10.253211 containerd[1598]: time="2025-09-06T00:22:10.253153756Z" level=error 
msg="ContainerStatus for \"021d2982334ec8519e0a1e415c1689e1026b33aee9c13f27661eec58e4784c7c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"021d2982334ec8519e0a1e415c1689e1026b33aee9c13f27661eec58e4784c7c\": not found" Sep 6 00:22:10.253537 kubelet[2795]: E0906 00:22:10.253392 2795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"021d2982334ec8519e0a1e415c1689e1026b33aee9c13f27661eec58e4784c7c\": not found" containerID="021d2982334ec8519e0a1e415c1689e1026b33aee9c13f27661eec58e4784c7c" Sep 6 00:22:10.253631 kubelet[2795]: I0906 00:22:10.253483 2795 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"021d2982334ec8519e0a1e415c1689e1026b33aee9c13f27661eec58e4784c7c"} err="failed to get container status \"021d2982334ec8519e0a1e415c1689e1026b33aee9c13f27661eec58e4784c7c\": rpc error: code = NotFound desc = an error occurred when try to find container \"021d2982334ec8519e0a1e415c1689e1026b33aee9c13f27661eec58e4784c7c\": not found" Sep 6 00:22:10.253669 kubelet[2795]: I0906 00:22:10.253628 2795 scope.go:117] "RemoveContainer" containerID="7868790d6d6be19f5228363992f54f8a8903cf7da84d89412240b9e5acc2d152" Sep 6 00:22:10.254004 containerd[1598]: time="2025-09-06T00:22:10.253880206Z" level=error msg="ContainerStatus for \"7868790d6d6be19f5228363992f54f8a8903cf7da84d89412240b9e5acc2d152\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7868790d6d6be19f5228363992f54f8a8903cf7da84d89412240b9e5acc2d152\": not found" Sep 6 00:22:10.254366 kubelet[2795]: E0906 00:22:10.254244 2795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7868790d6d6be19f5228363992f54f8a8903cf7da84d89412240b9e5acc2d152\": not found" containerID="7868790d6d6be19f5228363992f54f8a8903cf7da84d89412240b9e5acc2d152" Sep 6 00:22:10.254366 kubelet[2795]: I0906 00:22:10.254271 2795 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7868790d6d6be19f5228363992f54f8a8903cf7da84d89412240b9e5acc2d152"} err="failed to get container status \"7868790d6d6be19f5228363992f54f8a8903cf7da84d89412240b9e5acc2d152\": rpc error: code = NotFound desc = an error occurred when try to find container \"7868790d6d6be19f5228363992f54f8a8903cf7da84d89412240b9e5acc2d152\": not found" Sep 6 00:22:10.254366 kubelet[2795]: I0906 00:22:10.254294 2795 scope.go:117] "RemoveContainer" containerID="dc473666165dc0e69c5cf112b4b9ff518e24246d0b6a76a4cde1a5ee01d72cd2" Sep 6 00:22:10.254533 containerd[1598]: time="2025-09-06T00:22:10.254493214Z" level=error msg="ContainerStatus for \"dc473666165dc0e69c5cf112b4b9ff518e24246d0b6a76a4cde1a5ee01d72cd2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"dc473666165dc0e69c5cf112b4b9ff518e24246d0b6a76a4cde1a5ee01d72cd2\": not found" Sep 6 00:22:10.254840 kubelet[2795]: E0906 00:22:10.254704 2795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"dc473666165dc0e69c5cf112b4b9ff518e24246d0b6a76a4cde1a5ee01d72cd2\": not found" containerID="dc473666165dc0e69c5cf112b4b9ff518e24246d0b6a76a4cde1a5ee01d72cd2" Sep 6 00:22:10.254840 kubelet[2795]: I0906 00:22:10.254727 2795 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"dc473666165dc0e69c5cf112b4b9ff518e24246d0b6a76a4cde1a5ee01d72cd2"} err="failed to get container status \"dc473666165dc0e69c5cf112b4b9ff518e24246d0b6a76a4cde1a5ee01d72cd2\": rpc error: code = NotFound desc = an error occurred when try to find container \"dc473666165dc0e69c5cf112b4b9ff518e24246d0b6a76a4cde1a5ee01d72cd2\": not found" Sep 6 00:22:10.254840 kubelet[2795]: I0906 00:22:10.254741 2795 scope.go:117] "RemoveContainer" containerID="382f84517302f78fa712ba84e85a19623392cb8afbdf1d5c296971e71701df4a" Sep 6 00:22:10.255056 containerd[1598]: time="2025-09-06T00:22:10.254895379Z" level=error msg="ContainerStatus for \"382f84517302f78fa712ba84e85a19623392cb8afbdf1d5c296971e71701df4a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"382f84517302f78fa712ba84e85a19623392cb8afbdf1d5c296971e71701df4a\": not found" Sep 6 00:22:10.255319 kubelet[2795]: E0906 00:22:10.255138 2795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"382f84517302f78fa712ba84e85a19623392cb8afbdf1d5c296971e71701df4a\": not found" containerID="382f84517302f78fa712ba84e85a19623392cb8afbdf1d5c296971e71701df4a" Sep 6 00:22:10.255319 kubelet[2795]: I0906 00:22:10.255222 2795 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"382f84517302f78fa712ba84e85a19623392cb8afbdf1d5c296971e71701df4a"} err="failed to get container status \"382f84517302f78fa712ba84e85a19623392cb8afbdf1d5c296971e71701df4a\": rpc error: code = NotFound desc = an error occurred when try to find container \"382f84517302f78fa712ba84e85a19623392cb8afbdf1d5c296971e71701df4a\": not found" Sep 6 00:22:10.255319 kubelet[2795]: I0906 00:22:10.255239 2795 scope.go:117] "RemoveContainer" containerID="6e66f9c3cd295ca10e152dc008a53aa9ae6d1714067fa3fec2aefcebf1d04007" Sep 6 00:22:10.255568 containerd[1598]: time="2025-09-06T00:22:10.255456706Z" level=error msg="ContainerStatus for \"6e66f9c3cd295ca10e152dc008a53aa9ae6d1714067fa3fec2aefcebf1d04007\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6e66f9c3cd295ca10e152dc008a53aa9ae6d1714067fa3fec2aefcebf1d04007\": not found" Sep 6 00:22:10.255785 kubelet[2795]: E0906 00:22:10.255661 2795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6e66f9c3cd295ca10e152dc008a53aa9ae6d1714067fa3fec2aefcebf1d04007\": not found" containerID="6e66f9c3cd295ca10e152dc008a53aa9ae6d1714067fa3fec2aefcebf1d04007" Sep 6 00:22:10.255785 kubelet[2795]: I0906 00:22:10.255687 2795 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6e66f9c3cd295ca10e152dc008a53aa9ae6d1714067fa3fec2aefcebf1d04007"} err="failed to get container status \"6e66f9c3cd295ca10e152dc008a53aa9ae6d1714067fa3fec2aefcebf1d04007\": rpc error: code = NotFound desc = an error occurred when try to find container \"6e66f9c3cd295ca10e152dc008a53aa9ae6d1714067fa3fec2aefcebf1d04007\": not found" Sep 6 00:22:10.255785 kubelet[2795]: I0906 00:22:10.255701 2795 scope.go:117] "RemoveContainer" containerID="dd114369e60b34d3cb6b57c77d0a846bb1c2f055dd068dcd99c002adcf2e1f6a" Sep 6 00:22:10.258220 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8654e49ce706187b8e79bbf5b270020bf90197153a0a757fb28b67b39fae63b5-rootfs.mount: Deactivated successfully. 
Sep 6 00:22:10.258819 systemd[1]: var-lib-kubelet-pods-fc515cd1\x2d4852\x2d4280\x2d952d\x2dfcf70beef69a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dtxqn7.mount: Deactivated successfully. Sep 6 00:22:10.259438 containerd[1598]: time="2025-09-06T00:22:10.259339716Z" level=info msg="RemoveContainer for \"dd114369e60b34d3cb6b57c77d0a846bb1c2f055dd068dcd99c002adcf2e1f6a\"" Sep 6 00:22:10.260111 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1d49fe611e6de1eb18bc193b88a91790685289eacfae8c05054382edf28887f1-rootfs.mount: Deactivated successfully. Sep 6 00:22:10.260326 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1d49fe611e6de1eb18bc193b88a91790685289eacfae8c05054382edf28887f1-shm.mount: Deactivated successfully. Sep 6 00:22:10.260409 systemd[1]: var-lib-kubelet-pods-957ee464\x2daf96\x2d4828\x2d822e\x2df95cfbd5e80a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dp92h2.mount: Deactivated successfully. Sep 6 00:22:10.260521 systemd[1]: var-lib-kubelet-pods-957ee464\x2daf96\x2d4828\x2d822e\x2df95cfbd5e80a-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 6 00:22:10.260604 systemd[1]: var-lib-kubelet-pods-957ee464\x2daf96\x2d4828\x2d822e\x2df95cfbd5e80a-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 6 00:22:10.266843 containerd[1598]: time="2025-09-06T00:22:10.266801491Z" level=info msg="RemoveContainer for \"dd114369e60b34d3cb6b57c77d0a846bb1c2f055dd068dcd99c002adcf2e1f6a\" returns successfully" Sep 6 00:22:10.267302 kubelet[2795]: I0906 00:22:10.267265 2795 scope.go:117] "RemoveContainer" containerID="dd114369e60b34d3cb6b57c77d0a846bb1c2f055dd068dcd99c002adcf2e1f6a" Sep 6 00:22:10.267683 containerd[1598]: time="2025-09-06T00:22:10.267629302Z" level=error msg="ContainerStatus for \"dd114369e60b34d3cb6b57c77d0a846bb1c2f055dd068dcd99c002adcf2e1f6a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"dd114369e60b34d3cb6b57c77d0a846bb1c2f055dd068dcd99c002adcf2e1f6a\": not found" Sep 6 00:22:10.267910 kubelet[2795]: E0906 00:22:10.267853 2795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"dd114369e60b34d3cb6b57c77d0a846bb1c2f055dd068dcd99c002adcf2e1f6a\": not found" containerID="dd114369e60b34d3cb6b57c77d0a846bb1c2f055dd068dcd99c002adcf2e1f6a" Sep 6 00:22:10.267910 kubelet[2795]: I0906 00:22:10.267887 2795 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"dd114369e60b34d3cb6b57c77d0a846bb1c2f055dd068dcd99c002adcf2e1f6a"} err="failed to get container status \"dd114369e60b34d3cb6b57c77d0a846bb1c2f055dd068dcd99c002adcf2e1f6a\": rpc error: code = NotFound desc = an error occurred when try to find container \"dd114369e60b34d3cb6b57c77d0a846bb1c2f055dd068dcd99c002adcf2e1f6a\": not found" Sep 6 00:22:10.504558 kubelet[2795]: I0906 00:22:10.503585 2795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="957ee464-af96-4828-822e-f95cfbd5e80a" path="/var/lib/kubelet/pods/957ee464-af96-4828-822e-f95cfbd5e80a/volumes" Sep 6 00:22:10.504558 kubelet[2795]: I0906 00:22:10.504223 2795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc515cd1-4852-4280-952d-fcf70beef69a" path="/var/lib/kubelet/pods/fc515cd1-4852-4280-952d-fcf70beef69a/volumes" Sep 6 00:22:11.344809 sshd[4363]: pam_unix(sshd:session): session closed for user core Sep 6 00:22:11.349362 
systemd[1]: sshd@19-91.98.90.164:22-139.178.68.195:59450.service: Deactivated successfully. Sep 6 00:22:11.354368 systemd-logind[1579]: Session 20 logged out. Waiting for processes to exit. Sep 6 00:22:11.355162 systemd[1]: session-20.scope: Deactivated successfully. Sep 6 00:22:11.358099 systemd-logind[1579]: Removed session 20. Sep 6 00:22:11.512823 systemd[1]: Started sshd@20-91.98.90.164:22-139.178.68.195:53030.service - OpenSSH per-connection server daemon (139.178.68.195:53030). Sep 6 00:22:12.513018 sshd[4529]: Accepted publickey for core from 139.178.68.195 port 53030 ssh2: RSA SHA256:jxc91lYC6jGmo2vsfpcbx31/qXJlPFNhK53iVaWpnSg Sep 6 00:22:12.515851 sshd[4529]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 6 00:22:12.529634 systemd-logind[1579]: New session 21 of user core. Sep 6 00:22:12.534725 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 6 00:22:13.656724 kubelet[2795]: E0906 00:22:13.656658 2795 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 6 00:22:14.131890 kubelet[2795]: E0906 00:22:14.131838 2795 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="957ee464-af96-4828-822e-f95cfbd5e80a" containerName="mount-cgroup" Sep 6 00:22:14.131890 kubelet[2795]: E0906 00:22:14.131874 2795 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="957ee464-af96-4828-822e-f95cfbd5e80a" containerName="clean-cilium-state" Sep 6 00:22:14.131890 kubelet[2795]: E0906 00:22:14.131884 2795 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="957ee464-af96-4828-822e-f95cfbd5e80a" containerName="apply-sysctl-overwrites" Sep 6 00:22:14.131890 kubelet[2795]: E0906 00:22:14.131890 2795 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="fc515cd1-4852-4280-952d-fcf70beef69a" containerName="cilium-operator" Sep 6 00:22:14.131890 kubelet[2795]: E0906 00:22:14.131895 2795 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="957ee464-af96-4828-822e-f95cfbd5e80a" containerName="mount-bpf-fs" Sep 6 00:22:14.131890 kubelet[2795]: E0906 00:22:14.131901 2795 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="957ee464-af96-4828-822e-f95cfbd5e80a" containerName="cilium-agent" Sep 6 00:22:14.132219 kubelet[2795]: I0906 00:22:14.131923 2795 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc515cd1-4852-4280-952d-fcf70beef69a" containerName="cilium-operator" Sep 6 00:22:14.132219 kubelet[2795]: I0906 00:22:14.131983 2795 memory_manager.go:354] "RemoveStaleState removing state" podUID="957ee464-af96-4828-822e-f95cfbd5e80a" containerName="cilium-agent" Sep 6 00:22:14.143655 kubelet[2795]: W0906 00:22:14.142643 2795 reflector.go:561] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-4081-3-5-n-5ce2877658" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081-3-5-n-5ce2877658' and this object Sep 6 00:22:14.143655 kubelet[2795]: E0906 00:22:14.142691 2795 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-clustermesh\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:ci-4081-3-5-n-5ce2877658\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no 
relationship found between node 'ci-4081-3-5-n-5ce2877658' and this object" logger="UnhandledError" Sep 6 00:22:14.143655 kubelet[2795]: W0906 00:22:14.142739 2795 reflector.go:561] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:ci-4081-3-5-n-5ce2877658" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081-3-5-n-5ce2877658' and this object Sep 6 00:22:14.143655 kubelet[2795]: E0906 00:22:14.142751 2795 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-ipsec-keys\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-ipsec-keys\" is forbidden: User \"system:node:ci-4081-3-5-n-5ce2877658\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4081-3-5-n-5ce2877658' and this object" logger="UnhandledError" Sep 6 00:22:14.143655 kubelet[2795]: W0906 00:22:14.142786 2795 reflector.go:561] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-4081-3-5-n-5ce2877658" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081-3-5-n-5ce2877658' and this object Sep 6 00:22:14.143890 kubelet[2795]: E0906 00:22:14.142798 2795 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"hubble-server-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"hubble-server-certs\" is forbidden: User \"system:node:ci-4081-3-5-n-5ce2877658\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4081-3-5-n-5ce2877658' and this object" logger="UnhandledError" Sep 6 00:22:14.143890 kubelet[2795]: W0906 00:22:14.142869 2795 reflector.go:561] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-4081-3-5-n-5ce2877658" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081-3-5-n-5ce2877658' and this object Sep 6 00:22:14.143890 kubelet[2795]: E0906 00:22:14.142882 2795 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cilium-config\" is forbidden: User \"system:node:ci-4081-3-5-n-5ce2877658\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4081-3-5-n-5ce2877658' and this object" logger="UnhandledError" Sep 6 00:22:14.178205 kubelet[2795]: I0906 00:22:14.176481 2795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/df46d157-c6d1-47df-892f-2a4a6200f733-host-proc-sys-net\") pod \"cilium-8t7jh\" (UID: \"df46d157-c6d1-47df-892f-2a4a6200f733\") " pod="kube-system/cilium-8t7jh" Sep 6 00:22:14.178622 kubelet[2795]: I0906 00:22:14.178498 2795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/df46d157-c6d1-47df-892f-2a4a6200f733-cilium-cgroup\") pod \"cilium-8t7jh\" (UID: \"df46d157-c6d1-47df-892f-2a4a6200f733\") " pod="kube-system/cilium-8t7jh" Sep 6 00:22:14.178622 kubelet[2795]: I0906 
00:22:14.178531 2795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/df46d157-c6d1-47df-892f-2a4a6200f733-cilium-ipsec-secrets\") pod \"cilium-8t7jh\" (UID: \"df46d157-c6d1-47df-892f-2a4a6200f733\") " pod="kube-system/cilium-8t7jh" Sep 6 00:22:14.178622 kubelet[2795]: I0906 00:22:14.178580 2795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/df46d157-c6d1-47df-892f-2a4a6200f733-lib-modules\") pod \"cilium-8t7jh\" (UID: \"df46d157-c6d1-47df-892f-2a4a6200f733\") " pod="kube-system/cilium-8t7jh" Sep 6 00:22:14.178622 kubelet[2795]: I0906 00:22:14.178596 2795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/df46d157-c6d1-47df-892f-2a4a6200f733-host-proc-sys-kernel\") pod \"cilium-8t7jh\" (UID: \"df46d157-c6d1-47df-892f-2a4a6200f733\") " pod="kube-system/cilium-8t7jh" Sep 6 00:22:14.178960 kubelet[2795]: I0906 00:22:14.178797 2795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njdq5\" (UniqueName: \"kubernetes.io/projected/df46d157-c6d1-47df-892f-2a4a6200f733-kube-api-access-njdq5\") pod \"cilium-8t7jh\" (UID: \"df46d157-c6d1-47df-892f-2a4a6200f733\") " pod="kube-system/cilium-8t7jh" Sep 6 00:22:14.178960 kubelet[2795]: I0906 00:22:14.178838 2795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/df46d157-c6d1-47df-892f-2a4a6200f733-bpf-maps\") pod \"cilium-8t7jh\" (UID: \"df46d157-c6d1-47df-892f-2a4a6200f733\") " pod="kube-system/cilium-8t7jh" Sep 6 00:22:14.178960 kubelet[2795]: I0906 00:22:14.178877 2795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/df46d157-c6d1-47df-892f-2a4a6200f733-hostproc\") pod \"cilium-8t7jh\" (UID: \"df46d157-c6d1-47df-892f-2a4a6200f733\") " pod="kube-system/cilium-8t7jh" Sep 6 00:22:14.178960 kubelet[2795]: I0906 00:22:14.178891 2795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/df46d157-c6d1-47df-892f-2a4a6200f733-cni-path\") pod \"cilium-8t7jh\" (UID: \"df46d157-c6d1-47df-892f-2a4a6200f733\") " pod="kube-system/cilium-8t7jh" Sep 6 00:22:14.179131 kubelet[2795]: I0906 00:22:14.178910 2795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/df46d157-c6d1-47df-892f-2a4a6200f733-hubble-tls\") pod \"cilium-8t7jh\" (UID: \"df46d157-c6d1-47df-892f-2a4a6200f733\") " pod="kube-system/cilium-8t7jh" Sep 6 00:22:14.179537 kubelet[2795]: I0906 00:22:14.179175 2795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/df46d157-c6d1-47df-892f-2a4a6200f733-etc-cni-netd\") pod \"cilium-8t7jh\" (UID: \"df46d157-c6d1-47df-892f-2a4a6200f733\") " pod="kube-system/cilium-8t7jh" Sep 6 00:22:14.179537 kubelet[2795]: I0906 00:22:14.179198 2795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/df46d157-c6d1-47df-892f-2a4a6200f733-cilium-config-path\") pod \"cilium-8t7jh\" (UID: \"df46d157-c6d1-47df-892f-2a4a6200f733\") " pod="kube-system/cilium-8t7jh" Sep 6 00:22:14.179537 kubelet[2795]: I0906 00:22:14.179395 2795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/df46d157-c6d1-47df-892f-2a4a6200f733-clustermesh-secrets\") pod \"cilium-8t7jh\" (UID: \"df46d157-c6d1-47df-892f-2a4a6200f733\") " pod="kube-system/cilium-8t7jh" Sep 6 00:22:14.180385 kubelet[2795]: I0906 00:22:14.179907 2795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/df46d157-c6d1-47df-892f-2a4a6200f733-cilium-run\") pod \"cilium-8t7jh\" (UID: \"df46d157-c6d1-47df-892f-2a4a6200f733\") " pod="kube-system/cilium-8t7jh" Sep 6 00:22:14.180385 kubelet[2795]: I0906 00:22:14.180318 2795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/df46d157-c6d1-47df-892f-2a4a6200f733-xtables-lock\") pod \"cilium-8t7jh\" (UID: \"df46d157-c6d1-47df-892f-2a4a6200f733\") " pod="kube-system/cilium-8t7jh" Sep 6 00:22:14.312542 sshd[4529]: pam_unix(sshd:session): session closed for user core Sep 6 00:22:14.316100 systemd[1]: sshd@20-91.98.90.164:22-139.178.68.195:53030.service: Deactivated successfully. Sep 6 00:22:14.321367 systemd-logind[1579]: Session 21 logged out. Waiting for processes to exit. Sep 6 00:22:14.322239 systemd[1]: session-21.scope: Deactivated successfully. Sep 6 00:22:14.323637 systemd-logind[1579]: Removed session 21. Sep 6 00:22:14.484255 systemd[1]: Started sshd@21-91.98.90.164:22-139.178.68.195:53042.service - OpenSSH per-connection server daemon (139.178.68.195:53042). Sep 6 00:22:15.282516 kubelet[2795]: E0906 00:22:15.282349 2795 configmap.go:193] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition Sep 6 00:22:15.282516 kubelet[2795]: E0906 00:22:15.282484 2795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/df46d157-c6d1-47df-892f-2a4a6200f733-cilium-config-path podName:df46d157-c6d1-47df-892f-2a4a6200f733 nodeName:}" failed. No retries permitted until 2025-09-06 00:22:15.782459074 +0000 UTC m=+207.388105833 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/df46d157-c6d1-47df-892f-2a4a6200f733-cilium-config-path") pod "cilium-8t7jh" (UID: "df46d157-c6d1-47df-892f-2a4a6200f733") : failed to sync configmap cache: timed out waiting for the condition Sep 6 00:22:15.283640 kubelet[2795]: E0906 00:22:15.282647 2795 secret.go:189] Couldn't get secret kube-system/cilium-ipsec-keys: failed to sync secret cache: timed out waiting for the condition Sep 6 00:22:15.283640 kubelet[2795]: E0906 00:22:15.282692 2795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/df46d157-c6d1-47df-892f-2a4a6200f733-cilium-ipsec-secrets podName:df46d157-c6d1-47df-892f-2a4a6200f733 nodeName:}" failed. No retries permitted until 2025-09-06 00:22:15.782680397 +0000 UTC m=+207.388327156 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cilium-ipsec-secrets" (UniqueName: "kubernetes.io/secret/df46d157-c6d1-47df-892f-2a4a6200f733-cilium-ipsec-secrets") pod "cilium-8t7jh" (UID: "df46d157-c6d1-47df-892f-2a4a6200f733") : failed to sync secret cache: timed out waiting for the condition Sep 6 00:22:15.283640 kubelet[2795]: E0906 00:22:15.282878 2795 projected.go:263] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition Sep 6 00:22:15.283640 kubelet[2795]: E0906 00:22:15.282893 2795 projected.go:194] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-8t7jh: failed to sync secret cache: timed out waiting for the condition Sep 6 00:22:15.283640 kubelet[2795]: E0906 00:22:15.282927 2795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/df46d157-c6d1-47df-892f-2a4a6200f733-hubble-tls podName:df46d157-c6d1-47df-892f-2a4a6200f733 nodeName:}" failed. No retries permitted until 2025-09-06 00:22:15.782915879 +0000 UTC m=+207.388562638 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/df46d157-c6d1-47df-892f-2a4a6200f733-hubble-tls") pod "cilium-8t7jh" (UID: "df46d157-c6d1-47df-892f-2a4a6200f733") : failed to sync secret cache: timed out waiting for the condition Sep 6 00:22:15.478452 sshd[4543]: Accepted publickey for core from 139.178.68.195 port 53042 ssh2: RSA SHA256:jxc91lYC6jGmo2vsfpcbx31/qXJlPFNhK53iVaWpnSg Sep 6 00:22:15.480868 sshd[4543]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 6 00:22:15.486500 systemd-logind[1579]: New session 22 of user core. Sep 6 00:22:15.494227 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 6 00:22:15.940052 containerd[1598]: time="2025-09-06T00:22:15.939968461Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8t7jh,Uid:df46d157-c6d1-47df-892f-2a4a6200f733,Namespace:kube-system,Attempt:0,}" Sep 6 00:22:15.965160 containerd[1598]: time="2025-09-06T00:22:15.964761800Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:22:15.965160 containerd[1598]: time="2025-09-06T00:22:15.964834320Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:22:15.965160 containerd[1598]: time="2025-09-06T00:22:15.964853681Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:22:15.965160 containerd[1598]: time="2025-09-06T00:22:15.965012523Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:22:16.015153 containerd[1598]: time="2025-09-06T00:22:16.015088763Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8t7jh,Uid:df46d157-c6d1-47df-892f-2a4a6200f733,Namespace:kube-system,Attempt:0,} returns sandbox id \"2fef50bfbea9d355d5b3ab5ce1d596a0c845aede93b2e310102359f71595566b\"" Sep 6 00:22:16.019166 containerd[1598]: time="2025-09-06T00:22:16.019119131Z" level=info msg="CreateContainer within sandbox \"2fef50bfbea9d355d5b3ab5ce1d596a0c845aede93b2e310102359f71595566b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 6 00:22:16.036507 containerd[1598]: time="2025-09-06T00:22:16.036351615Z" level=info msg="CreateContainer within sandbox \"2fef50bfbea9d355d5b3ab5ce1d596a0c845aede93b2e310102359f71595566b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"4d1c0acac21dd7c79c0ed4d369aa5cde2ec940bff5b731a82e2cd7b9c6c0bbfc\"" Sep 6 00:22:16.038909 containerd[1598]: time="2025-09-06T00:22:16.037928874Z" level=info msg="StartContainer for \"4d1c0acac21dd7c79c0ed4d369aa5cde2ec940bff5b731a82e2cd7b9c6c0bbfc\"" Sep 6 00:22:16.087641 containerd[1598]: time="2025-09-06T00:22:16.087592664Z" level=info msg="StartContainer for \"4d1c0acac21dd7c79c0ed4d369aa5cde2ec940bff5b731a82e2cd7b9c6c0bbfc\" returns successfully" Sep 6 00:22:16.124958 containerd[1598]: time="2025-09-06T00:22:16.124742185Z" level=info msg="shim disconnected" id=4d1c0acac21dd7c79c0ed4d369aa5cde2ec940bff5b731a82e2cd7b9c6c0bbfc namespace=k8s.io Sep 6 00:22:16.124958 containerd[1598]: time="2025-09-06T00:22:16.124796786Z" level=warning msg="cleaning up after shim disconnected" id=4d1c0acac21dd7c79c0ed4d369aa5cde2ec940bff5b731a82e2cd7b9c6c0bbfc namespace=k8s.io Sep 6 00:22:16.124958 containerd[1598]: time="2025-09-06T00:22:16.124806066Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 6 00:22:16.171836 sshd[4543]: pam_unix(sshd:session): session closed for user core Sep 6 00:22:16.176614 systemd[1]: sshd@21-91.98.90.164:22-139.178.68.195:53042.service: Deactivated successfully. Sep 6 00:22:16.180667 systemd[1]: session-22.scope: Deactivated successfully. Sep 6 00:22:16.183222 systemd-logind[1579]: Session 22 logged out. Waiting for processes to exit. Sep 6 00:22:16.184575 systemd-logind[1579]: Removed session 22. 
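The "is forbidden ... no relationship found between node ... and this object" reflector warnings and the later "failed to sync secret cache" MountVolume.SetUp retries above come from the node authorizer: the kubelet is only allowed to read a Secret or ConfigMap once a pod on its node actually references it, so access fails briefly while cilium-8t7jh is being bound, and the volume manager retries after the logged durationBeforeRetry of 500ms. A minimal client-go sketch of the read the kubelet effectively needs to succeed (the kubeconfig path is an assumption; the real kubelet uses its own credentials and an informer-backed reflector rather than a polling loop):

    package main

    import (
        "context"
        "fmt"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumption: node-scoped credentials in a kubeconfig at this path.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/kubelet/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        // Keep retrying the read, roughly mirroring the volume manager's
        // 500ms durationBeforeRetry seen in the log, until the node authorizer
        // grants access (i.e. the pod referencing the secret is bound to this node).
        for {
            _, err := cs.CoreV1().Secrets("kube-system").Get(
                context.TODO(), "cilium-ipsec-keys", metav1.GetOptions{})
            if err == nil {
                fmt.Println("cilium-ipsec-keys readable; MountVolume.SetUp can proceed")
                return
            }
            fmt.Println("not yet readable:", err)
            time.Sleep(500 * time.Millisecond)
        }
    }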
Sep 6 00:22:16.211565 containerd[1598]: time="2025-09-06T00:22:16.210329162Z" level=info msg="CreateContainer within sandbox \"2fef50bfbea9d355d5b3ab5ce1d596a0c845aede93b2e310102359f71595566b\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 6 00:22:16.222253 containerd[1598]: time="2025-09-06T00:22:16.222191343Z" level=info msg="CreateContainer within sandbox \"2fef50bfbea9d355d5b3ab5ce1d596a0c845aede93b2e310102359f71595566b\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"75e544fe33ff822a7b81207d6c03e7d838948d6d9e1c4d4e828afb5e5cac5757\"" Sep 6 00:22:16.223063 containerd[1598]: time="2025-09-06T00:22:16.223031193Z" level=info msg="StartContainer for \"75e544fe33ff822a7b81207d6c03e7d838948d6d9e1c4d4e828afb5e5cac5757\"" Sep 6 00:22:16.281551 containerd[1598]: time="2025-09-06T00:22:16.281383046Z" level=info msg="StartContainer for \"75e544fe33ff822a7b81207d6c03e7d838948d6d9e1c4d4e828afb5e5cac5757\" returns successfully" Sep 6 00:22:16.316695 containerd[1598]: time="2025-09-06T00:22:16.316614504Z" level=info msg="shim disconnected" id=75e544fe33ff822a7b81207d6c03e7d838948d6d9e1c4d4e828afb5e5cac5757 namespace=k8s.io Sep 6 00:22:16.316695 containerd[1598]: time="2025-09-06T00:22:16.316678025Z" level=warning msg="cleaning up after shim disconnected" id=75e544fe33ff822a7b81207d6c03e7d838948d6d9e1c4d4e828afb5e5cac5757 namespace=k8s.io Sep 6 00:22:16.316695 containerd[1598]: time="2025-09-06T00:22:16.316687065Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 6 00:22:16.346019 systemd[1]: Started sshd@22-91.98.90.164:22-139.178.68.195:53052.service - OpenSSH per-connection server daemon (139.178.68.195:53052). Sep 6 00:22:17.217246 containerd[1598]: time="2025-09-06T00:22:17.216632643Z" level=info msg="CreateContainer within sandbox \"2fef50bfbea9d355d5b3ab5ce1d596a0c845aede93b2e310102359f71595566b\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 6 00:22:17.236703 containerd[1598]: time="2025-09-06T00:22:17.236563357Z" level=info msg="CreateContainer within sandbox \"2fef50bfbea9d355d5b3ab5ce1d596a0c845aede93b2e310102359f71595566b\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b64b9d5711c1722cd1f1a730f5eb9276c5309bde8851cb3a30533d277c26419d\"" Sep 6 00:22:17.238630 containerd[1598]: time="2025-09-06T00:22:17.238414979Z" level=info msg="StartContainer for \"b64b9d5711c1722cd1f1a730f5eb9276c5309bde8851cb3a30533d277c26419d\"" Sep 6 00:22:17.306152 containerd[1598]: time="2025-09-06T00:22:17.306103213Z" level=info msg="StartContainer for \"b64b9d5711c1722cd1f1a730f5eb9276c5309bde8851cb3a30533d277c26419d\" returns successfully" Sep 6 00:22:17.330304 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b64b9d5711c1722cd1f1a730f5eb9276c5309bde8851cb3a30533d277c26419d-rootfs.mount: Deactivated successfully. 
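The containerd entries above repeat the CRI lifecycle the kubelet drives for each Cilium init container: RunPodSandbox once for the pod, then CreateContainer and StartContainer per container (mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, ...), with the shim torn down when each short-lived container exits. A sketch of that sequence against containerd's CRI socket, under stated assumptions (socket path and image reference are not recorded in the log and are hypothetical here):

    package main

    import (
        "context"
        "fmt"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        // Assumption: containerd's CRI socket at the default path.
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        defer conn.Close()
        rt := runtimeapi.NewRuntimeServiceClient(conn)
        ctx := context.TODO()

        // 1. RunPodSandbox: the pod sandbox (cilium-8t7jh in the log).
        sbCfg := &runtimeapi.PodSandboxConfig{
            Metadata: &runtimeapi.PodSandboxMetadata{
                Name:      "cilium-8t7jh",
                Namespace: "kube-system",
                Uid:       "df46d157-c6d1-47df-892f-2a4a6200f733",
                Attempt:   0,
            },
        }
        sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sbCfg})
        if err != nil {
            panic(err)
        }

        // 2. CreateContainer inside that sandbox (here the first init container).
        created, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
            PodSandboxId: sb.PodSandboxId,
            Config: &runtimeapi.ContainerConfig{
                Metadata: &runtimeapi.ContainerMetadata{Name: "mount-cgroup", Attempt: 0},
                // Assumption: image reference; the log does not record it.
                Image:   &runtimeapi.ImageSpec{Image: "quay.io/cilium/cilium:v1.16"},
                Command: []string{"sh", "-c", "true"},
            },
            SandboxConfig: sbCfg,
        })
        if err != nil {
            panic(err)
        }

        // 3. StartContainer, the point where containerd logs
        //    "StartContainer ... returns successfully".
        _, err = rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: created.ContainerId})
        if err != nil {
            panic(err)
        }
        fmt.Println("sandbox", sb.PodSandboxId, "container", created.ContainerId, "started")
    }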
Sep 6 00:22:17.337244 containerd[1598]: time="2025-09-06T00:22:17.337010136Z" level=info msg="shim disconnected" id=b64b9d5711c1722cd1f1a730f5eb9276c5309bde8851cb3a30533d277c26419d namespace=k8s.io Sep 6 00:22:17.337244 containerd[1598]: time="2025-09-06T00:22:17.337073376Z" level=warning msg="cleaning up after shim disconnected" id=b64b9d5711c1722cd1f1a730f5eb9276c5309bde8851cb3a30533d277c26419d namespace=k8s.io Sep 6 00:22:17.337244 containerd[1598]: time="2025-09-06T00:22:17.337083497Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 6 00:22:17.342546 sshd[4717]: Accepted publickey for core from 139.178.68.195 port 53052 ssh2: RSA SHA256:jxc91lYC6jGmo2vsfpcbx31/qXJlPFNhK53iVaWpnSg Sep 6 00:22:17.344127 sshd[4717]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 6 00:22:17.349900 systemd-logind[1579]: New session 23 of user core. Sep 6 00:22:17.353826 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 6 00:22:18.226750 containerd[1598]: time="2025-09-06T00:22:18.226700461Z" level=info msg="CreateContainer within sandbox \"2fef50bfbea9d355d5b3ab5ce1d596a0c845aede93b2e310102359f71595566b\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 6 00:22:18.255350 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1905401076.mount: Deactivated successfully. Sep 6 00:22:18.258864 containerd[1598]: time="2025-09-06T00:22:18.258816593Z" level=info msg="CreateContainer within sandbox \"2fef50bfbea9d355d5b3ab5ce1d596a0c845aede93b2e310102359f71595566b\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"910f700a6ad1165c555d785e2a0a45d5d3da6ff41343045d46e726c6d6322a54\"" Sep 6 00:22:18.262075 containerd[1598]: time="2025-09-06T00:22:18.262029111Z" level=info msg="StartContainer for \"910f700a6ad1165c555d785e2a0a45d5d3da6ff41343045d46e726c6d6322a54\"" Sep 6 00:22:18.361902 containerd[1598]: time="2025-09-06T00:22:18.361860628Z" level=info msg="StartContainer for \"910f700a6ad1165c555d785e2a0a45d5d3da6ff41343045d46e726c6d6322a54\" returns successfully" Sep 6 00:22:18.383368 containerd[1598]: time="2025-09-06T00:22:18.383290436Z" level=info msg="shim disconnected" id=910f700a6ad1165c555d785e2a0a45d5d3da6ff41343045d46e726c6d6322a54 namespace=k8s.io Sep 6 00:22:18.383735 containerd[1598]: time="2025-09-06T00:22:18.383444278Z" level=warning msg="cleaning up after shim disconnected" id=910f700a6ad1165c555d785e2a0a45d5d3da6ff41343045d46e726c6d6322a54 namespace=k8s.io Sep 6 00:22:18.383735 containerd[1598]: time="2025-09-06T00:22:18.383459318Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 6 00:22:18.658587 kubelet[2795]: E0906 00:22:18.658236 2795 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 6 00:22:19.237789 containerd[1598]: time="2025-09-06T00:22:19.237308179Z" level=info msg="CreateContainer within sandbox \"2fef50bfbea9d355d5b3ab5ce1d596a0c845aede93b2e310102359f71595566b\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 6 00:22:19.246180 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-910f700a6ad1165c555d785e2a0a45d5d3da6ff41343045d46e726c6d6322a54-rootfs.mount: Deactivated successfully. 
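The recurring kubelet error "Container runtime network not ready ... cni plugin not initialized" reflects the runtime's NetworkReady condition staying false until the Cilium agent writes its CNI configuration. One way to observe that condition directly is the CRI Status call; a short sketch, assuming the same default containerd socket path as above:

    package main

    import (
        "context"
        "fmt"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        // Assumption: containerd's CRI socket at the default path.
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        defer conn.Close()
        rt := runtimeapi.NewRuntimeServiceClient(conn)

        resp, err := rt.Status(context.TODO(), &runtimeapi.StatusRequest{})
        if err != nil {
            panic(err)
        }
        for _, cond := range resp.Status.Conditions {
            // NetworkReady flips to true once a CNI config is present; until then
            // the kubelet keeps logging "cni plugin not initialized".
            fmt.Printf("%s=%v reason=%q message=%q\n",
                cond.Type, cond.Status, cond.Reason, cond.Message)
        }
    }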
Sep 6 00:22:19.279827 containerd[1598]: time="2025-09-06T00:22:19.279773825Z" level=info msg="CreateContainer within sandbox \"2fef50bfbea9d355d5b3ab5ce1d596a0c845aede93b2e310102359f71595566b\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"6fac89158c7b04e42394f0f197b68419d47116d32a96aa091765f088c6d11b72\"" Sep 6 00:22:19.284400 containerd[1598]: time="2025-09-06T00:22:19.281018600Z" level=info msg="StartContainer for \"6fac89158c7b04e42394f0f197b68419d47116d32a96aa091765f088c6d11b72\"" Sep 6 00:22:19.282852 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3370111521.mount: Deactivated successfully. Sep 6 00:22:19.353140 containerd[1598]: time="2025-09-06T00:22:19.353091985Z" level=info msg="StartContainer for \"6fac89158c7b04e42394f0f197b68419d47116d32a96aa091765f088c6d11b72\" returns successfully" Sep 6 00:22:19.662245 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Sep 6 00:22:20.272790 kubelet[2795]: I0906 00:22:20.271066 2795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-8t7jh" podStartSLOduration=6.271041735 podStartE2EDuration="6.271041735s" podCreationTimestamp="2025-09-06 00:22:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 00:22:20.27064169 +0000 UTC m=+211.876288489" watchObservedRunningTime="2025-09-06 00:22:20.271041735 +0000 UTC m=+211.876688494" Sep 6 00:22:22.595916 systemd-networkd[1245]: lxc_health: Link UP Sep 6 00:22:22.600868 systemd-networkd[1245]: lxc_health: Gained carrier Sep 6 00:22:22.693307 kubelet[2795]: I0906 00:22:22.693263 2795 setters.go:600] "Node became not ready" node="ci-4081-3-5-n-5ce2877658" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-06T00:22:22Z","lastTransitionTime":"2025-09-06T00:22:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Sep 6 00:22:24.211613 systemd-networkd[1245]: lxc_health: Gained IPv6LL Sep 6 00:22:24.291540 systemd[1]: run-containerd-runc-k8s.io-6fac89158c7b04e42394f0f197b68419d47116d32a96aa091765f088c6d11b72-runc.sXgQiS.mount: Deactivated successfully. Sep 6 00:22:28.621259 systemd[1]: run-containerd-runc-k8s.io-6fac89158c7b04e42394f0f197b68419d47116d32a96aa091765f088c6d11b72-runc.pxjAK0.mount: Deactivated successfully. Sep 6 00:22:28.677140 kubelet[2795]: E0906 00:22:28.676960 2795 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:54746->127.0.0.1:42977: write tcp 127.0.0.1:54746->127.0.0.1:42977: write: broken pipe Sep 6 00:22:28.839837 sshd[4717]: pam_unix(sshd:session): session closed for user core Sep 6 00:22:28.844649 systemd-logind[1579]: Session 23 logged out. Waiting for processes to exit. Sep 6 00:22:28.845450 systemd[1]: sshd@22-91.98.90.164:22-139.178.68.195:53052.service: Deactivated successfully. Sep 6 00:22:28.850441 systemd[1]: session-23.scope: Deactivated successfully. Sep 6 00:22:28.851652 systemd-logind[1579]: Removed session 23.
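The setters.go entry above records the node's Ready condition flipping to False with reason KubeletNotReady while the CNI is still uninitialized; once the cilium-agent container is running and the lxc_health link comes up, the condition returns to True. A minimal client-go sketch for inspecting that condition for the node named in the log (the admin kubeconfig path is an assumption):

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumption: an admin kubeconfig at this path.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        node, err := cs.CoreV1().Nodes().Get(
            context.TODO(), "ci-4081-3-5-n-5ce2877658", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        for _, cond := range node.Status.Conditions {
            if cond.Type == corev1.NodeReady {
                // While the CNI is not initialized this reports status=False with
                // reason KubeletNotReady, matching the setters.go entry in the log.
                fmt.Printf("Ready=%s reason=%s message=%s\n",
                    cond.Status, cond.Reason, cond.Message)
            }
        }
    }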