Sep 5 23:50:08.898467 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Sep 5 23:50:08.898493 kernel: Linux version 6.6.103-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Sep 5 22:30:47 -00 2025
Sep 5 23:50:08.898503 kernel: KASLR enabled
Sep 5 23:50:08.898509 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II
Sep 5 23:50:08.898515 kernel: efi: SMBIOS 3.0=0x139ed0000 MEMATTR=0x1390c1018 ACPI 2.0=0x136760018 RNG=0x13676e918 MEMRESERVE=0x136b43d18
Sep 5 23:50:08.898521 kernel: random: crng init done
Sep 5 23:50:08.898528 kernel: ACPI: Early table checksum verification disabled
Sep 5 23:50:08.898533 kernel: ACPI: RSDP 0x0000000136760018 000024 (v02 BOCHS )
Sep 5 23:50:08.898540 kernel: ACPI: XSDT 0x000000013676FE98 00006C (v01 BOCHS BXPC 00000001 01000013)
Sep 5 23:50:08.898547 kernel: ACPI: FACP 0x000000013676FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 23:50:08.898554 kernel: ACPI: DSDT 0x0000000136767518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 23:50:08.898559 kernel: ACPI: APIC 0x000000013676FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 23:50:08.898565 kernel: ACPI: PPTT 0x000000013676FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 23:50:08.898571 kernel: ACPI: GTDT 0x000000013676D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 23:50:08.898579 kernel: ACPI: MCFG 0x000000013676FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 23:50:08.898586 kernel: ACPI: SPCR 0x000000013676E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 23:50:08.898595 kernel: ACPI: DBG2 0x000000013676E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 23:50:08.898602 kernel: ACPI: IORT 0x000000013676E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 23:50:08.898609 kernel: ACPI: BGRT 0x000000013676E798 000038 (v01 INTEL EDK2 00000002 01000013)
Sep 5 23:50:08.898615 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600
Sep 5 23:50:08.898635 kernel: NUMA: Failed to initialise from firmware
Sep 5 23:50:08.898642 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff]
Sep 5 23:50:08.898649 kernel: NUMA: NODE_DATA [mem 0x13966e800-0x139673fff]
Sep 5 23:50:08.898655 kernel: Zone ranges:
Sep 5 23:50:08.898661 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Sep 5 23:50:08.898670 kernel: DMA32 empty
Sep 5 23:50:08.898676 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff]
Sep 5 23:50:08.898682 kernel: Movable zone start for each node
Sep 5 23:50:08.898688 kernel: Early memory node ranges
Sep 5 23:50:08.898694 kernel: node 0: [mem 0x0000000040000000-0x000000013676ffff]
Sep 5 23:50:08.898701 kernel: node 0: [mem 0x0000000136770000-0x0000000136b3ffff]
Sep 5 23:50:08.898707 kernel: node 0: [mem 0x0000000136b40000-0x0000000139e1ffff]
Sep 5 23:50:08.898713 kernel: node 0: [mem 0x0000000139e20000-0x0000000139eaffff]
Sep 5 23:50:08.898719 kernel: node 0: [mem 0x0000000139eb0000-0x0000000139ebffff]
Sep 5 23:50:08.898726 kernel: node 0: [mem 0x0000000139ec0000-0x0000000139fdffff]
Sep 5 23:50:08.898732 kernel: node 0: [mem 0x0000000139fe0000-0x0000000139ffffff]
Sep 5 23:50:08.898738 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff]
Sep 5 23:50:08.898746 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
Sep 5 23:50:08.898752 kernel: psci: probing for conduit method from ACPI.
Sep 5 23:50:08.898759 kernel: psci: PSCIv1.1 detected in firmware.
Sep 5 23:50:08.898768 kernel: psci: Using standard PSCI v0.2 function IDs
Sep 5 23:50:08.898774 kernel: psci: Trusted OS migration not required
Sep 5 23:50:08.898781 kernel: psci: SMC Calling Convention v1.1
Sep 5 23:50:08.898789 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Sep 5 23:50:08.898796 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
Sep 5 23:50:08.898803 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
Sep 5 23:50:08.898810 kernel: pcpu-alloc: [0] 0 [0] 1
Sep 5 23:50:08.898817 kernel: Detected PIPT I-cache on CPU0
Sep 5 23:50:08.898824 kernel: CPU features: detected: GIC system register CPU interface
Sep 5 23:50:08.898830 kernel: CPU features: detected: Hardware dirty bit management
Sep 5 23:50:08.898837 kernel: CPU features: detected: Spectre-v4
Sep 5 23:50:08.898844 kernel: CPU features: detected: Spectre-BHB
Sep 5 23:50:08.898851 kernel: CPU features: kernel page table isolation forced ON by KASLR
Sep 5 23:50:08.898859 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Sep 5 23:50:08.898866 kernel: CPU features: detected: ARM erratum 1418040
Sep 5 23:50:08.898873 kernel: CPU features: detected: SSBS not fully self-synchronizing
Sep 5 23:50:08.898880 kernel: alternatives: applying boot alternatives
Sep 5 23:50:08.898888 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=ac831c89fe9ee7829b7371dadfb138f8d0e2b31ae3a5a920e0eba13bbab016c3
Sep 5 23:50:08.898896 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 5 23:50:08.898903 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 5 23:50:08.898910 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 5 23:50:08.898916 kernel: Fallback order for Node 0: 0
Sep 5 23:50:08.898923 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1008000
Sep 5 23:50:08.898930 kernel: Policy zone: Normal
Sep 5 23:50:08.898939 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 5 23:50:08.898946 kernel: software IO TLB: area num 2.
Sep 5 23:50:08.898953 kernel: software IO TLB: mapped [mem 0x00000000fbfff000-0x00000000fffff000] (64MB)
Sep 5 23:50:08.898960 kernel: Memory: 3882804K/4096000K available (10304K kernel code, 2186K rwdata, 8108K rodata, 39424K init, 897K bss, 213196K reserved, 0K cma-reserved)
Sep 5 23:50:08.898967 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Sep 5 23:50:08.898974 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 5 23:50:08.898981 kernel: rcu: RCU event tracing is enabled.
Sep 5 23:50:08.898988 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Sep 5 23:50:08.898995 kernel: Trampoline variant of Tasks RCU enabled.
Sep 5 23:50:08.899002 kernel: Tracing variant of Tasks RCU enabled.
Sep 5 23:50:08.899009 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 5 23:50:08.899017 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Sep 5 23:50:08.899024 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Sep 5 23:50:08.899031 kernel: GICv3: 256 SPIs implemented
Sep 5 23:50:08.899038 kernel: GICv3: 0 Extended SPIs implemented
Sep 5 23:50:08.899044 kernel: Root IRQ handler: gic_handle_irq
Sep 5 23:50:08.899051 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Sep 5 23:50:08.899058 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Sep 5 23:50:08.899065 kernel: ITS [mem 0x08080000-0x0809ffff]
Sep 5 23:50:08.899072 kernel: ITS@0x0000000008080000: allocated 8192 Devices @1000c0000 (indirect, esz 8, psz 64K, shr 1)
Sep 5 23:50:08.899079 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @1000d0000 (flat, esz 8, psz 64K, shr 1)
Sep 5 23:50:08.899086 kernel: GICv3: using LPI property table @0x00000001000e0000
Sep 5 23:50:08.899092 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000001000f0000
Sep 5 23:50:08.899101 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 5 23:50:08.899107 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 5 23:50:08.899114 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Sep 5 23:50:08.899135 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Sep 5 23:50:08.899142 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Sep 5 23:50:08.899149 kernel: Console: colour dummy device 80x25
Sep 5 23:50:08.899156 kernel: ACPI: Core revision 20230628
Sep 5 23:50:08.899163 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Sep 5 23:50:08.899170 kernel: pid_max: default: 32768 minimum: 301
Sep 5 23:50:08.899177 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Sep 5 23:50:08.899186 kernel: landlock: Up and running.
Sep 5 23:50:08.899193 kernel: SELinux: Initializing.
Sep 5 23:50:08.899200 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 5 23:50:08.899206 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 5 23:50:08.899213 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 5 23:50:08.899221 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 5 23:50:08.899227 kernel: rcu: Hierarchical SRCU implementation.
Sep 5 23:50:08.899234 kernel: rcu: Max phase no-delay instances is 400.
Sep 5 23:50:08.899241 kernel: Platform MSI: ITS@0x8080000 domain created
Sep 5 23:50:08.899250 kernel: PCI/MSI: ITS@0x8080000 domain created
Sep 5 23:50:08.899257 kernel: Remapping and enabling EFI services.
Sep 5 23:50:08.899264 kernel: smp: Bringing up secondary CPUs ...
Sep 5 23:50:08.899271 kernel: Detected PIPT I-cache on CPU1
Sep 5 23:50:08.899278 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Sep 5 23:50:08.899284 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100100000
Sep 5 23:50:08.899291 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 5 23:50:08.899298 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Sep 5 23:50:08.899305 kernel: smp: Brought up 1 node, 2 CPUs
Sep 5 23:50:08.899311 kernel: SMP: Total of 2 processors activated.
Sep 5 23:50:08.899320 kernel: CPU features: detected: 32-bit EL0 Support
Sep 5 23:50:08.899327 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Sep 5 23:50:08.899339 kernel: CPU features: detected: Common not Private translations
Sep 5 23:50:08.899348 kernel: CPU features: detected: CRC32 instructions
Sep 5 23:50:08.899355 kernel: CPU features: detected: Enhanced Virtualization Traps
Sep 5 23:50:08.899363 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Sep 5 23:50:08.899370 kernel: CPU features: detected: LSE atomic instructions
Sep 5 23:50:08.899377 kernel: CPU features: detected: Privileged Access Never
Sep 5 23:50:08.899385 kernel: CPU features: detected: RAS Extension Support
Sep 5 23:50:08.899394 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Sep 5 23:50:08.899401 kernel: CPU: All CPU(s) started at EL1
Sep 5 23:50:08.899408 kernel: alternatives: applying system-wide alternatives
Sep 5 23:50:08.899415 kernel: devtmpfs: initialized
Sep 5 23:50:08.899423 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 5 23:50:08.899430 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Sep 5 23:50:08.899437 kernel: pinctrl core: initialized pinctrl subsystem
Sep 5 23:50:08.899446 kernel: SMBIOS 3.0.0 present.
Sep 5 23:50:08.899453 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017
Sep 5 23:50:08.899461 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 5 23:50:08.899468 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Sep 5 23:50:08.899476 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Sep 5 23:50:08.899483 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Sep 5 23:50:08.899490 kernel: audit: initializing netlink subsys (disabled)
Sep 5 23:50:08.899498 kernel: audit: type=2000 audit(0.012:1): state=initialized audit_enabled=0 res=1
Sep 5 23:50:08.899505 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 5 23:50:08.899514 kernel: cpuidle: using governor menu
Sep 5 23:50:08.899522 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Sep 5 23:50:08.899529 kernel: ASID allocator initialised with 32768 entries
Sep 5 23:50:08.899536 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 5 23:50:08.899544 kernel: Serial: AMBA PL011 UART driver
Sep 5 23:50:08.899551 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Sep 5 23:50:08.899558 kernel: Modules: 0 pages in range for non-PLT usage
Sep 5 23:50:08.899565 kernel: Modules: 509008 pages in range for PLT usage
Sep 5 23:50:08.899596 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 5 23:50:08.899606 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Sep 5 23:50:08.899613 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Sep 5 23:50:08.899664 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Sep 5 23:50:08.899673 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 5 23:50:08.899680 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Sep 5 23:50:08.899687 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Sep 5 23:50:08.899695 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Sep 5 23:50:08.899702 kernel: ACPI: Added _OSI(Module Device)
Sep 5 23:50:08.899709 kernel: ACPI: Added _OSI(Processor Device)
Sep 5 23:50:08.899719 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 5 23:50:08.899727 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 5 23:50:08.899734 kernel: ACPI: Interpreter enabled
Sep 5 23:50:08.899741 kernel: ACPI: Using GIC for interrupt routing
Sep 5 23:50:08.899748 kernel: ACPI: MCFG table detected, 1 entries
Sep 5 23:50:08.899756 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Sep 5 23:50:08.899763 kernel: printk: console [ttyAMA0] enabled
Sep 5 23:50:08.899770 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 5 23:50:08.899932 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 5 23:50:08.900013 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Sep 5 23:50:08.900083 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Sep 5 23:50:08.900174 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Sep 5 23:50:08.900241 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Sep 5 23:50:08.900251 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Sep 5 23:50:08.900259 kernel: PCI host bridge to bus 0000:00
Sep 5 23:50:08.900333 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Sep 5 23:50:08.900400 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Sep 5 23:50:08.900459 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Sep 5 23:50:08.900518 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 5 23:50:08.900602 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Sep 5 23:50:08.900708 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000
Sep 5 23:50:08.900779 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x11289000-0x11289fff]
Sep 5 23:50:08.900877 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000600000-0x8000603fff 64bit pref]
Sep 5 23:50:08.900954 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Sep 5 23:50:08.901023 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x11288000-0x11288fff]
Sep 5 23:50:08.901108 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Sep 5 23:50:08.901242 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x11287000-0x11287fff]
Sep 5 23:50:08.901322 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Sep 5 23:50:08.901390 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x11286000-0x11286fff]
Sep 5 23:50:08.901470 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Sep 5 23:50:08.901536 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x11285000-0x11285fff]
Sep 5 23:50:08.901611 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Sep 5 23:50:08.901731 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x11284000-0x11284fff]
Sep 5 23:50:08.901812 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Sep 5 23:50:08.901885 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x11283000-0x11283fff]
Sep 5 23:50:08.901962 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Sep 5 23:50:08.902029 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x11282000-0x11282fff]
Sep 5 23:50:08.902104 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Sep 5 23:50:08.902194 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x11281000-0x11281fff]
Sep 5 23:50:08.902277 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Sep 5 23:50:08.902344 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x11280000-0x11280fff]
Sep 5 23:50:08.902422 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002
Sep 5 23:50:08.902494 kernel: pci 0000:00:04.0: reg 0x10: [io 0x0000-0x0007]
Sep 5 23:50:08.902571 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Sep 5 23:50:08.902660 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x11000000-0x11000fff]
Sep 5 23:50:08.902732 kernel: pci 0000:01:00.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Sep 5 23:50:08.902802 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Sep 5 23:50:08.902884 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Sep 5 23:50:08.902955 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x10e00000-0x10e03fff 64bit]
Sep 5 23:50:08.903033 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Sep 5 23:50:08.903104 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x10c00000-0x10c00fff]
Sep 5 23:50:08.903191 kernel: pci 0000:03:00.0: reg 0x20: [mem 0x8000100000-0x8000103fff 64bit pref]
Sep 5 23:50:08.903271 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Sep 5 23:50:08.903341 kernel: pci 0000:04:00.0: reg 0x20: [mem 0x8000200000-0x8000203fff 64bit pref]
Sep 5 23:50:08.903423 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Sep 5 23:50:08.903491 kernel: pci 0000:05:00.0: reg 0x20: [mem 0x8000300000-0x8000303fff 64bit pref]
Sep 5 23:50:08.903570 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Sep 5 23:50:08.903670 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x10600000-0x10600fff]
Sep 5 23:50:08.903742 kernel: pci 0000:06:00.0: reg 0x20: [mem 0x8000400000-0x8000403fff 64bit pref]
Sep 5 23:50:08.903825 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Sep 5 23:50:08.903899 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x10400000-0x10400fff]
Sep 5 23:50:08.903969 kernel: pci 0000:07:00.0: reg 0x20: [mem 0x8000500000-0x8000503fff 64bit pref]
Sep 5 23:50:08.904037 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Sep 5 23:50:08.904106 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
Sep 5 23:50:08.904209 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000
Sep 5 23:50:08.904280 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000
Sep 5 23:50:08.904355 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
Sep 5 23:50:08.904424 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
Sep 5 23:50:08.904510 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000
Sep 5 23:50:08.904583 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Sep 5 23:50:08.904711 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000
Sep 5 23:50:08.904783 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000
Sep 5 23:50:08.904854 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Sep 5 23:50:08.904922 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000
Sep 5 23:50:08.904994 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
Sep 5 23:50:08.905064 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Sep 5 23:50:08.905217 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000
Sep 5 23:50:08.905293 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x000fffff] to [bus 05] add_size 200000 add_align 100000
Sep 5 23:50:08.905361 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Sep 5 23:50:08.905425 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000
Sep 5 23:50:08.905489 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000
Sep 5 23:50:08.905561 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Sep 5 23:50:08.905645 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000
Sep 5 23:50:08.905718 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000
Sep 5 23:50:08.905789 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Sep 5 23:50:08.905854 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000
Sep 5 23:50:08.905918 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000
Sep 5 23:50:08.905989 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Sep 5 23:50:08.906055 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000
Sep 5 23:50:08.906138 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000
Sep 5 23:50:08.906211 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x10000000-0x101fffff]
Sep 5 23:50:08.906279 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x8000000000-0x80001fffff 64bit pref]
Sep 5 23:50:08.906347 kernel: pci 0000:00:02.1: BAR 14: assigned [mem 0x10200000-0x103fffff]
Sep 5 23:50:08.906414 kernel: pci 0000:00:02.1: BAR 15: assigned [mem 0x8000200000-0x80003fffff 64bit pref]
Sep 5 23:50:08.906482 kernel: pci 0000:00:02.2: BAR 14: assigned [mem 0x10400000-0x105fffff]
Sep 5 23:50:08.906548 kernel: pci 0000:00:02.2: BAR 15: assigned [mem 0x8000400000-0x80005fffff 64bit pref]
Sep 5 23:50:08.906661 kernel: pci 0000:00:02.3: BAR 14: assigned [mem 0x10600000-0x107fffff]
Sep 5 23:50:08.906746 kernel: pci 0000:00:02.3: BAR 15: assigned [mem 0x8000600000-0x80007fffff 64bit pref]
Sep 5 23:50:08.906818 kernel: pci 0000:00:02.4: BAR 14: assigned [mem 0x10800000-0x109fffff]
Sep 5 23:50:08.906886 kernel: pci 0000:00:02.4: BAR 15: assigned [mem 0x8000800000-0x80009fffff 64bit pref]
Sep 5 23:50:08.906953 kernel: pci 0000:00:02.5: BAR 14: assigned [mem 0x10a00000-0x10bfffff]
Sep 5 23:50:08.907019 kernel: pci 0000:00:02.5: BAR 15: assigned [mem 0x8000a00000-0x8000bfffff 64bit pref]
Sep 5 23:50:08.907092 kernel: pci 0000:00:02.6: BAR 14: assigned [mem 0x10c00000-0x10dfffff]
Sep 5 23:50:08.907202 kernel: pci 0000:00:02.6: BAR 15: assigned [mem 0x8000c00000-0x8000dfffff 64bit pref]
Sep 5 23:50:08.907274 kernel: pci 0000:00:02.7: BAR 14: assigned [mem 0x10e00000-0x10ffffff]
Sep 5 23:50:08.907341 kernel: pci 0000:00:02.7: BAR 15: assigned [mem 0x8000e00000-0x8000ffffff 64bit pref]
Sep 5 23:50:08.907409 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x11000000-0x111fffff]
Sep 5 23:50:08.907476 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x8001000000-0x80011fffff 64bit pref]
Sep 5 23:50:08.907547 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8001200000-0x8001203fff 64bit pref]
Sep 5 23:50:08.907632 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x11200000-0x11200fff]
Sep 5 23:50:08.907706 kernel: pci 0000:00:02.0: BAR 0: assigned [mem 0x11201000-0x11201fff]
Sep 5 23:50:08.907772 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Sep 5 23:50:08.907839 kernel: pci 0000:00:02.1: BAR 0: assigned [mem 0x11202000-0x11202fff]
Sep 5 23:50:08.907904 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Sep 5 23:50:08.907970 kernel: pci 0000:00:02.2: BAR 0: assigned [mem 0x11203000-0x11203fff]
Sep 5 23:50:08.908035 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Sep 5 23:50:08.908101 kernel: pci 0000:00:02.3: BAR 0: assigned [mem 0x11204000-0x11204fff]
Sep 5 23:50:08.908300 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Sep 5 23:50:08.908374 kernel: pci 0000:00:02.4: BAR 0: assigned [mem 0x11205000-0x11205fff]
Sep 5 23:50:08.908441 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Sep 5 23:50:08.908506 kernel: pci 0000:00:02.5: BAR 0: assigned [mem 0x11206000-0x11206fff]
Sep 5 23:50:08.908569 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Sep 5 23:50:08.908651 kernel: pci 0000:00:02.6: BAR 0: assigned [mem 0x11207000-0x11207fff]
Sep 5 23:50:08.908720 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Sep 5 23:50:08.908789 kernel: pci 0000:00:02.7: BAR 0: assigned [mem 0x11208000-0x11208fff]
Sep 5 23:50:08.908861 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Sep 5 23:50:08.908926 kernel: pci 0000:00:03.0: BAR 0: assigned [mem 0x11209000-0x11209fff]
Sep 5 23:50:08.908989 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x9000-0x9fff]
Sep 5 23:50:08.909059 kernel: pci 0000:00:04.0: BAR 0: assigned [io 0xa000-0xa007]
Sep 5 23:50:08.909226 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x10000000-0x1007ffff pref]
Sep 5 23:50:08.909304 kernel: pci 0000:01:00.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Sep 5 23:50:08.909371 kernel: pci 0000:01:00.0: BAR 1: assigned [mem 0x10080000-0x10080fff]
Sep 5 23:50:08.909436 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Sep 5 23:50:08.909505 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Sep 5 23:50:08.909570 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff]
Sep 5 23:50:08.909691 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]
Sep 5 23:50:08.909779 kernel: pci 0000:02:00.0: BAR 0: assigned [mem 0x10200000-0x10203fff 64bit]
Sep 5 23:50:08.909850 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Sep 5 23:50:08.909921 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Sep 5 23:50:08.909985 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff]
Sep 5 23:50:08.910048 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]
Sep 5 23:50:08.910152 kernel: pci 0000:03:00.0: BAR 4: assigned [mem 0x8000400000-0x8000403fff 64bit pref]
Sep 5 23:50:08.910231 kernel: pci 0000:03:00.0: BAR 1: assigned [mem 0x10400000-0x10400fff]
Sep 5 23:50:08.910298 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Sep 5 23:50:08.910364 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Sep 5 23:50:08.910434 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff]
Sep 5 23:50:08.910499 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]
Sep 5 23:50:08.910571 kernel: pci 0000:04:00.0: BAR 4: assigned [mem 0x8000600000-0x8000603fff 64bit pref]
Sep 5 23:50:08.910654 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Sep 5 23:50:08.910723 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Sep 5 23:50:08.910789 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff]
Sep 5 23:50:08.910853 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]
Sep 5 23:50:08.910929 kernel: pci 0000:05:00.0: BAR 4: assigned [mem 0x8000800000-0x8000803fff 64bit pref]
Sep 5 23:50:08.911002 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Sep 5 23:50:08.911069 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Sep 5 23:50:08.911191 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]
Sep 5 23:50:08.911260 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]
Sep 5 23:50:08.911334 kernel: pci 0000:06:00.0: BAR 4: assigned [mem 0x8000a00000-0x8000a03fff 64bit pref]
Sep 5 23:50:08.911401 kernel: pci 0000:06:00.0: BAR 1: assigned [mem 0x10a00000-0x10a00fff]
Sep 5 23:50:08.911466 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Sep 5 23:50:08.911531 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Sep 5 23:50:08.911599 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff]
Sep 5 23:50:08.911721 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]
Sep 5 23:50:08.911801 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x10c00000-0x10c7ffff pref]
Sep 5 23:50:08.911872 kernel: pci 0000:07:00.0: BAR 4: assigned [mem 0x8000c00000-0x8000c03fff 64bit pref]
Sep 5 23:50:08.911944 kernel: pci 0000:07:00.0: BAR 1: assigned [mem 0x10c80000-0x10c80fff]
Sep 5 23:50:08.912011 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Sep 5 23:50:08.912076 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Sep 5 23:50:08.914299 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff]
Sep 5 23:50:08.914411 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]
Sep 5 23:50:08.914483 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Sep 5 23:50:08.914551 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Sep 5 23:50:08.914617 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff]
Sep 5 23:50:08.914761 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]
Sep 5 23:50:08.914836 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Sep 5 23:50:08.914906 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff]
Sep 5 23:50:08.914978 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff]
Sep 5 23:50:08.915055 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref]
Sep 5 23:50:08.915433 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Sep 5 23:50:08.915511 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Sep 5 23:50:08.915571 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Sep 5 23:50:08.915679 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Sep 5 23:50:08.915747 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff]
Sep 5 23:50:08.915808 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref]
Sep 5 23:50:08.915888 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff]
Sep 5 23:50:08.915950 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff]
Sep 5 23:50:08.916011 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref]
Sep 5 23:50:08.916087 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff]
Sep 5 23:50:08.916235 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff]
Sep 5 23:50:08.916298 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref]
Sep 5 23:50:08.916377 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff]
Sep 5 23:50:08.916438 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff]
Sep 5 23:50:08.916501 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref]
Sep 5 23:50:08.916590 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff]
Sep 5 23:50:08.916721 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff]
Sep 5 23:50:08.916787 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref]
Sep 5 23:50:08.916860 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff]
Sep 5 23:50:08.916927 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff]
Sep 5 23:50:08.916987 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref]
Sep 5 23:50:08.917060 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff]
Sep 5 23:50:08.917537 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff]
Sep 5 23:50:08.918284 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref]
Sep 5 23:50:08.918387 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff]
Sep 5 23:50:08.918456 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff]
Sep 5 23:50:08.918521 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref]
Sep 5 23:50:08.918599 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff]
Sep 5 23:50:08.918731 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff]
Sep 5 23:50:08.918804 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref]
Sep 5 23:50:08.918821 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Sep 5 23:50:08.918832 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Sep 5 23:50:08.918840 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Sep 5 23:50:08.918848 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Sep 5 23:50:08.918856 kernel: iommu: Default domain type: Translated
Sep 5 23:50:08.918864 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Sep 5 23:50:08.918872 kernel: efivars: Registered efivars operations
Sep 5 23:50:08.918879 kernel: vgaarb: loaded
Sep 5 23:50:08.918887 kernel: clocksource: Switched to clocksource arch_sys_counter
Sep 5 23:50:08.918897 kernel: VFS: Disk quotas dquot_6.6.0
Sep 5 23:50:08.918905 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 5 23:50:08.918913 kernel: pnp: PnP ACPI init
Sep 5 23:50:08.919009 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Sep 5 23:50:08.919021 kernel: pnp: PnP ACPI: found 1 devices
Sep 5 23:50:08.919029 kernel: NET: Registered PF_INET protocol family
Sep 5 23:50:08.919037 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 5 23:50:08.919045 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 5 23:50:08.919055 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 5 23:50:08.919063 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 5 23:50:08.919071 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 5 23:50:08.919079 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 5 23:50:08.919087 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 5 23:50:08.919095 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 5 23:50:08.919103 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 5 23:50:08.920364 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002)
Sep 5 23:50:08.920391 kernel: PCI: CLS 0 bytes, default 64
Sep 5 23:50:08.920409 kernel: kvm [1]: HYP mode not available
Sep 5 23:50:08.920419 kernel: Initialise system trusted keyrings
Sep 5 23:50:08.920428 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 5 23:50:08.920436 kernel: Key type asymmetric registered
Sep 5 23:50:08.920444 kernel: Asymmetric key parser 'x509' registered
Sep 5 23:50:08.920452 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Sep 5 23:50:08.920459 kernel: io scheduler mq-deadline registered
Sep 5 23:50:08.920467 kernel: io scheduler kyber registered
Sep 5 23:50:08.920475 kernel: io scheduler bfq registered
Sep 5 23:50:08.920486 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Sep 5 23:50:08.920568 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50
Sep 5 23:50:08.920655 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50
Sep 5 23:50:08.920728 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Sep 5 23:50:08.920802 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51
Sep 5 23:50:08.920869 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51
Sep 5 23:50:08.920937 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Sep 5 23:50:08.921016 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 52
Sep 5 23:50:08.921084 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52
Sep 5 23:50:08.921316 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Sep 5 23:50:08.921398 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53
Sep 5 23:50:08.921470 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53
Sep 5 23:50:08.921544
kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 5 23:50:08.921614 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54 Sep 5 23:50:08.921750 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54 Sep 5 23:50:08.921818 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 5 23:50:08.921888 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55 Sep 5 23:50:08.921954 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55 Sep 5 23:50:08.922023 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 5 23:50:08.922099 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56 Sep 5 23:50:08.922214 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56 Sep 5 23:50:08.922281 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 5 23:50:08.922351 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57 Sep 5 23:50:08.922417 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57 Sep 5 23:50:08.922490 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 5 23:50:08.922501 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38 Sep 5 23:50:08.922588 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58 Sep 5 23:50:08.924865 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58 Sep 5 23:50:08.924991 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 5 23:50:08.925004 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Sep 5 23:50:08.925012 kernel: ACPI: 
button: Power Button [PWRB] Sep 5 23:50:08.925022 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Sep 5 23:50:08.925110 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002) Sep 5 23:50:08.925246 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002) Sep 5 23:50:08.925259 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 5 23:50:08.925268 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Sep 5 23:50:08.925339 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001) Sep 5 23:50:08.925350 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A Sep 5 23:50:08.925358 kernel: thunder_xcv, ver 1.0 Sep 5 23:50:08.925365 kernel: thunder_bgx, ver 1.0 Sep 5 23:50:08.925376 kernel: nicpf, ver 1.0 Sep 5 23:50:08.925384 kernel: nicvf, ver 1.0 Sep 5 23:50:08.925469 kernel: rtc-efi rtc-efi.0: registered as rtc0 Sep 5 23:50:08.925542 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-05T23:50:08 UTC (1757116208) Sep 5 23:50:08.925554 kernel: hid: raw HID events driver (C) Jiri Kosina Sep 5 23:50:08.925562 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Sep 5 23:50:08.925570 kernel: watchdog: Delayed init of the lockup detector failed: -19 Sep 5 23:50:08.925579 kernel: watchdog: Hard watchdog permanently disabled Sep 5 23:50:08.925593 kernel: NET: Registered PF_INET6 protocol family Sep 5 23:50:08.925603 kernel: Segment Routing with IPv6 Sep 5 23:50:08.925613 kernel: In-situ OAM (IOAM) with IPv6 Sep 5 23:50:08.925636 kernel: NET: Registered PF_PACKET protocol family Sep 5 23:50:08.925644 kernel: Key type dns_resolver registered Sep 5 23:50:08.925652 kernel: registered taskstats version 1 Sep 5 23:50:08.925659 kernel: Loading compiled-in X.509 certificates Sep 5 23:50:08.925667 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.103-flatcar: 5b16e1dfa86dac534548885fd675b87757ff9e20' Sep 5 23:50:08.925675 kernel: Key type .fscrypt registered Sep 5 
23:50:08.925682 kernel: Key type fscrypt-provisioning registered Sep 5 23:50:08.925693 kernel: ima: No TPM chip found, activating TPM-bypass! Sep 5 23:50:08.925701 kernel: ima: Allocated hash algorithm: sha1 Sep 5 23:50:08.925709 kernel: ima: No architecture policies found Sep 5 23:50:08.925717 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Sep 5 23:50:08.925724 kernel: clk: Disabling unused clocks Sep 5 23:50:08.925732 kernel: Freeing unused kernel memory: 39424K Sep 5 23:50:08.925740 kernel: Run /init as init process Sep 5 23:50:08.925748 kernel: with arguments: Sep 5 23:50:08.925760 kernel: /init Sep 5 23:50:08.925769 kernel: with environment: Sep 5 23:50:08.925777 kernel: HOME=/ Sep 5 23:50:08.925786 kernel: TERM=linux Sep 5 23:50:08.925794 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 5 23:50:08.925804 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 5 23:50:08.925814 systemd[1]: Detected virtualization kvm. Sep 5 23:50:08.925823 systemd[1]: Detected architecture arm64. Sep 5 23:50:08.925833 systemd[1]: Running in initrd. Sep 5 23:50:08.925841 systemd[1]: No hostname configured, using default hostname. Sep 5 23:50:08.925849 systemd[1]: Hostname set to . Sep 5 23:50:08.925858 systemd[1]: Initializing machine ID from VM UUID. Sep 5 23:50:08.925866 systemd[1]: Queued start job for default target initrd.target. Sep 5 23:50:08.925875 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 5 23:50:08.925883 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Sep 5 23:50:08.925892 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 5 23:50:08.925903 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 5 23:50:08.925911 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 5 23:50:08.925920 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 5 23:50:08.925930 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 5 23:50:08.925939 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 5 23:50:08.925948 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 5 23:50:08.925956 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 5 23:50:08.925967 systemd[1]: Reached target paths.target - Path Units.
Sep 5 23:50:08.925975 systemd[1]: Reached target slices.target - Slice Units.
Sep 5 23:50:08.925983 systemd[1]: Reached target swap.target - Swaps.
Sep 5 23:50:08.925993 systemd[1]: Reached target timers.target - Timer Units.
Sep 5 23:50:08.926002 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 5 23:50:08.926011 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 5 23:50:08.926019 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 5 23:50:08.926028 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Sep 5 23:50:08.926038 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 5 23:50:08.926047 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 5 23:50:08.926055 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 5 23:50:08.926063 systemd[1]: Reached target sockets.target - Socket Units.
Sep 5 23:50:08.926072 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 5 23:50:08.926080 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 5 23:50:08.926089 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 5 23:50:08.926097 systemd[1]: Starting systemd-fsck-usr.service...
Sep 5 23:50:08.926105 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 5 23:50:08.926127 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 5 23:50:08.926136 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 5 23:50:08.926145 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 5 23:50:08.926180 systemd-journald[236]: Collecting audit messages is disabled.
Sep 5 23:50:08.926205 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 5 23:50:08.926213 systemd[1]: Finished systemd-fsck-usr.service.
Sep 5 23:50:08.926222 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 5 23:50:08.926232 systemd-journald[236]: Journal started
Sep 5 23:50:08.926255 systemd-journald[236]: Runtime Journal (/run/log/journal/6edfd1a5ddcc4b18be6b8bc689fe6809) is 8.0M, max 76.6M, 68.6M free.
Sep 5 23:50:08.920378 systemd-modules-load[237]: Inserted module 'overlay'
Sep 5 23:50:08.930734 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 5 23:50:08.938157 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 5 23:50:08.939714 systemd-modules-load[237]: Inserted module 'br_netfilter'
Sep 5 23:50:08.940375 kernel: Bridge firewalling registered
Sep 5 23:50:08.944494 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 5 23:50:08.948262 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 5 23:50:08.950183 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 5 23:50:08.951964 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 5 23:50:08.954878 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 5 23:50:08.963425 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 5 23:50:08.969360 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 5 23:50:08.970960 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 5 23:50:08.990271 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 5 23:50:09.000504 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 5 23:50:09.002502 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 5 23:50:09.004940 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 5 23:50:09.011952 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 5 23:50:09.019231 dracut-cmdline[269]: dracut-dracut-053
Sep 5 23:50:09.022094 dracut-cmdline[269]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=ac831c89fe9ee7829b7371dadfb138f8d0e2b31ae3a5a920e0eba13bbab016c3
Sep 5 23:50:09.057536 systemd-resolved[277]: Positive Trust Anchors:
Sep 5 23:50:09.057557 systemd-resolved[277]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 5 23:50:09.057591 systemd-resolved[277]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 5 23:50:09.064148 systemd-resolved[277]: Defaulting to hostname 'linux'.
Sep 5 23:50:09.066308 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 5 23:50:09.067022 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 5 23:50:09.139191 kernel: SCSI subsystem initialized
Sep 5 23:50:09.144166 kernel: Loading iSCSI transport class v2.0-870.
Sep 5 23:50:09.152169 kernel: iscsi: registered transport (tcp)
Sep 5 23:50:09.168201 kernel: iscsi: registered transport (qla4xxx)
Sep 5 23:50:09.168315 kernel: QLogic iSCSI HBA Driver
Sep 5 23:50:09.217050 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 5 23:50:09.225400 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 5 23:50:09.245308 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 5 23:50:09.245379 kernel: device-mapper: uevent: version 1.0.3
Sep 5 23:50:09.246375 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Sep 5 23:50:09.297176 kernel: raid6: neonx8 gen() 15641 MB/s
Sep 5 23:50:09.314184 kernel: raid6: neonx4 gen() 15591 MB/s
Sep 5 23:50:09.331158 kernel: raid6: neonx2 gen() 13120 MB/s
Sep 5 23:50:09.348172 kernel: raid6: neonx1 gen() 10450 MB/s
Sep 5 23:50:09.365187 kernel: raid6: int64x8 gen() 6927 MB/s
Sep 5 23:50:09.382178 kernel: raid6: int64x4 gen() 7296 MB/s
Sep 5 23:50:09.399164 kernel: raid6: int64x2 gen() 6101 MB/s
Sep 5 23:50:09.416164 kernel: raid6: int64x1 gen() 5037 MB/s
Sep 5 23:50:09.416204 kernel: raid6: using algorithm neonx8 gen() 15641 MB/s
Sep 5 23:50:09.433171 kernel: raid6: .... xor() 11822 MB/s, rmw enabled
Sep 5 23:50:09.433224 kernel: raid6: using neon recovery algorithm
Sep 5 23:50:09.438387 kernel: xor: measuring software checksum speed
Sep 5 23:50:09.438435 kernel: 8regs : 19783 MB/sec
Sep 5 23:50:09.439257 kernel: 32regs : 19655 MB/sec
Sep 5 23:50:09.439290 kernel: arm64_neon : 26247 MB/sec
Sep 5 23:50:09.439308 kernel: xor: using function: arm64_neon (26247 MB/sec)
Sep 5 23:50:09.491234 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 5 23:50:09.506452 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 5 23:50:09.514420 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 5 23:50:09.530413 systemd-udevd[455]: Using default interface naming scheme 'v255'.
Sep 5 23:50:09.534044 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 5 23:50:09.543874 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 5 23:50:09.561835 dracut-pre-trigger[464]: rd.md=0: removing MD RAID activation
Sep 5 23:50:09.600094 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 5 23:50:09.605347 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 5 23:50:09.658409 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 5 23:50:09.667714 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 5 23:50:09.691882 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 5 23:50:09.693850 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 5 23:50:09.695789 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 5 23:50:09.696870 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 5 23:50:09.709584 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 5 23:50:09.726428 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 5 23:50:09.766371 kernel: scsi host0: Virtio SCSI HBA
Sep 5 23:50:09.782427 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5
Sep 5 23:50:09.782519 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Sep 5 23:50:09.790140 kernel: ACPI: bus type USB registered
Sep 5 23:50:09.791136 kernel: usbcore: registered new interface driver usbfs
Sep 5 23:50:09.793133 kernel: usbcore: registered new interface driver hub
Sep 5 23:50:09.793175 kernel: usbcore: registered new device driver usb
Sep 5 23:50:09.794031 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 5 23:50:09.794198 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 5 23:50:09.801659 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 5 23:50:09.806431 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 5 23:50:09.807761 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 5 23:50:09.811871 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 5 23:50:09.826287 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 5 23:50:09.840156 kernel: sr 0:0:0:0: Power-on or device reset occurred
Sep 5 23:50:09.846502 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray
Sep 5 23:50:09.846786 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Sep 5 23:50:09.846799 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Sep 5 23:50:09.848312 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1
Sep 5 23:50:09.848495 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
Sep 5 23:50:09.850186 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Sep 5 23:50:09.850389 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2
Sep 5 23:50:09.851134 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed
Sep 5 23:50:09.852176 kernel: hub 1-0:1.0: USB hub found
Sep 5 23:50:09.852377 kernel: hub 1-0:1.0: 4 ports detected
Sep 5 23:50:09.854133 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Sep 5 23:50:09.854183 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Sep 5 23:50:09.855136 kernel: hub 2-0:1.0: USB hub found
Sep 5 23:50:09.855303 kernel: hub 2-0:1.0: 4 ports detected
Sep 5 23:50:09.856176 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 5 23:50:09.866331 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 5 23:50:09.883208 kernel: sd 0:0:0:1: Power-on or device reset occurred
Sep 5 23:50:09.883407 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB)
Sep 5 23:50:09.884860 kernel: sd 0:0:0:1: [sda] Write Protect is off
Sep 5 23:50:09.885094 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08
Sep 5 23:50:09.885212 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Sep 5 23:50:09.891183 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 5 23:50:09.891254 kernel: GPT:17805311 != 80003071
Sep 5 23:50:09.891265 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 5 23:50:09.893687 kernel: GPT:17805311 != 80003071
Sep 5 23:50:09.893768 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 5 23:50:09.893782 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 5 23:50:09.895215 kernel: sd 0:0:0:1: [sda] Attached SCSI disk
Sep 5 23:50:09.900675 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 5 23:50:09.940142 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (514)
Sep 5 23:50:09.942143 kernel: BTRFS: device fsid 045c118e-b098-46f0-884a-43665575c70e devid 1 transid 37 /dev/sda3 scanned by (udev-worker) (507)
Sep 5 23:50:09.954793 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Sep 5 23:50:09.961721 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Sep 5 23:50:09.971325 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Sep 5 23:50:09.978913 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
Sep 5 23:50:09.979779 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
Sep 5 23:50:09.990377 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 5 23:50:10.005172 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 5 23:50:10.007335 disk-uuid[573]: Primary Header is updated.
Sep 5 23:50:10.007335 disk-uuid[573]: Secondary Entries is updated.
Sep 5 23:50:10.007335 disk-uuid[573]: Secondary Header is updated.
Sep 5 23:50:10.099239 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd
Sep 5 23:50:10.237917 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1
Sep 5 23:50:10.237994 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0
Sep 5 23:50:10.239165 kernel: usbcore: registered new interface driver usbhid
Sep 5 23:50:10.239200 kernel: usbhid: USB HID core driver
Sep 5 23:50:10.344163 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd
Sep 5 23:50:10.474152 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2
Sep 5 23:50:10.527185 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0
Sep 5 23:50:11.027201 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 5 23:50:11.027273 disk-uuid[574]: The operation has completed successfully.
Sep 5 23:50:11.096901 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 5 23:50:11.097953 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 5 23:50:11.103660 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 5 23:50:11.111010 sh[592]: Success
Sep 5 23:50:11.128166 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Sep 5 23:50:11.189780 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 5 23:50:11.201473 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 5 23:50:11.206162 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 5 23:50:11.233648 kernel: BTRFS info (device dm-0): first mount of filesystem 045c118e-b098-46f0-884a-43665575c70e
Sep 5 23:50:11.233720 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Sep 5 23:50:11.233732 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Sep 5 23:50:11.234199 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 5 23:50:11.235134 kernel: BTRFS info (device dm-0): using free space tree
Sep 5 23:50:11.241242 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Sep 5 23:50:11.243037 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 5 23:50:11.245283 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 5 23:50:11.254422 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 5 23:50:11.259465 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 5 23:50:11.271400 kernel: BTRFS info (device sda6): first mount of filesystem 7395d4d5-ecb1-4acb-b5a4-3e846eddb858
Sep 5 23:50:11.271471 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Sep 5 23:50:11.271483 kernel: BTRFS info (device sda6): using free space tree
Sep 5 23:50:11.278661 kernel: BTRFS info (device sda6): enabling ssd optimizations
Sep 5 23:50:11.278745 kernel: BTRFS info (device sda6): auto enabling async discard
Sep 5 23:50:11.291224 systemd[1]: mnt-oem.mount: Deactivated successfully.
Sep 5 23:50:11.293144 kernel: BTRFS info (device sda6): last unmount of filesystem 7395d4d5-ecb1-4acb-b5a4-3e846eddb858
Sep 5 23:50:11.305415 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 5 23:50:11.311382 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 5 23:50:11.401951 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 5 23:50:11.414357 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 5 23:50:11.425617 ignition[684]: Ignition 2.19.0
Sep 5 23:50:11.428096 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 5 23:50:11.425632 ignition[684]: Stage: fetch-offline
Sep 5 23:50:11.425679 ignition[684]: no configs at "/usr/lib/ignition/base.d"
Sep 5 23:50:11.425689 ignition[684]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Sep 5 23:50:11.425872 ignition[684]: parsed url from cmdline: ""
Sep 5 23:50:11.425877 ignition[684]: no config URL provided
Sep 5 23:50:11.425882 ignition[684]: reading system config file "/usr/lib/ignition/user.ign"
Sep 5 23:50:11.425892 ignition[684]: no config at "/usr/lib/ignition/user.ign"
Sep 5 23:50:11.425897 ignition[684]: failed to fetch config: resource requires networking
Sep 5 23:50:11.426314 ignition[684]: Ignition finished successfully
Sep 5 23:50:11.435500 systemd-networkd[780]: lo: Link UP
Sep 5 23:50:11.435513 systemd-networkd[780]: lo: Gained carrier
Sep 5 23:50:11.437261 systemd-networkd[780]: Enumeration completed
Sep 5 23:50:11.437764 systemd-networkd[780]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 5 23:50:11.437767 systemd-networkd[780]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 5 23:50:11.438778 systemd-networkd[780]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 5 23:50:11.438782 systemd-networkd[780]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 5 23:50:11.439397 systemd-networkd[780]: eth0: Link UP
Sep 5 23:50:11.439400 systemd-networkd[780]: eth0: Gained carrier
Sep 5 23:50:11.439408 systemd-networkd[780]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 5 23:50:11.439892 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 5 23:50:11.441475 systemd[1]: Reached target network.target - Network.
Sep 5 23:50:11.446514 systemd-networkd[780]: eth1: Link UP
Sep 5 23:50:11.446518 systemd-networkd[780]: eth1: Gained carrier
Sep 5 23:50:11.446529 systemd-networkd[780]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 5 23:50:11.450440 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Sep 5 23:50:11.466139 ignition[784]: Ignition 2.19.0
Sep 5 23:50:11.466153 ignition[784]: Stage: fetch
Sep 5 23:50:11.466437 ignition[784]: no configs at "/usr/lib/ignition/base.d"
Sep 5 23:50:11.466452 ignition[784]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Sep 5 23:50:11.466579 ignition[784]: parsed url from cmdline: ""
Sep 5 23:50:11.466601 ignition[784]: no config URL provided
Sep 5 23:50:11.466611 ignition[784]: reading system config file "/usr/lib/ignition/user.ign"
Sep 5 23:50:11.466622 ignition[784]: no config at "/usr/lib/ignition/user.ign"
Sep 5 23:50:11.466652 ignition[784]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1
Sep 5 23:50:11.467420 ignition[784]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable
Sep 5 23:50:11.478250 systemd-networkd[780]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1
Sep 5 23:50:11.505231 systemd-networkd[780]: eth0: DHCPv4 address 128.140.56.156/32, gateway 172.31.1.1 acquired from 172.31.1.1
Sep 5 23:50:11.668221 ignition[784]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2
Sep 5 23:50:11.676499 ignition[784]: GET result: OK
Sep 5 23:50:11.676662 ignition[784]: parsing config with SHA512: cb0b58c07d6da13f58b39adb249092e813171ed15a8c2f96bc41bc462ccd0ab8d154788f9721c7839eff55e05b87a28728981145ec8e2d12a5618a4b3ff030bb
Sep 5 23:50:11.684578 unknown[784]: fetched base config from "system"
Sep 5 23:50:11.684611 unknown[784]: fetched base config from "system"
Sep 5 23:50:11.685178 ignition[784]: fetch: fetch complete
Sep 5 23:50:11.684621 unknown[784]: fetched user config from "hetzner"
Sep 5 23:50:11.685183 ignition[784]: fetch: fetch passed
Sep 5 23:50:11.688707 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Sep 5 23:50:11.685239 ignition[784]: Ignition finished successfully
Sep 5 23:50:11.694386 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 5 23:50:11.708779 ignition[792]: Ignition 2.19.0
Sep 5 23:50:11.708789 ignition[792]: Stage: kargs
Sep 5 23:50:11.708980 ignition[792]: no configs at "/usr/lib/ignition/base.d"
Sep 5 23:50:11.708990 ignition[792]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Sep 5 23:50:11.710051 ignition[792]: kargs: kargs passed
Sep 5 23:50:11.710111 ignition[792]: Ignition finished successfully
Sep 5 23:50:11.712204 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 5 23:50:11.720688 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 5 23:50:11.732774 ignition[799]: Ignition 2.19.0
Sep 5 23:50:11.732784 ignition[799]: Stage: disks
Sep 5 23:50:11.732989 ignition[799]: no configs at "/usr/lib/ignition/base.d"
Sep 5 23:50:11.732998 ignition[799]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Sep 5 23:50:11.734042 ignition[799]: disks: disks passed
Sep 5 23:50:11.734104 ignition[799]: Ignition finished successfully
Sep 5 23:50:11.737478 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 5 23:50:11.739023 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 5 23:50:11.740667 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 5 23:50:11.741433 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 5 23:50:11.743324 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 5 23:50:11.744320 systemd[1]: Reached target basic.target - Basic System.
Sep 5 23:50:11.751405 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 5 23:50:11.766991 systemd-fsck[807]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Sep 5 23:50:11.771176 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 5 23:50:11.777250 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 5 23:50:11.831172 kernel: EXT4-fs (sda9): mounted filesystem 72e55cb0-8368-4871-a3a0-8637412e72e8 r/w with ordered data mode. Quota mode: none.
Sep 5 23:50:11.831334 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 5 23:50:11.833643 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 5 23:50:11.842411 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 5 23:50:11.848469 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 5 23:50:11.861900 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Sep 5 23:50:11.865060 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by mount (815)
Sep 5 23:50:11.864206 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 5 23:50:11.868419 kernel: BTRFS info (device sda6): first mount of filesystem 7395d4d5-ecb1-4acb-b5a4-3e846eddb858
Sep 5 23:50:11.868449 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Sep 5 23:50:11.868460 kernel: BTRFS info (device sda6): using free space tree
Sep 5 23:50:11.864243 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 5 23:50:11.870910 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 5 23:50:11.876605 kernel: BTRFS info (device sda6): enabling ssd optimizations
Sep 5 23:50:11.876675 kernel: BTRFS info (device sda6): auto enabling async discard
Sep 5 23:50:11.879451 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 5 23:50:11.884402 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 5 23:50:11.959214 initrd-setup-root[842]: cut: /sysroot/etc/passwd: No such file or directory
Sep 5 23:50:11.962899 coreos-metadata[817]: Sep 05 23:50:11.962 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1
Sep 5 23:50:11.965626 coreos-metadata[817]: Sep 05 23:50:11.965 INFO Fetch successful
Sep 5 23:50:11.966297 coreos-metadata[817]: Sep 05 23:50:11.965 INFO wrote hostname ci-4081-3-5-n-6045d3ec0a to /sysroot/etc/hostname
Sep 5 23:50:11.970335 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Sep 5 23:50:11.971997 initrd-setup-root[849]: cut: /sysroot/etc/group: No such file or directory
Sep 5 23:50:11.977236 initrd-setup-root[857]: cut: /sysroot/etc/shadow: No such file or directory
Sep 5 23:50:11.982075 initrd-setup-root[864]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 5 23:50:12.088698 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 5 23:50:12.095322 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 5 23:50:12.099373 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 5 23:50:12.107146 kernel: BTRFS info (device sda6): last unmount of filesystem 7395d4d5-ecb1-4acb-b5a4-3e846eddb858
Sep 5 23:50:12.135241 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 5 23:50:12.138143 ignition[932]: INFO : Ignition 2.19.0
Sep 5 23:50:12.138143 ignition[932]: INFO : Stage: mount
Sep 5 23:50:12.138143 ignition[932]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 5 23:50:12.138143 ignition[932]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Sep 5 23:50:12.140375 ignition[932]: INFO : mount: mount passed
Sep 5 23:50:12.140375 ignition[932]: INFO : Ignition finished successfully
Sep 5 23:50:12.143166 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 5 23:50:12.148321 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 5 23:50:12.233187 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 5 23:50:12.240525 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 5 23:50:12.256507 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (944)
Sep 5 23:50:12.256570 kernel: BTRFS info (device sda6): first mount of filesystem 7395d4d5-ecb1-4acb-b5a4-3e846eddb858
Sep 5 23:50:12.257487 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Sep 5 23:50:12.257520 kernel: BTRFS info (device sda6): using free space tree
Sep 5 23:50:12.261254 kernel: BTRFS info (device sda6): enabling ssd optimizations
Sep 5 23:50:12.261351 kernel: BTRFS info (device sda6): auto enabling async discard
Sep 5 23:50:12.264381 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 5 23:50:12.288303 ignition[961]: INFO : Ignition 2.19.0
Sep 5 23:50:12.288303 ignition[961]: INFO : Stage: files
Sep 5 23:50:12.289521 ignition[961]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 5 23:50:12.289521 ignition[961]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Sep 5 23:50:12.291266 ignition[961]: DEBUG : files: compiled without relabeling support, skipping
Sep 5 23:50:12.291266 ignition[961]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 5 23:50:12.291266 ignition[961]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 5 23:50:12.295508 ignition[961]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 5 23:50:12.296541 ignition[961]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 5 23:50:12.297882 unknown[961]: wrote ssh authorized keys file for user: core
Sep 5 23:50:12.298938 ignition[961]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 5 23:50:12.301621 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Sep 5 23:50:12.303034 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Sep 5 23:50:12.764534 systemd-networkd[780]: eth0: Gained IPv6LL
Sep 5 23:50:13.340422 systemd-networkd[780]: eth1: Gained IPv6LL
Sep 5 23:50:13.975378 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 5 23:50:15.958272 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Sep 5 23:50:15.959784 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 5 23:50:15.959784 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Sep 5 23:50:16.184462 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Sep 5 23:50:16.294011 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 5 23:50:16.294011 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Sep 5 23:50:16.300901 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Sep 5 23:50:16.300901 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 5 23:50:16.300901 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 5 23:50:16.300901 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 5 23:50:16.300901 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 5 23:50:16.300901 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 5 23:50:16.300901 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 5 23:50:16.300901 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 5 23:50:16.300901 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 5 23:50:16.300901 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 5 23:50:16.300901 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 5 23:50:16.300901 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 5 23:50:16.300901 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1
Sep 5 23:50:16.635567 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Sep 5 23:50:18.733573 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 5 23:50:18.735425 ignition[961]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Sep 5 23:50:18.737146 ignition[961]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 5 23:50:18.740407 ignition[961]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 5 23:50:18.740407 ignition[961]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Sep 5 23:50:18.740407 ignition[961]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Sep 5 23:50:18.740407 ignition[961]: INFO : files: op(e): op(f): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Sep 5 23:50:18.740407 ignition[961]: INFO : files: op(e): op(f): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Sep 5 23:50:18.740407 ignition[961]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Sep 5 23:50:18.740407 ignition[961]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Sep 5 23:50:18.740407 ignition[961]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Sep 5 23:50:18.740407 ignition[961]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 5 23:50:18.740407 ignition[961]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 5 23:50:18.740407 ignition[961]: INFO : files: files passed
Sep 5 23:50:18.740407 ignition[961]: INFO : Ignition finished successfully
Sep 5 23:50:18.743011 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 5 23:50:18.754275 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 5 23:50:18.756953 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 5 23:50:18.762229 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 5 23:50:18.779743 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 5 23:50:18.790586 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 5 23:50:18.790586 initrd-setup-root-after-ignition[989]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 5 23:50:18.793259 initrd-setup-root-after-ignition[993]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 5 23:50:18.796574 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 5 23:50:18.797603 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 5 23:50:18.804475 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 5 23:50:18.850677 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 5 23:50:18.853179 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 5 23:50:18.855363 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 5 23:50:18.855964 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 5 23:50:18.858018 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 5 23:50:18.863434 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 5 23:50:18.880159 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 5 23:50:18.888426 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 5 23:50:18.902436 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 5 23:50:18.903891 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 5 23:50:18.905670 systemd[1]: Stopped target timers.target - Timer Units.
Sep 5 23:50:18.906348 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 5 23:50:18.906527 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 5 23:50:18.908289 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 5 23:50:18.909445 systemd[1]: Stopped target basic.target - Basic System.
Sep 5 23:50:18.910339 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 5 23:50:18.911319 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 5 23:50:18.912314 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 5 23:50:18.913345 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 5 23:50:18.914338 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 5 23:50:18.916728 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 5 23:50:18.917871 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 5 23:50:18.918902 systemd[1]: Stopped target swap.target - Swaps.
Sep 5 23:50:18.919703 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 5 23:50:18.919892 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 5 23:50:18.921114 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 5 23:50:18.922342 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 5 23:50:18.923365 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 5 23:50:18.923857 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 5 23:50:18.924755 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 5 23:50:18.924971 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 5 23:50:18.926523 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 5 23:50:18.926726 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 5 23:50:18.927789 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 5 23:50:18.927956 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 5 23:50:18.928748 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Sep 5 23:50:18.928897 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Sep 5 23:50:18.936541 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 5 23:50:18.940411 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 5 23:50:18.940917 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 5 23:50:18.941044 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 5 23:50:18.942040 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 5 23:50:18.944393 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 5 23:50:18.955263 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 5 23:50:18.957198 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 5 23:50:18.962003 ignition[1013]: INFO : Ignition 2.19.0
Sep 5 23:50:18.962003 ignition[1013]: INFO : Stage: umount
Sep 5 23:50:18.963325 ignition[1013]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 5 23:50:18.963325 ignition[1013]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Sep 5 23:50:18.965175 ignition[1013]: INFO : umount: umount passed
Sep 5 23:50:18.966220 ignition[1013]: INFO : Ignition finished successfully
Sep 5 23:50:18.971351 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 5 23:50:18.973611 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 5 23:50:18.974390 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 5 23:50:18.975322 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 5 23:50:18.975415 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 5 23:50:18.977342 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 5 23:50:18.977440 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 5 23:50:18.978079 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 5 23:50:18.979755 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 5 23:50:18.982401 systemd[1]: ignition-fetch.service: Deactivated successfully.
Sep 5 23:50:18.982543 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Sep 5 23:50:18.984291 systemd[1]: Stopped target network.target - Network.
Sep 5 23:50:18.985309 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 5 23:50:18.985385 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 5 23:50:18.986337 systemd[1]: Stopped target paths.target - Path Units.
Sep 5 23:50:18.987221 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 5 23:50:18.987771 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 5 23:50:18.988639 systemd[1]: Stopped target slices.target - Slice Units.
Sep 5 23:50:18.989731 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 5 23:50:18.991113 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 5 23:50:18.991224 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 5 23:50:18.993586 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 5 23:50:18.993644 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 5 23:50:18.995423 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 5 23:50:18.995509 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 5 23:50:18.996981 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 5 23:50:18.997027 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 5 23:50:18.997976 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 5 23:50:18.998024 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 5 23:50:18.999289 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 5 23:50:19.000168 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 5 23:50:19.008597 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 5 23:50:19.009025 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 5 23:50:19.009290 systemd-networkd[780]: eth0: DHCPv6 lease lost
Sep 5 23:50:19.012908 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 5 23:50:19.012997 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 5 23:50:19.015194 systemd-networkd[780]: eth1: DHCPv6 lease lost
Sep 5 23:50:19.017233 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 5 23:50:19.017422 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 5 23:50:19.019020 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 5 23:50:19.019076 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 5 23:50:19.026419 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 5 23:50:19.026960 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 5 23:50:19.027042 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 5 23:50:19.029837 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 5 23:50:19.029904 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 5 23:50:19.032808 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 5 23:50:19.032892 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 5 23:50:19.034428 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 5 23:50:19.042839 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 5 23:50:19.044171 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 5 23:50:19.045488 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 5 23:50:19.045546 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 5 23:50:19.046794 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 5 23:50:19.046844 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 5 23:50:19.047799 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 5 23:50:19.047853 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 5 23:50:19.049420 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 5 23:50:19.049475 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 5 23:50:19.050995 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 5 23:50:19.051050 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 5 23:50:19.060599 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 5 23:50:19.067219 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 5 23:50:19.067319 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 5 23:50:19.068081 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Sep 5 23:50:19.068282 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 5 23:50:19.068975 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 5 23:50:19.069027 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 5 23:50:19.069942 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 5 23:50:19.069988 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 5 23:50:19.074722 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 5 23:50:19.076355 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 5 23:50:19.077770 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 5 23:50:19.077873 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 5 23:50:19.080161 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 5 23:50:19.089329 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 5 23:50:19.097459 systemd[1]: Switching root.
Sep 5 23:50:19.125830 systemd-journald[236]: Journal stopped
Sep 5 23:50:20.084057 systemd-journald[236]: Received SIGTERM from PID 1 (systemd).
Sep 5 23:50:20.084741 kernel: SELinux: policy capability network_peer_controls=1
Sep 5 23:50:20.084775 kernel: SELinux: policy capability open_perms=1
Sep 5 23:50:20.084786 kernel: SELinux: policy capability extended_socket_class=1
Sep 5 23:50:20.084801 kernel: SELinux: policy capability always_check_network=0
Sep 5 23:50:20.084811 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 5 23:50:20.084821 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 5 23:50:20.084831 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 5 23:50:20.084841 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 5 23:50:20.084850 kernel: audit: type=1403 audit(1757116219.325:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 5 23:50:20.084862 systemd[1]: Successfully loaded SELinux policy in 37.429ms.
Sep 5 23:50:20.084885 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.458ms.
Sep 5 23:50:20.084897 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 5 23:50:20.084909 systemd[1]: Detected virtualization kvm.
Sep 5 23:50:20.084919 systemd[1]: Detected architecture arm64.
Sep 5 23:50:20.084929 systemd[1]: Detected first boot.
Sep 5 23:50:20.084940 systemd[1]: Hostname set to .
Sep 5 23:50:20.084950 systemd[1]: Initializing machine ID from VM UUID.
Sep 5 23:50:20.084963 zram_generator::config[1061]: No configuration found.
Sep 5 23:50:20.084977 systemd[1]: Populated /etc with preset unit settings.
Sep 5 23:50:20.084987 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 5 23:50:20.084997 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 5 23:50:20.085008 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 5 23:50:20.085019 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 5 23:50:20.085030 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 5 23:50:20.085040 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 5 23:50:20.085051 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 5 23:50:20.085063 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 5 23:50:20.085074 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 5 23:50:20.085084 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 5 23:50:20.085099 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 5 23:50:20.085109 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 5 23:50:20.085135 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 5 23:50:20.085147 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 5 23:50:20.085158 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 5 23:50:20.085169 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 5 23:50:20.085181 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 5 23:50:20.085192 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Sep 5 23:50:20.085202 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 5 23:50:20.085214 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 5 23:50:20.085225 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 5 23:50:20.085235 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 5 23:50:20.085247 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 5 23:50:20.085258 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 5 23:50:20.085273 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 5 23:50:20.085288 systemd[1]: Reached target slices.target - Slice Units.
Sep 5 23:50:20.085299 systemd[1]: Reached target swap.target - Swaps.
Sep 5 23:50:20.085309 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 5 23:50:20.085320 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 5 23:50:20.085330 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 5 23:50:20.085340 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 5 23:50:20.085351 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 5 23:50:20.085364 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 5 23:50:20.085375 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 5 23:50:20.085385 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 5 23:50:20.085396 systemd[1]: Mounting media.mount - External Media Directory...
Sep 5 23:50:20.085406 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 5 23:50:20.085416 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 5 23:50:20.085427 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 5 23:50:20.085437 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 5 23:50:20.085451 systemd[1]: Reached target machines.target - Containers. Sep 5 23:50:20.085518 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 5 23:50:20.085536 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 5 23:50:20.085547 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 5 23:50:20.085562 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 5 23:50:20.085573 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 5 23:50:20.085586 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 5 23:50:20.085597 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 5 23:50:20.085607 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 5 23:50:20.085618 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 5 23:50:20.085629 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 5 23:50:20.085640 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 5 23:50:20.085651 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 5 23:50:20.085662 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 5 23:50:20.085674 systemd[1]: Stopped systemd-fsck-usr.service. Sep 5 23:50:20.085684 systemd[1]: Starting systemd-journald.service - Journal Service... 
Sep 5 23:50:20.085694 kernel: fuse: init (API version 7.39) Sep 5 23:50:20.085705 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 5 23:50:20.085715 kernel: ACPI: bus type drm_connector registered Sep 5 23:50:20.085726 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 5 23:50:20.085737 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 5 23:50:20.085750 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 5 23:50:20.085761 systemd[1]: verity-setup.service: Deactivated successfully. Sep 5 23:50:20.085774 systemd[1]: Stopped verity-setup.service. Sep 5 23:50:20.085822 systemd-journald[1124]: Collecting audit messages is disabled. Sep 5 23:50:20.085855 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 5 23:50:20.085867 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 5 23:50:20.085877 kernel: loop: module loaded Sep 5 23:50:20.085888 systemd-journald[1124]: Journal started Sep 5 23:50:20.085913 systemd-journald[1124]: Runtime Journal (/run/log/journal/6edfd1a5ddcc4b18be6b8bc689fe6809) is 8.0M, max 76.6M, 68.6M free. Sep 5 23:50:19.817547 systemd[1]: Queued start job for default target multi-user.target. Sep 5 23:50:19.836145 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Sep 5 23:50:19.836968 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 5 23:50:20.088601 systemd[1]: Started systemd-journald.service - Journal Service. Sep 5 23:50:20.089925 systemd[1]: Mounted media.mount - External Media Directory. Sep 5 23:50:20.091578 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 5 23:50:20.093392 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 5 23:50:20.094214 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. 
Sep 5 23:50:20.096545 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 5 23:50:20.097641 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 5 23:50:20.097818 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 5 23:50:20.099057 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 5 23:50:20.099624 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 5 23:50:20.103931 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 5 23:50:20.105164 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 5 23:50:20.108238 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 5 23:50:20.108532 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 5 23:50:20.112710 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 5 23:50:20.112879 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 5 23:50:20.114149 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 5 23:50:20.114292 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 5 23:50:20.115243 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 5 23:50:20.116534 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 5 23:50:20.118168 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 5 23:50:20.122900 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 5 23:50:20.133088 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 5 23:50:20.142301 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 5 23:50:20.148303 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... 
Sep 5 23:50:20.148998 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 5 23:50:20.149048 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 5 23:50:20.154647 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Sep 5 23:50:20.161824 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 5 23:50:20.167514 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 5 23:50:20.168283 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 5 23:50:20.172574 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 5 23:50:20.175355 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 5 23:50:20.176572 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 5 23:50:20.179344 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 5 23:50:20.182108 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 5 23:50:20.184244 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 5 23:50:20.187395 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 5 23:50:20.192397 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 5 23:50:20.197023 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 5 23:50:20.198017 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. 
Sep 5 23:50:20.201620 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 5 23:50:20.219104 systemd-journald[1124]: Time spent on flushing to /var/log/journal/6edfd1a5ddcc4b18be6b8bc689fe6809 is 57.299ms for 1128 entries. Sep 5 23:50:20.219104 systemd-journald[1124]: System Journal (/var/log/journal/6edfd1a5ddcc4b18be6b8bc689fe6809) is 8.0M, max 584.8M, 576.8M free. Sep 5 23:50:20.294068 systemd-journald[1124]: Received client request to flush runtime journal. Sep 5 23:50:20.294199 kernel: loop0: detected capacity change from 0 to 114432 Sep 5 23:50:20.294222 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 5 23:50:20.237732 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 5 23:50:20.249388 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Sep 5 23:50:20.261523 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 5 23:50:20.262318 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 5 23:50:20.272175 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Sep 5 23:50:20.277300 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 5 23:50:20.302029 systemd-tmpfiles[1172]: ACLs are not supported, ignoring. Sep 5 23:50:20.302051 systemd-tmpfiles[1172]: ACLs are not supported, ignoring. Sep 5 23:50:20.309536 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 5 23:50:20.320382 udevadm[1179]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Sep 5 23:50:20.327234 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. 
Sep 5 23:50:20.329683 kernel: loop1: detected capacity change from 0 to 211168 Sep 5 23:50:20.337635 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 5 23:50:20.356979 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 5 23:50:20.359919 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Sep 5 23:50:20.385166 kernel: loop2: detected capacity change from 0 to 114328 Sep 5 23:50:20.399002 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 5 23:50:20.408342 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 5 23:50:20.447414 kernel: loop3: detected capacity change from 0 to 8 Sep 5 23:50:20.460268 systemd-tmpfiles[1194]: ACLs are not supported, ignoring. Sep 5 23:50:20.461166 systemd-tmpfiles[1194]: ACLs are not supported, ignoring. Sep 5 23:50:20.468079 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 5 23:50:20.478212 kernel: loop4: detected capacity change from 0 to 114432 Sep 5 23:50:20.496156 kernel: loop5: detected capacity change from 0 to 211168 Sep 5 23:50:20.521181 kernel: loop6: detected capacity change from 0 to 114328 Sep 5 23:50:20.540290 kernel: loop7: detected capacity change from 0 to 8 Sep 5 23:50:20.541727 (sd-merge)[1199]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'. Sep 5 23:50:20.543633 (sd-merge)[1199]: Merged extensions into '/usr'. Sep 5 23:50:20.550420 systemd[1]: Reloading requested from client PID 1171 ('systemd-sysext') (unit systemd-sysext.service)... Sep 5 23:50:20.551054 systemd[1]: Reloading... Sep 5 23:50:20.672151 zram_generator::config[1228]: No configuration found. Sep 5 23:50:20.808061 ldconfig[1166]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. 
Sep 5 23:50:20.833156 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 5 23:50:20.880182 systemd[1]: Reloading finished in 327 ms. Sep 5 23:50:20.904173 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 5 23:50:20.905323 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 5 23:50:20.915483 systemd[1]: Starting ensure-sysext.service... Sep 5 23:50:20.920518 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 5 23:50:20.933147 systemd[1]: Reloading requested from client PID 1262 ('systemctl') (unit ensure-sysext.service)... Sep 5 23:50:20.933172 systemd[1]: Reloading... Sep 5 23:50:20.976740 systemd-tmpfiles[1263]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 5 23:50:20.977028 systemd-tmpfiles[1263]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 5 23:50:20.977802 systemd-tmpfiles[1263]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 5 23:50:20.978029 systemd-tmpfiles[1263]: ACLs are not supported, ignoring. Sep 5 23:50:20.978076 systemd-tmpfiles[1263]: ACLs are not supported, ignoring. Sep 5 23:50:20.987902 systemd-tmpfiles[1263]: Detected autofs mount point /boot during canonicalization of boot. Sep 5 23:50:20.988077 systemd-tmpfiles[1263]: Skipping /boot Sep 5 23:50:20.998968 systemd-tmpfiles[1263]: Detected autofs mount point /boot during canonicalization of boot. Sep 5 23:50:20.999112 systemd-tmpfiles[1263]: Skipping /boot Sep 5 23:50:21.031149 zram_generator::config[1296]: No configuration found. 
Sep 5 23:50:21.132813 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 5 23:50:21.179656 systemd[1]: Reloading finished in 246 ms. Sep 5 23:50:21.203586 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 5 23:50:21.209858 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 5 23:50:21.226494 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 5 23:50:21.232366 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 5 23:50:21.237745 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 5 23:50:21.243397 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 5 23:50:21.247438 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 5 23:50:21.253387 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 5 23:50:21.257763 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 5 23:50:21.261438 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 5 23:50:21.281648 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 5 23:50:21.287062 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 5 23:50:21.288978 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 5 23:50:21.292348 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Sep 5 23:50:21.292615 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 5 23:50:21.307833 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 5 23:50:21.315099 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 5 23:50:21.324667 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 5 23:50:21.326418 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 5 23:50:21.327138 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 5 23:50:21.328229 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 5 23:50:21.329761 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 5 23:50:21.329919 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 5 23:50:21.336987 systemd-udevd[1334]: Using default interface naming scheme 'v255'. Sep 5 23:50:21.337309 systemd[1]: Finished ensure-sysext.service. Sep 5 23:50:21.345662 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 5 23:50:21.348909 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 5 23:50:21.358088 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 5 23:50:21.365441 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 5 23:50:21.366780 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 5 23:50:21.371507 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 5 23:50:21.372370 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Sep 5 23:50:21.377295 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 5 23:50:21.382387 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 5 23:50:21.408610 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 5 23:50:21.433058 augenrules[1386]: No rules Sep 5 23:50:21.435691 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 5 23:50:21.435760 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 5 23:50:21.436310 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 5 23:50:21.437773 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 5 23:50:21.437938 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 5 23:50:21.448928 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 5 23:50:21.469197 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 5 23:50:21.503257 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Sep 5 23:50:21.600468 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 5 23:50:21.602219 systemd[1]: Reached target time-set.target - System Time Set. Sep 5 23:50:21.604667 systemd-networkd[1375]: lo: Link UP Sep 5 23:50:21.605046 systemd-networkd[1375]: lo: Gained carrier Sep 5 23:50:21.608320 systemd-networkd[1375]: Enumeration completed Sep 5 23:50:21.608632 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 5 23:50:21.609587 systemd-networkd[1375]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Sep 5 23:50:21.609693 systemd-networkd[1375]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 5 23:50:21.611141 systemd-networkd[1375]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 5 23:50:21.611230 systemd-networkd[1375]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 5 23:50:21.612180 systemd-networkd[1375]: eth0: Link UP Sep 5 23:50:21.612188 systemd-networkd[1375]: eth0: Gained carrier Sep 5 23:50:21.612205 systemd-networkd[1375]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 5 23:50:21.612314 systemd-timesyncd[1353]: No network connectivity, watching for changes. Sep 5 23:50:21.615190 systemd-networkd[1375]: eth1: Link UP Sep 5 23:50:21.615197 systemd-networkd[1375]: eth1: Gained carrier Sep 5 23:50:21.615217 systemd-networkd[1375]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 5 23:50:21.616350 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 5 23:50:21.623300 systemd-resolved[1332]: Positive Trust Anchors: Sep 5 23:50:21.623332 systemd-resolved[1332]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 5 23:50:21.623365 systemd-resolved[1332]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 5 23:50:21.631211 systemd-resolved[1332]: Using system hostname 'ci-4081-3-5-n-6045d3ec0a'. Sep 5 23:50:21.633191 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 5 23:50:21.634303 systemd[1]: Reached target network.target - Network. Sep 5 23:50:21.634988 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 5 23:50:21.655284 systemd-networkd[1375]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1 Sep 5 23:50:21.656305 systemd-timesyncd[1353]: Network configuration changed, trying to establish connection. Sep 5 23:50:21.675355 systemd-networkd[1375]: eth0: DHCPv4 address 128.140.56.156/32, gateway 172.31.1.1 acquired from 172.31.1.1 Sep 5 23:50:21.677087 systemd-timesyncd[1353]: Network configuration changed, trying to establish connection. Sep 5 23:50:21.693849 systemd-networkd[1375]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Sep 5 23:50:21.700144 kernel: mousedev: PS/2 mouse device common for all mice Sep 5 23:50:21.740240 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1371) Sep 5 23:50:21.747700 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. Sep 5 23:50:21.748044 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 5 23:50:21.756538 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 5 23:50:21.759711 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0 Sep 5 23:50:21.759795 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Sep 5 23:50:21.759822 kernel: [drm] features: -context_init Sep 5 23:50:21.761207 kernel: [drm] number of scanouts: 1 Sep 5 23:50:21.761290 kernel: [drm] number of cap sets: 0 Sep 5 23:50:21.769149 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0 Sep 5 23:50:21.771423 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 5 23:50:21.776375 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 5 23:50:21.778251 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 5 23:50:21.778291 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 5 23:50:21.778729 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 5 23:50:21.778932 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 5 23:50:21.783518 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Sep 5 23:50:21.785185 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 5 23:50:21.786323 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 5 23:50:21.789150 kernel: Console: switching to colour frame buffer device 160x50 Sep 5 23:50:21.802895 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 5 23:50:21.805108 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Sep 5 23:50:21.804500 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 5 23:50:21.807786 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 5 23:50:21.858644 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 5 23:50:21.864401 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Sep 5 23:50:21.875517 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 5 23:50:21.877876 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 5 23:50:21.879740 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 5 23:50:21.889534 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 5 23:50:21.893797 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 5 23:50:21.953539 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 5 23:50:22.006546 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Sep 5 23:50:22.017492 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Sep 5 23:50:22.033218 lvm[1442]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
Sep 5 23:50:22.062232 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Sep 5 23:50:22.063380 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 5 23:50:22.064008 systemd[1]: Reached target sysinit.target - System Initialization. Sep 5 23:50:22.066535 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 5 23:50:22.067856 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 5 23:50:22.069294 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 5 23:50:22.070300 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 5 23:50:22.071144 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 5 23:50:22.071806 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 5 23:50:22.071852 systemd[1]: Reached target paths.target - Path Units. Sep 5 23:50:22.072393 systemd[1]: Reached target timers.target - Timer Units. Sep 5 23:50:22.075225 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 5 23:50:22.078383 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 5 23:50:22.085692 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 5 23:50:22.088525 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Sep 5 23:50:22.089785 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 5 23:50:22.090608 systemd[1]: Reached target sockets.target - Socket Units. Sep 5 23:50:22.091152 systemd[1]: Reached target basic.target - Basic System. Sep 5 23:50:22.091711 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. 
Sep 5 23:50:22.091747 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 5 23:50:22.095360 systemd[1]: Starting containerd.service - containerd container runtime... Sep 5 23:50:22.099918 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Sep 5 23:50:22.106520 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 5 23:50:22.107427 lvm[1446]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 5 23:50:22.116740 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 5 23:50:22.122183 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 5 23:50:22.125253 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 5 23:50:22.132611 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 5 23:50:22.135590 jq[1450]: false Sep 5 23:50:22.137229 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 5 23:50:22.147730 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. Sep 5 23:50:22.152937 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 5 23:50:22.157394 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 5 23:50:22.170814 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 5 23:50:22.171676 coreos-metadata[1448]: Sep 05 23:50:22.171 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Sep 5 23:50:22.172402 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 5 23:50:22.173072 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. 
Sep 5 23:50:22.175230 coreos-metadata[1448]: Sep 05 23:50:22.174 INFO Fetch successful Sep 5 23:50:22.176306 coreos-metadata[1448]: Sep 05 23:50:22.175 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Sep 5 23:50:22.176745 coreos-metadata[1448]: Sep 05 23:50:22.176 INFO Fetch successful Sep 5 23:50:22.180557 systemd[1]: Starting update-engine.service - Update Engine... Sep 5 23:50:22.183909 extend-filesystems[1451]: Found loop4 Sep 5 23:50:22.185277 extend-filesystems[1451]: Found loop5 Sep 5 23:50:22.185277 extend-filesystems[1451]: Found loop6 Sep 5 23:50:22.185277 extend-filesystems[1451]: Found loop7 Sep 5 23:50:22.185277 extend-filesystems[1451]: Found sda Sep 5 23:50:22.185277 extend-filesystems[1451]: Found sda1 Sep 5 23:50:22.185277 extend-filesystems[1451]: Found sda2 Sep 5 23:50:22.185277 extend-filesystems[1451]: Found sda3 Sep 5 23:50:22.185277 extend-filesystems[1451]: Found usr Sep 5 23:50:22.195618 extend-filesystems[1451]: Found sda4 Sep 5 23:50:22.195618 extend-filesystems[1451]: Found sda6 Sep 5 23:50:22.195618 extend-filesystems[1451]: Found sda7 Sep 5 23:50:22.195618 extend-filesystems[1451]: Found sda9 Sep 5 23:50:22.195618 extend-filesystems[1451]: Checking size of /dev/sda9 Sep 5 23:50:22.187159 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 5 23:50:22.186617 dbus-daemon[1449]: [system] SELinux support is enabled Sep 5 23:50:22.188341 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 5 23:50:22.194213 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Sep 5 23:50:22.203628 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 5 23:50:22.203806 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. 
Sep 5 23:50:22.208159 extend-filesystems[1451]: Resized partition /dev/sda9 Sep 5 23:50:22.220378 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 5 23:50:22.220482 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 5 23:50:22.222638 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 5 23:50:22.222672 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 5 23:50:22.241377 extend-filesystems[1475]: resize2fs 1.47.1 (20-May-2024) Sep 5 23:50:22.265594 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Sep 5 23:50:22.257358 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 5 23:50:22.259268 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 5 23:50:22.270995 jq[1463]: true Sep 5 23:50:22.282965 (ntainerd)[1484]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 5 23:50:22.287707 systemd[1]: motdgen.service: Deactivated successfully. Sep 5 23:50:22.287962 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 5 23:50:22.301247 tar[1482]: linux-arm64/LICENSE Sep 5 23:50:22.301885 tar[1482]: linux-arm64/helm Sep 5 23:50:22.335192 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1372) Sep 5 23:50:22.364667 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Sep 5 23:50:22.375065 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
Sep 5 23:50:22.387761 update_engine[1461]: I20250905 23:50:22.386320  1461 main.cc:92] Flatcar Update Engine starting
Sep 5 23:50:22.389075 jq[1494]: true
Sep 5 23:50:22.401503 systemd[1]: Started update-engine.service - Update Engine.
Sep 5 23:50:22.406134 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Sep 5 23:50:22.407646 update_engine[1461]: I20250905 23:50:22.407581  1461 update_check_scheduler.cc:74] Next update check in 2m44s
Sep 5 23:50:22.502397 systemd-logind[1460]: New seat seat0.
Sep 5 23:50:22.515598 systemd-logind[1460]: Watching system buttons on /dev/input/event0 (Power Button)
Sep 5 23:50:22.515851 systemd-logind[1460]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard)
Sep 5 23:50:22.517837 systemd[1]: Started systemd-logind.service - User Login Management.
Sep 5 23:50:22.531762 kernel: EXT4-fs (sda9): resized filesystem to 9393147
Sep 5 23:50:22.534160 extend-filesystems[1475]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Sep 5 23:50:22.534160 extend-filesystems[1475]: old_desc_blocks = 1, new_desc_blocks = 5
Sep 5 23:50:22.534160 extend-filesystems[1475]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long.
Sep 5 23:50:22.543689 extend-filesystems[1451]: Resized filesystem in /dev/sda9
Sep 5 23:50:22.543689 extend-filesystems[1451]: Found sr0
Sep 5 23:50:22.549174 bash[1523]: Updated "/home/core/.ssh/authorized_keys"
Sep 5 23:50:22.538543 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 5 23:50:22.538748 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Sep 5 23:50:22.543542 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Sep 5 23:50:22.557730 systemd[1]: Starting sshkeys.service...
Sep 5 23:50:22.578565 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Sep 5 23:50:22.588589 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Sep 5 23:50:22.632256 coreos-metadata[1528]: Sep 05 23:50:22.632 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1
Sep 5 23:50:22.633969 coreos-metadata[1528]: Sep 05 23:50:22.633 INFO Fetch successful
Sep 5 23:50:22.639234 unknown[1528]: wrote ssh authorized keys file for user: core
Sep 5 23:50:22.670062 containerd[1484]: time="2025-09-05T23:50:22.669919480Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Sep 5 23:50:22.679682 update-ssh-keys[1533]: Updated "/home/core/.ssh/authorized_keys"
Sep 5 23:50:22.682791 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Sep 5 23:50:22.686694 systemd[1]: Finished sshkeys.service.
Sep 5 23:50:22.740132 containerd[1484]: time="2025-09-05T23:50:22.738615680Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Sep 5 23:50:22.745058 containerd[1484]: time="2025-09-05T23:50:22.742559200Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.103-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Sep 5 23:50:22.745058 containerd[1484]: time="2025-09-05T23:50:22.742611240Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Sep 5 23:50:22.745058 containerd[1484]: time="2025-09-05T23:50:22.742631680Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Sep 5 23:50:22.745058 containerd[1484]: time="2025-09-05T23:50:22.742866840Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Sep 5 23:50:22.745058 containerd[1484]: time="2025-09-05T23:50:22.742887400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Sep 5 23:50:22.745058 containerd[1484]: time="2025-09-05T23:50:22.743016120Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Sep 5 23:50:22.745058 containerd[1484]: time="2025-09-05T23:50:22.743032680Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Sep 5 23:50:22.745058 containerd[1484]: time="2025-09-05T23:50:22.743290800Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 5 23:50:22.745058 containerd[1484]: time="2025-09-05T23:50:22.743308640Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Sep 5 23:50:22.745058 containerd[1484]: time="2025-09-05T23:50:22.743323840Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Sep 5 23:50:22.745058 containerd[1484]: time="2025-09-05T23:50:22.743334240Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Sep 5 23:50:22.745638 containerd[1484]: time="2025-09-05T23:50:22.743525560Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Sep 5 23:50:22.745638 containerd[1484]: time="2025-09-05T23:50:22.743825800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Sep 5 23:50:22.745638 containerd[1484]: time="2025-09-05T23:50:22.744018120Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 5 23:50:22.745638 containerd[1484]: time="2025-09-05T23:50:22.744037440Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Sep 5 23:50:22.745638 containerd[1484]: time="2025-09-05T23:50:22.744151920Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Sep 5 23:50:22.745638 containerd[1484]: time="2025-09-05T23:50:22.744202960Z" level=info msg="metadata content store policy set" policy=shared
Sep 5 23:50:22.747545 locksmithd[1509]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 5 23:50:22.749372 containerd[1484]: time="2025-09-05T23:50:22.748644160Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Sep 5 23:50:22.749466 containerd[1484]: time="2025-09-05T23:50:22.749413800Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Sep 5 23:50:22.749526 containerd[1484]: time="2025-09-05T23:50:22.749452880Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Sep 5 23:50:22.749550 containerd[1484]: time="2025-09-05T23:50:22.749534400Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Sep 5 23:50:22.749587 containerd[1484]: time="2025-09-05T23:50:22.749557680Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Sep 5 23:50:22.749782 containerd[1484]: time="2025-09-05T23:50:22.749755560Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Sep 5 23:50:22.750084 containerd[1484]: time="2025-09-05T23:50:22.750053240Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Sep 5 23:50:22.750221 containerd[1484]: time="2025-09-05T23:50:22.750199120Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Sep 5 23:50:22.750248 containerd[1484]: time="2025-09-05T23:50:22.750227040Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Sep 5 23:50:22.750267 containerd[1484]: time="2025-09-05T23:50:22.750245400Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Sep 5 23:50:22.750267 containerd[1484]: time="2025-09-05T23:50:22.750261800Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Sep 5 23:50:22.750323 containerd[1484]: time="2025-09-05T23:50:22.750277240Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Sep 5 23:50:22.750323 containerd[1484]: time="2025-09-05T23:50:22.750293800Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Sep 5 23:50:22.750360 containerd[1484]: time="2025-09-05T23:50:22.750316600Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Sep 5 23:50:22.750360 containerd[1484]: time="2025-09-05T23:50:22.750338160Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Sep 5 23:50:22.750360 containerd[1484]: time="2025-09-05T23:50:22.750354440Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Sep 5 23:50:22.750408 containerd[1484]: time="2025-09-05T23:50:22.750371480Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Sep 5 23:50:22.750408 containerd[1484]: time="2025-09-05T23:50:22.750386960Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Sep 5 23:50:22.750499 containerd[1484]: time="2025-09-05T23:50:22.750411920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Sep 5 23:50:22.750499 containerd[1484]: time="2025-09-05T23:50:22.750442720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Sep 5 23:50:22.750499 containerd[1484]: time="2025-09-05T23:50:22.750460520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Sep 5 23:50:22.750499 containerd[1484]: time="2025-09-05T23:50:22.750483000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Sep 5 23:50:22.750499 containerd[1484]: time="2025-09-05T23:50:22.750498080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Sep 5 23:50:22.750595 containerd[1484]: time="2025-09-05T23:50:22.750514000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Sep 5 23:50:22.750595 containerd[1484]: time="2025-09-05T23:50:22.750529360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Sep 5 23:50:22.750595 containerd[1484]: time="2025-09-05T23:50:22.750545040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Sep 5 23:50:22.750595 containerd[1484]: time="2025-09-05T23:50:22.750561960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Sep 5 23:50:22.750595 containerd[1484]: time="2025-09-05T23:50:22.750580280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Sep 5 23:50:22.750723 containerd[1484]: time="2025-09-05T23:50:22.750596880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Sep 5 23:50:22.750723 containerd[1484]: time="2025-09-05T23:50:22.750617320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Sep 5 23:50:22.750723 containerd[1484]: time="2025-09-05T23:50:22.750632400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Sep 5 23:50:22.750723 containerd[1484]: time="2025-09-05T23:50:22.750652720Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Sep 5 23:50:22.750723 containerd[1484]: time="2025-09-05T23:50:22.750682560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Sep 5 23:50:22.750723 containerd[1484]: time="2025-09-05T23:50:22.750697880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Sep 5 23:50:22.750723 containerd[1484]: time="2025-09-05T23:50:22.750712240Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Sep 5 23:50:22.750846 containerd[1484]: time="2025-09-05T23:50:22.750826720Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Sep 5 23:50:22.750869 containerd[1484]: time="2025-09-05T23:50:22.750848480Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Sep 5 23:50:22.750869 containerd[1484]: time="2025-09-05T23:50:22.750863000Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Sep 5 23:50:22.750911 containerd[1484]: time="2025-09-05T23:50:22.750880280Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Sep 5 23:50:22.750911 containerd[1484]: time="2025-09-05T23:50:22.750893480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Sep 5 23:50:22.750946 containerd[1484]: time="2025-09-05T23:50:22.750911720Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Sep 5 23:50:22.750946 containerd[1484]: time="2025-09-05T23:50:22.750928600Z" level=info msg="NRI interface is disabled by configuration."
Sep 5 23:50:22.750980 containerd[1484]: time="2025-09-05T23:50:22.750946040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Sep 5 23:50:22.752468 containerd[1484]: time="2025-09-05T23:50:22.751382440Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Sep 5 23:50:22.752468 containerd[1484]: time="2025-09-05T23:50:22.751660120Z" level=info msg="Connect containerd service"
Sep 5 23:50:22.752468 containerd[1484]: time="2025-09-05T23:50:22.751782480Z" level=info msg="using legacy CRI server"
Sep 5 23:50:22.752468 containerd[1484]: time="2025-09-05T23:50:22.751792800Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Sep 5 23:50:22.752468 containerd[1484]: time="2025-09-05T23:50:22.751891680Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Sep 5 23:50:22.753369 containerd[1484]: time="2025-09-05T23:50:22.752834120Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 5 23:50:22.753369 containerd[1484]: time="2025-09-05T23:50:22.753358920Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Sep 5 23:50:22.753506 containerd[1484]: time="2025-09-05T23:50:22.753400880Z" level=info msg=serving... address=/run/containerd/containerd.sock
Sep 5 23:50:22.753528 containerd[1484]: time="2025-09-05T23:50:22.753505080Z" level=info msg="Start subscribing containerd event"
Sep 5 23:50:22.753547 containerd[1484]: time="2025-09-05T23:50:22.753540600Z" level=info msg="Start recovering state"
Sep 5 23:50:22.756593 containerd[1484]: time="2025-09-05T23:50:22.753603320Z" level=info msg="Start event monitor"
Sep 5 23:50:22.756593 containerd[1484]: time="2025-09-05T23:50:22.753623400Z" level=info msg="Start snapshots syncer"
Sep 5 23:50:22.756593 containerd[1484]: time="2025-09-05T23:50:22.753634000Z" level=info msg="Start cni network conf syncer for default"
Sep 5 23:50:22.756593 containerd[1484]: time="2025-09-05T23:50:22.753641640Z" level=info msg="Start streaming server"
Sep 5 23:50:22.756593 containerd[1484]: time="2025-09-05T23:50:22.754956440Z" level=info msg="containerd successfully booted in 0.091712s"
Sep 5 23:50:22.753919 systemd[1]: Started containerd.service - containerd container runtime.
Sep 5 23:50:22.941322 systemd-networkd[1375]: eth1: Gained IPv6LL
Sep 5 23:50:22.942238 systemd-timesyncd[1353]: Network configuration changed, trying to establish connection.
Sep 5 23:50:22.947191 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Sep 5 23:50:22.948600 systemd[1]: Reached target network-online.target - Network is Online.
Sep 5 23:50:22.957397 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 5 23:50:22.969598 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Sep 5 23:50:23.025457 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Sep 5 23:50:23.038315 tar[1482]: linux-arm64/README.md
Sep 5 23:50:23.058563 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Sep 5 23:50:23.175625 sshd_keygen[1500]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 5 23:50:23.200818 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Sep 5 23:50:23.219710 systemd[1]: Starting issuegen.service - Generate /run/issue...
Sep 5 23:50:23.230963 systemd[1]: issuegen.service: Deactivated successfully.
Sep 5 23:50:23.231203 systemd[1]: Finished issuegen.service - Generate /run/issue.
Sep 5 23:50:23.241045 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Sep 5 23:50:23.254406 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Sep 5 23:50:23.262526 systemd[1]: Started getty@tty1.service - Getty on tty1.
Sep 5 23:50:23.270742 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Sep 5 23:50:23.272223 systemd[1]: Reached target getty.target - Login Prompts.
Sep 5 23:50:23.452975 systemd-networkd[1375]: eth0: Gained IPv6LL
Sep 5 23:50:23.453813 systemd-timesyncd[1353]: Network configuration changed, trying to establish connection.
Sep 5 23:50:23.901521 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 5 23:50:23.902012 (kubelet)[1579]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 5 23:50:23.904729 systemd[1]: Reached target multi-user.target - Multi-User System.
Sep 5 23:50:23.906627 systemd[1]: Startup finished in 848ms (kernel) + 10.633s (initrd) + 4.619s (userspace) = 16.101s.
Sep 5 23:50:24.518060 kubelet[1579]: E0905 23:50:24.517987    1579 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 5 23:50:24.522016 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 5 23:50:24.522219 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 5 23:50:34.773385 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Sep 5 23:50:34.779457 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 5 23:50:34.930045 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 5 23:50:34.941764 (kubelet)[1598]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 5 23:50:34.995872 kubelet[1598]: E0905 23:50:34.995803    1598 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 5 23:50:35.000430 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 5 23:50:35.000785 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 5 23:50:42.191240 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Sep 5 23:50:42.205963 systemd[1]: Started sshd@0-128.140.56.156:22-103.99.206.83:48846.service - OpenSSH per-connection server daemon (103.99.206.83:48846).
Sep 5 23:50:42.622020 sshd[1606]: Connection closed by 103.99.206.83 port 48846 [preauth]
Sep 5 23:50:42.624909 systemd[1]: sshd@0-128.140.56.156:22-103.99.206.83:48846.service: Deactivated successfully.
Sep 5 23:50:45.251602 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Sep 5 23:50:45.268780 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 5 23:50:45.412457 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 5 23:50:45.418007 (kubelet)[1618]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 5 23:50:45.461398 kubelet[1618]: E0905 23:50:45.461315    1618 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 5 23:50:45.465572 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 5 23:50:45.465756 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 5 23:50:48.345654 systemd[1]: Started sshd@1-128.140.56.156:22-139.178.68.195:47780.service - OpenSSH per-connection server daemon (139.178.68.195:47780).
Sep 5 23:50:49.343193 sshd[1626]: Accepted publickey for core from 139.178.68.195 port 47780 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI
Sep 5 23:50:49.346251 sshd[1626]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 23:50:49.359772 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Sep 5 23:50:49.369766 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Sep 5 23:50:49.376741 systemd-logind[1460]: New session 1 of user core.
Sep 5 23:50:49.389741 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Sep 5 23:50:49.403712 systemd[1]: Starting user@500.service - User Manager for UID 500...
Sep 5 23:50:49.407612 (systemd)[1630]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Sep 5 23:50:49.529305 systemd[1630]: Queued start job for default target default.target.
Sep 5 23:50:49.546112 systemd[1630]: Created slice app.slice - User Application Slice.
Sep 5 23:50:49.546188 systemd[1630]: Reached target paths.target - Paths.
Sep 5 23:50:49.546213 systemd[1630]: Reached target timers.target - Timers.
Sep 5 23:50:49.548502 systemd[1630]: Starting dbus.socket - D-Bus User Message Bus Socket...
Sep 5 23:50:49.563841 systemd[1630]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Sep 5 23:50:49.564046 systemd[1630]: Reached target sockets.target - Sockets.
Sep 5 23:50:49.564098 systemd[1630]: Reached target basic.target - Basic System.
Sep 5 23:50:49.564583 systemd[1630]: Reached target default.target - Main User Target.
Sep 5 23:50:49.564714 systemd[1630]: Startup finished in 149ms.
Sep 5 23:50:49.565309 systemd[1]: Started user@500.service - User Manager for UID 500.
Sep 5 23:50:49.577415 systemd[1]: Started session-1.scope - Session 1 of User core.
Sep 5 23:50:50.288885 systemd[1]: Started sshd@2-128.140.56.156:22-139.178.68.195:49692.service - OpenSSH per-connection server daemon (139.178.68.195:49692).
Sep 5 23:50:51.281023 sshd[1641]: Accepted publickey for core from 139.178.68.195 port 49692 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI
Sep 5 23:50:51.282931 sshd[1641]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 23:50:51.289670 systemd-logind[1460]: New session 2 of user core.
Sep 5 23:50:51.298473 systemd[1]: Started session-2.scope - Session 2 of User core.
Sep 5 23:50:51.973969 sshd[1641]: pam_unix(sshd:session): session closed for user core
Sep 5 23:50:51.979614 systemd-logind[1460]: Session 2 logged out. Waiting for processes to exit.
Sep 5 23:50:51.980382 systemd[1]: sshd@2-128.140.56.156:22-139.178.68.195:49692.service: Deactivated successfully.
Sep 5 23:50:51.982082 systemd[1]: session-2.scope: Deactivated successfully.
Sep 5 23:50:51.987778 systemd-logind[1460]: Removed session 2.
Sep 5 23:50:52.171893 systemd[1]: Started sshd@3-128.140.56.156:22-139.178.68.195:49694.service - OpenSSH per-connection server daemon (139.178.68.195:49694).
Sep 5 23:50:53.229033 sshd[1648]: Accepted publickey for core from 139.178.68.195 port 49694 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI
Sep 5 23:50:53.232232 sshd[1648]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 23:50:53.238710 systemd-logind[1460]: New session 3 of user core.
Sep 5 23:50:53.247542 systemd[1]: Started session-3.scope - Session 3 of User core.
Sep 5 23:50:53.919920 systemd-timesyncd[1353]: Contacted time server 213.206.164.3:123 (2.flatcar.pool.ntp.org).
Sep 5 23:50:53.920037 systemd-timesyncd[1353]: Initial clock synchronization to Fri 2025-09-05 23:50:53.770879 UTC.
Sep 5 23:50:53.952763 sshd[1648]: pam_unix(sshd:session): session closed for user core
Sep 5 23:50:53.957289 systemd[1]: sshd@3-128.140.56.156:22-139.178.68.195:49694.service: Deactivated successfully.
Sep 5 23:50:53.959495 systemd[1]: session-3.scope: Deactivated successfully.
Sep 5 23:50:53.961742 systemd-logind[1460]: Session 3 logged out. Waiting for processes to exit.
Sep 5 23:50:53.962936 systemd-logind[1460]: Removed session 3.
Sep 5 23:50:54.131491 systemd[1]: Started sshd@4-128.140.56.156:22-139.178.68.195:49710.service - OpenSSH per-connection server daemon (139.178.68.195:49710).
Sep 5 23:50:55.105259 sshd[1655]: Accepted publickey for core from 139.178.68.195 port 49710 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI
Sep 5 23:50:55.107608 sshd[1655]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 23:50:55.113072 systemd-logind[1460]: New session 4 of user core.
Sep 5 23:50:55.119493 systemd[1]: Started session-4.scope - Session 4 of User core.
Sep 5 23:50:55.625501 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Sep 5 23:50:55.631432 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 5 23:50:55.761824 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 5 23:50:55.770474 (kubelet)[1667]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 5 23:50:55.787500 sshd[1655]: pam_unix(sshd:session): session closed for user core
Sep 5 23:50:55.793467 systemd[1]: sshd@4-128.140.56.156:22-139.178.68.195:49710.service: Deactivated successfully.
Sep 5 23:50:55.796160 systemd[1]: session-4.scope: Deactivated successfully.
Sep 5 23:50:55.796959 systemd-logind[1460]: Session 4 logged out. Waiting for processes to exit.
Sep 5 23:50:55.798226 systemd-logind[1460]: Removed session 4.
Sep 5 23:50:55.814556 kubelet[1667]: E0905 23:50:55.814509    1667 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 5 23:50:55.817699 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 5 23:50:55.817892 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 5 23:50:55.961309 systemd[1]: Started sshd@5-128.140.56.156:22-139.178.68.195:49720.service - OpenSSH per-connection server daemon (139.178.68.195:49720).
Sep 5 23:50:56.938987 sshd[1677]: Accepted publickey for core from 139.178.68.195 port 49720 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI
Sep 5 23:50:56.939991 sshd[1677]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 23:50:56.944785 systemd-logind[1460]: New session 5 of user core.
Sep 5 23:50:56.952464 systemd[1]: Started session-5.scope - Session 5 of User core.
Sep 5 23:50:57.473193 sudo[1680]:     core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Sep 5 23:50:57.473493 sudo[1680]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 5 23:50:57.494227 sudo[1680]: pam_unix(sudo:session): session closed for user root
Sep 5 23:50:57.654813 sshd[1677]: pam_unix(sshd:session): session closed for user core
Sep 5 23:50:57.660086 systemd[1]: sshd@5-128.140.56.156:22-139.178.68.195:49720.service: Deactivated successfully.
Sep 5 23:50:57.661964 systemd[1]: session-5.scope: Deactivated successfully.
Sep 5 23:50:57.663769 systemd-logind[1460]: Session 5 logged out. Waiting for processes to exit.
Sep 5 23:50:57.665360 systemd-logind[1460]: Removed session 5.
Sep 5 23:50:57.833032 systemd[1]: Started sshd@6-128.140.56.156:22-139.178.68.195:49724.service - OpenSSH per-connection server daemon (139.178.68.195:49724).
Sep 5 23:50:58.817205 sshd[1685]: Accepted publickey for core from 139.178.68.195 port 49724 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI
Sep 5 23:50:58.818738 sshd[1685]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 23:50:58.826097 systemd-logind[1460]: New session 6 of user core.
Sep 5 23:50:58.832423 systemd[1]: Started session-6.scope - Session 6 of User core.
Sep 5 23:50:59.198604 systemd[1]: Started sshd@7-128.140.56.156:22-222.79.105.211:47466.service - OpenSSH per-connection server daemon (222.79.105.211:47466).
Sep 5 23:50:59.341802 sudo[1691]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Sep 5 23:50:59.342184 sudo[1691]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 5 23:50:59.346943 sudo[1691]: pam_unix(sudo:session): session closed for user root
Sep 5 23:50:59.352787 sudo[1690]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Sep 5 23:50:59.353087 sudo[1690]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 5 23:50:59.368546 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Sep 5 23:50:59.380468 auditctl[1694]: No rules
Sep 5 23:50:59.381056 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 5 23:50:59.381288 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Sep 5 23:50:59.388934 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Sep 5 23:50:59.418008 augenrules[1712]: No rules
Sep 5 23:50:59.419102 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Sep 5 23:50:59.421507 sudo[1690]: pam_unix(sudo:session): session closed for user root
Sep 5 23:50:59.582561 sshd[1685]: pam_unix(sshd:session): session closed for user core
Sep 5 23:50:59.587405 systemd[1]: sshd@6-128.140.56.156:22-139.178.68.195:49724.service: Deactivated successfully.
Sep 5 23:50:59.589066 systemd[1]: session-6.scope: Deactivated successfully.
Sep 5 23:50:59.591155 systemd-logind[1460]: Session 6 logged out. Waiting for processes to exit.
Sep 5 23:50:59.592473 systemd-logind[1460]: Removed session 6.
Sep 5 23:50:59.758573 systemd[1]: Started sshd@8-128.140.56.156:22-139.178.68.195:49738.service - OpenSSH per-connection server daemon (139.178.68.195:49738).
Sep 5 23:51:00.742164 sshd[1720]: Accepted publickey for core from 139.178.68.195 port 49738 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI
Sep 5 23:51:00.745266 sshd[1720]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 23:51:00.749774 systemd-logind[1460]: New session 7 of user core.
Sep 5 23:51:00.757465 systemd[1]: Started session-7.scope - Session 7 of User core.
Sep 5 23:51:01.267882 sudo[1723]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Sep 5 23:51:01.268202 sudo[1723]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 5 23:51:01.582708 systemd[1]: Starting docker.service - Docker Application Container Engine...
Sep 5 23:51:01.583056 (dockerd)[1738]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Sep 5 23:51:01.842816 dockerd[1738]: time="2025-09-05T23:51:01.841849422Z" level=info msg="Starting up"
Sep 5 23:51:01.939633 dockerd[1738]: time="2025-09-05T23:51:01.939587313Z" level=info msg="Loading containers: start."
Sep 5 23:51:02.060190 kernel: Initializing XFRM netlink socket
Sep 5 23:51:02.147233 systemd-networkd[1375]: docker0: Link UP
Sep 5 23:51:02.164723 dockerd[1738]: time="2025-09-05T23:51:02.164664148Z" level=info msg="Loading containers: done."
Sep 5 23:51:02.186960 dockerd[1738]: time="2025-09-05T23:51:02.186688107Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep 5 23:51:02.186960 dockerd[1738]: time="2025-09-05T23:51:02.186885597Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Sep 5 23:51:02.187276 dockerd[1738]: time="2025-09-05T23:51:02.187098310Z" level=info msg="Daemon has completed initialization"
Sep 5 23:51:02.232759 dockerd[1738]: time="2025-09-05T23:51:02.232410619Z" level=info msg="API listen on /run/docker.sock"
Sep 5 23:51:02.233401 systemd[1]: Started docker.service - Docker Application Container Engine.
Sep 5 23:51:02.914204 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2715943628-merged.mount: Deactivated successfully.
Sep 5 23:51:03.325510 containerd[1484]: time="2025-09-05T23:51:03.325239217Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.4\""
Sep 5 23:51:04.028178 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2748499937.mount: Deactivated successfully.
Sep 5 23:51:05.969158 containerd[1484]: time="2025-09-05T23:51:05.967315332Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 23:51:05.969714 containerd[1484]: time="2025-09-05T23:51:05.969341744Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.4: active requests=0, bytes read=27352705"
Sep 5 23:51:05.970617 containerd[1484]: time="2025-09-05T23:51:05.970565924Z" level=info msg="ImageCreate event name:\"sha256:8dd08b7ae4433dd43482755f08ee0afd6de00c6ece25a8dc5814ebb4b7978e98\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 23:51:05.976773 containerd[1484]: time="2025-09-05T23:51:05.976711677Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0d441d0d347145b3f02f20cb313239cdae86067643d7f70803fab8bac2d28876\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 23:51:05.978453 containerd[1484]: time="2025-09-05T23:51:05.978398270Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.4\" with image id \"sha256:8dd08b7ae4433dd43482755f08ee0afd6de00c6ece25a8dc5814ebb4b7978e98\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0d441d0d347145b3f02f20cb313239cdae86067643d7f70803fab8bac2d28876\", size \"27349413\" in 2.653105616s"
Sep 5 23:51:05.978453 containerd[1484]: time="2025-09-05T23:51:05.978446701Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.4\" returns image reference \"sha256:8dd08b7ae4433dd43482755f08ee0afd6de00c6ece25a8dc5814ebb4b7978e98\""
Sep 5 23:51:05.981101 containerd[1484]: time="2025-09-05T23:51:05.981035896Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.4\""
Sep 5 23:51:06.068240 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Sep 5 23:51:06.078549 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 5 23:51:06.213048 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 5 23:51:06.225164 (kubelet)[1938]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 5 23:51:06.267071 kubelet[1938]: E0905 23:51:06.267024 1938 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 5 23:51:06.271410 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 5 23:51:06.271558 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 5 23:51:07.640088 containerd[1484]: time="2025-09-05T23:51:07.639984495Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 23:51:07.641667 containerd[1484]: time="2025-09-05T23:51:07.641632718Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.4: active requests=0, bytes read=23536997"
Sep 5 23:51:07.642592 containerd[1484]: time="2025-09-05T23:51:07.642010995Z" level=info msg="ImageCreate event name:\"sha256:4e90c11ce4b770c38b26b3401b39c25e9871474a71ecb5eaea72082e21ba587d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 23:51:07.646239 containerd[1484]: time="2025-09-05T23:51:07.646194805Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:bd22c2af2f30a8f818568b4d5fe131098fdd38267e9e07872cfc33e8f5876bc3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 23:51:07.647419 containerd[1484]: time="2025-09-05T23:51:07.647375923Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.4\" with image id \"sha256:4e90c11ce4b770c38b26b3401b39c25e9871474a71ecb5eaea72082e21ba587d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:bd22c2af2f30a8f818568b4d5fe131098fdd38267e9e07872cfc33e8f5876bc3\", size \"25093155\" in 1.666270396s"
Sep 5 23:51:07.647419 containerd[1484]: time="2025-09-05T23:51:07.647417945Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.4\" returns image reference \"sha256:4e90c11ce4b770c38b26b3401b39c25e9871474a71ecb5eaea72082e21ba587d\""
Sep 5 23:51:07.647882 containerd[1484]: time="2025-09-05T23:51:07.647807425Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.4\""
Sep 5 23:51:07.806261 update_engine[1461]: I20250905 23:51:07.806185 1461 update_attempter.cc:509] Updating boot flags...
Sep 5 23:51:07.864663 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1958)
Sep 5 23:51:07.928556 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1960)
Sep 5 23:51:08.002820 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1960)
Sep 5 23:51:09.089162 containerd[1484]: time="2025-09-05T23:51:09.087457643Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 23:51:09.097827 containerd[1484]: time="2025-09-05T23:51:09.097761855Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.4: active requests=0, bytes read=18292034"
Sep 5 23:51:09.116038 containerd[1484]: time="2025-09-05T23:51:09.115955511Z" level=info msg="ImageCreate event name:\"sha256:10c245abf58045f1a856bebca4ed8e0abfabe4c0256d5a3f0c475fed70c8ce59\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 23:51:09.150724 containerd[1484]: time="2025-09-05T23:51:09.150653909Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:71533e5a960e2955a54164905e92dac516ec874a23e0bf31304db82650101a4a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 23:51:09.153361 containerd[1484]: time="2025-09-05T23:51:09.153279228Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.4\" with image id \"sha256:10c245abf58045f1a856bebca4ed8e0abfabe4c0256d5a3f0c475fed70c8ce59\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:71533e5a960e2955a54164905e92dac516ec874a23e0bf31304db82650101a4a\", size \"19848210\" in 1.505427973s"
Sep 5 23:51:09.153361 containerd[1484]: time="2025-09-05T23:51:09.153346180Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.4\" returns image reference \"sha256:10c245abf58045f1a856bebca4ed8e0abfabe4c0256d5a3f0c475fed70c8ce59\""
Sep 5 23:51:09.153897 containerd[1484]: time="2025-09-05T23:51:09.153840657Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.4\""
Sep 5 23:51:10.440225 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1335035480.mount: Deactivated successfully.
Sep 5 23:51:10.772978 containerd[1484]: time="2025-09-05T23:51:10.772911290Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 23:51:10.774093 containerd[1484]: time="2025-09-05T23:51:10.774032984Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.4: active requests=0, bytes read=28199985"
Sep 5 23:51:10.775583 containerd[1484]: time="2025-09-05T23:51:10.775511651Z" level=info msg="ImageCreate event name:\"sha256:e19c0cda155dad39120317830ddb8b2bc22070f2c6a97973e96fb09ef504ee64\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 23:51:10.779211 containerd[1484]: time="2025-09-05T23:51:10.779161145Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bb04e9247da3aaeb96406b4d530a79fc865695b6807353dd1a28871df0d7f837\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 23:51:10.780158 containerd[1484]: time="2025-09-05T23:51:10.780096448Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.4\" with image id \"sha256:e19c0cda155dad39120317830ddb8b2bc22070f2c6a97973e96fb09ef504ee64\", repo tag \"registry.k8s.io/kube-proxy:v1.33.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:bb04e9247da3aaeb96406b4d530a79fc865695b6807353dd1a28871df0d7f837\", size \"28198978\" in 1.626212257s"
Sep 5 23:51:10.780268 containerd[1484]: time="2025-09-05T23:51:10.780249751Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.4\" returns image reference \"sha256:e19c0cda155dad39120317830ddb8b2bc22070f2c6a97973e96fb09ef504ee64\""
Sep 5 23:51:10.781041 containerd[1484]: time="2025-09-05T23:51:10.781013192Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Sep 5 23:51:11.420790 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2160303080.mount: Deactivated successfully.
Sep 5 23:51:12.450993 containerd[1484]: time="2025-09-05T23:51:12.449219715Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 23:51:12.451751 containerd[1484]: time="2025-09-05T23:51:12.451705332Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152209"
Sep 5 23:51:12.454280 containerd[1484]: time="2025-09-05T23:51:12.454222935Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 23:51:12.461366 containerd[1484]: time="2025-09-05T23:51:12.461291039Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 23:51:12.462894 containerd[1484]: time="2025-09-05T23:51:12.462835759Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.68165477s"
Sep 5 23:51:12.462894 containerd[1484]: time="2025-09-05T23:51:12.462887113Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\""
Sep 5 23:51:12.463652 containerd[1484]: time="2025-09-05T23:51:12.463625710Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Sep 5 23:51:13.051915 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3103137742.mount: Deactivated successfully.
Sep 5 23:51:13.064139 containerd[1484]: time="2025-09-05T23:51:13.064064210Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 23:51:13.065624 containerd[1484]: time="2025-09-05T23:51:13.065581456Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268723"
Sep 5 23:51:13.066363 containerd[1484]: time="2025-09-05T23:51:13.065938930Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 23:51:13.069142 containerd[1484]: time="2025-09-05T23:51:13.069088333Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 23:51:13.070018 containerd[1484]: time="2025-09-05T23:51:13.069974788Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 605.446755ms"
Sep 5 23:51:13.070018 containerd[1484]: time="2025-09-05T23:51:13.070009417Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Sep 5 23:51:13.072157 containerd[1484]: time="2025-09-05T23:51:13.071093980Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\""
Sep 5 23:51:13.648890 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2212174335.mount: Deactivated successfully.
Sep 5 23:51:16.195164 containerd[1484]: time="2025-09-05T23:51:16.195083101Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 23:51:16.197999 containerd[1484]: time="2025-09-05T23:51:16.197952392Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=69465339"
Sep 5 23:51:16.200215 containerd[1484]: time="2025-09-05T23:51:16.199872939Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 23:51:16.204157 containerd[1484]: time="2025-09-05T23:51:16.203832915Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 23:51:16.205896 containerd[1484]: time="2025-09-05T23:51:16.205853123Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 3.134702821s"
Sep 5 23:51:16.205896 containerd[1484]: time="2025-09-05T23:51:16.205889607Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\""
Sep 5 23:51:16.497572 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Sep 5 23:51:16.515505 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 5 23:51:16.683743 (kubelet)[2107]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 5 23:51:16.684489 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 5 23:51:16.730002 kubelet[2107]: E0905 23:51:16.729943 2107 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 5 23:51:16.732204 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 5 23:51:16.732464 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 5 23:51:22.380807 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 5 23:51:22.390705 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 5 23:51:22.427846 systemd[1]: Reloading requested from client PID 2137 ('systemctl') (unit session-7.scope)...
Sep 5 23:51:22.427962 systemd[1]: Reloading...
Sep 5 23:51:22.575163 zram_generator::config[2184]: No configuration found.
Sep 5 23:51:22.657077 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 5 23:51:22.728040 systemd[1]: Reloading finished in 299 ms.
Sep 5 23:51:22.783592 (kubelet)[2219]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 5 23:51:22.785580 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 5 23:51:22.786364 systemd[1]: kubelet.service: Deactivated successfully.
Sep 5 23:51:22.788175 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 5 23:51:22.793538 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 5 23:51:22.941383 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 5 23:51:22.951133 (kubelet)[2230]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 5 23:51:22.999148 kubelet[2230]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 5 23:51:22.999148 kubelet[2230]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Sep 5 23:51:22.999148 kubelet[2230]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 5 23:51:22.999148 kubelet[2230]: I0905 23:51:22.998938 2230 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 5 23:51:23.542336 kubelet[2230]: I0905 23:51:23.542280 2230 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Sep 5 23:51:23.542336 kubelet[2230]: I0905 23:51:23.542320 2230 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 5 23:51:23.542629 kubelet[2230]: I0905 23:51:23.542600 2230 server.go:956] "Client rotation is on, will bootstrap in background"
Sep 5 23:51:23.566317 kubelet[2230]: E0905 23:51:23.566261 2230 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://128.140.56.156:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 128.140.56.156:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Sep 5 23:51:23.567882 kubelet[2230]: I0905 23:51:23.567835 2230 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 5 23:51:23.584161 kubelet[2230]: E0905 23:51:23.582371 2230 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Sep 5 23:51:23.584161 kubelet[2230]: I0905 23:51:23.582425 2230 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Sep 5 23:51:23.586060 kubelet[2230]: I0905 23:51:23.586010 2230 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 5 23:51:23.587687 kubelet[2230]: I0905 23:51:23.587641 2230 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 5 23:51:23.587979 kubelet[2230]: I0905 23:51:23.587806 2230 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-5-n-6045d3ec0a","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 5 23:51:23.588204 kubelet[2230]: I0905 23:51:23.588188 2230 topology_manager.go:138] "Creating topology manager with none policy"
Sep 5 23:51:23.588270 kubelet[2230]: I0905 23:51:23.588261 2230 container_manager_linux.go:303] "Creating device plugin manager"
Sep 5 23:51:23.588597 kubelet[2230]: I0905 23:51:23.588578 2230 state_mem.go:36] "Initialized new in-memory state store"
Sep 5 23:51:23.592327 kubelet[2230]: I0905 23:51:23.592298 2230 kubelet.go:480] "Attempting to sync node with API server"
Sep 5 23:51:23.592666 kubelet[2230]: I0905 23:51:23.592650 2230 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 5 23:51:23.592755 kubelet[2230]: I0905 23:51:23.592745 2230 kubelet.go:386] "Adding apiserver pod source"
Sep 5 23:51:23.595021 kubelet[2230]: I0905 23:51:23.594930 2230 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 5 23:51:23.596959 kubelet[2230]: E0905 23:51:23.596891 2230 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://128.140.56.156:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-5-n-6045d3ec0a&limit=500&resourceVersion=0\": dial tcp 128.140.56.156:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Sep 5 23:51:23.597562 kubelet[2230]: I0905 23:51:23.597523 2230 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Sep 5 23:51:23.599153 kubelet[2230]: I0905 23:51:23.599092 2230 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Sep 5 23:51:23.599334 kubelet[2230]: W0905 23:51:23.599308 2230 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 5 23:51:23.600623 kubelet[2230]: E0905 23:51:23.600563 2230 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://128.140.56.156:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 128.140.56.156:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Sep 5 23:51:23.605282 kubelet[2230]: I0905 23:51:23.605249 2230 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Sep 5 23:51:23.605409 kubelet[2230]: I0905 23:51:23.605305 2230 server.go:1289] "Started kubelet"
Sep 5 23:51:23.607158 kubelet[2230]: I0905 23:51:23.606226 2230 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Sep 5 23:51:23.607406 kubelet[2230]: I0905 23:51:23.607343 2230 server.go:317] "Adding debug handlers to kubelet server"
Sep 5 23:51:23.608402 kubelet[2230]: I0905 23:51:23.608304 2230 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 5 23:51:23.609013 kubelet[2230]: I0905 23:51:23.608729 2230 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 5 23:51:23.611242 kubelet[2230]: E0905 23:51:23.608890 2230 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://128.140.56.156:6443/api/v1/namespaces/default/events\": dial tcp 128.140.56.156:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-5-n-6045d3ec0a.186287f34958cc60 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-5-n-6045d3ec0a,UID:ci-4081-3-5-n-6045d3ec0a,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-5-n-6045d3ec0a,},FirstTimestamp:2025-09-05 23:51:23.605273696 +0000 UTC m=+0.648173080,LastTimestamp:2025-09-05 23:51:23.605273696 +0000 UTC m=+0.648173080,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-5-n-6045d3ec0a,}"
Sep 5 23:51:23.612227 kubelet[2230]: I0905 23:51:23.612201 2230 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 5 23:51:23.613638 kubelet[2230]: I0905 23:51:23.613613 2230 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 5 23:51:23.617262 kubelet[2230]: E0905 23:51:23.617236 2230 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 5 23:51:23.617875 kubelet[2230]: E0905 23:51:23.617856 2230 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-5-n-6045d3ec0a\" not found"
Sep 5 23:51:23.617984 kubelet[2230]: I0905 23:51:23.617972 2230 volume_manager.go:297] "Starting Kubelet Volume Manager"
Sep 5 23:51:23.618317 kubelet[2230]: I0905 23:51:23.618298 2230 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Sep 5 23:51:23.618989 kubelet[2230]: I0905 23:51:23.618453 2230 reconciler.go:26] "Reconciler: start to sync state"
Sep 5 23:51:23.619385 kubelet[2230]: I0905 23:51:23.619327 2230 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 5 23:51:23.619908 kubelet[2230]: E0905 23:51:23.619880 2230 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://128.140.56.156:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 128.140.56.156:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Sep 5 23:51:23.620871 kubelet[2230]: E0905 23:51:23.620843 2230 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://128.140.56.156:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-5-n-6045d3ec0a?timeout=10s\": dial tcp 128.140.56.156:6443: connect: connection refused" interval="200ms"
Sep 5 23:51:23.621109 kubelet[2230]: I0905 23:51:23.621093 2230 factory.go:223] Registration of the containerd container factory successfully
Sep 5 23:51:23.621217 kubelet[2230]: I0905 23:51:23.621206 2230 factory.go:223] Registration of the systemd container factory successfully
Sep 5 23:51:23.635465 kubelet[2230]: I0905 23:51:23.635381 2230 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Sep 5 23:51:23.636853 kubelet[2230]: I0905 23:51:23.636739 2230 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Sep 5 23:51:23.636853 kubelet[2230]: I0905 23:51:23.636772 2230 status_manager.go:230] "Starting to sync pod status with apiserver"
Sep 5 23:51:23.636853 kubelet[2230]: I0905 23:51:23.636797 2230 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Sep 5 23:51:23.636853 kubelet[2230]: I0905 23:51:23.636805 2230 kubelet.go:2436] "Starting kubelet main sync loop" Sep 5 23:51:23.637010 kubelet[2230]: E0905 23:51:23.636881 2230 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 5 23:51:23.651264 kubelet[2230]: E0905 23:51:23.651153 2230 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://128.140.56.156:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 128.140.56.156:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 5 23:51:23.654038 kubelet[2230]: I0905 23:51:23.653944 2230 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 5 23:51:23.654038 kubelet[2230]: I0905 23:51:23.654033 2230 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 5 23:51:23.654216 kubelet[2230]: I0905 23:51:23.654056 2230 state_mem.go:36] "Initialized new in-memory state store" Sep 5 23:51:23.656770 kubelet[2230]: I0905 23:51:23.656739 2230 policy_none.go:49] "None policy: Start" Sep 5 23:51:23.656770 kubelet[2230]: I0905 23:51:23.656770 2230 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 5 23:51:23.656898 kubelet[2230]: I0905 23:51:23.656784 2230 state_mem.go:35] "Initializing new in-memory state store" Sep 5 23:51:23.663933 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 5 23:51:23.678109 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 5 23:51:23.682928 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Sep 5 23:51:23.692954 kubelet[2230]: E0905 23:51:23.691791 2230 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 5 23:51:23.692954 kubelet[2230]: I0905 23:51:23.692112 2230 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 5 23:51:23.692954 kubelet[2230]: I0905 23:51:23.692169 2230 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 5 23:51:23.692954 kubelet[2230]: I0905 23:51:23.692747 2230 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 5 23:51:23.695231 kubelet[2230]: E0905 23:51:23.695205 2230 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 5 23:51:23.695411 kubelet[2230]: E0905 23:51:23.695396 2230 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-5-n-6045d3ec0a\" not found" Sep 5 23:51:23.752671 systemd[1]: Created slice kubepods-burstable-podbadb218c57c7dd5b5fad8db5f5643e05.slice - libcontainer container kubepods-burstable-podbadb218c57c7dd5b5fad8db5f5643e05.slice. Sep 5 23:51:23.762884 kubelet[2230]: E0905 23:51:23.762818 2230 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-5-n-6045d3ec0a\" not found" node="ci-4081-3-5-n-6045d3ec0a" Sep 5 23:51:23.768629 systemd[1]: Created slice kubepods-burstable-podda6fecbcb8dc072c3e8a55660fb2e8fa.slice - libcontainer container kubepods-burstable-podda6fecbcb8dc072c3e8a55660fb2e8fa.slice. 
Sep 5 23:51:23.775081 kubelet[2230]: E0905 23:51:23.775048 2230 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-5-n-6045d3ec0a\" not found" node="ci-4081-3-5-n-6045d3ec0a" Sep 5 23:51:23.778616 systemd[1]: Created slice kubepods-burstable-pod3ffba270b817aa9142f441744273a425.slice - libcontainer container kubepods-burstable-pod3ffba270b817aa9142f441744273a425.slice. Sep 5 23:51:23.781601 kubelet[2230]: E0905 23:51:23.781549 2230 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-5-n-6045d3ec0a\" not found" node="ci-4081-3-5-n-6045d3ec0a" Sep 5 23:51:23.795892 kubelet[2230]: I0905 23:51:23.795723 2230 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-5-n-6045d3ec0a" Sep 5 23:51:23.796849 kubelet[2230]: E0905 23:51:23.796579 2230 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://128.140.56.156:6443/api/v1/nodes\": dial tcp 128.140.56.156:6443: connect: connection refused" node="ci-4081-3-5-n-6045d3ec0a" Sep 5 23:51:23.821900 kubelet[2230]: E0905 23:51:23.821821 2230 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://128.140.56.156:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-5-n-6045d3ec0a?timeout=10s\": dial tcp 128.140.56.156:6443: connect: connection refused" interval="400ms" Sep 5 23:51:23.919371 kubelet[2230]: I0905 23:51:23.919311 2230 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/da6fecbcb8dc072c3e8a55660fb2e8fa-ca-certs\") pod \"kube-controller-manager-ci-4081-3-5-n-6045d3ec0a\" (UID: \"da6fecbcb8dc072c3e8a55660fb2e8fa\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-6045d3ec0a" Sep 5 23:51:23.919371 kubelet[2230]: I0905 23:51:23.919392 2230 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/da6fecbcb8dc072c3e8a55660fb2e8fa-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-5-n-6045d3ec0a\" (UID: \"da6fecbcb8dc072c3e8a55660fb2e8fa\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-6045d3ec0a" Sep 5 23:51:23.919609 kubelet[2230]: I0905 23:51:23.919437 2230 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3ffba270b817aa9142f441744273a425-kubeconfig\") pod \"kube-scheduler-ci-4081-3-5-n-6045d3ec0a\" (UID: \"3ffba270b817aa9142f441744273a425\") " pod="kube-system/kube-scheduler-ci-4081-3-5-n-6045d3ec0a" Sep 5 23:51:23.919609 kubelet[2230]: I0905 23:51:23.919476 2230 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/badb218c57c7dd5b5fad8db5f5643e05-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-5-n-6045d3ec0a\" (UID: \"badb218c57c7dd5b5fad8db5f5643e05\") " pod="kube-system/kube-apiserver-ci-4081-3-5-n-6045d3ec0a" Sep 5 23:51:23.919609 kubelet[2230]: I0905 23:51:23.919516 2230 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/da6fecbcb8dc072c3e8a55660fb2e8fa-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-5-n-6045d3ec0a\" (UID: \"da6fecbcb8dc072c3e8a55660fb2e8fa\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-6045d3ec0a" Sep 5 23:51:23.919609 kubelet[2230]: I0905 23:51:23.919549 2230 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/da6fecbcb8dc072c3e8a55660fb2e8fa-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-5-n-6045d3ec0a\" (UID: \"da6fecbcb8dc072c3e8a55660fb2e8fa\") " 
pod="kube-system/kube-controller-manager-ci-4081-3-5-n-6045d3ec0a" Sep 5 23:51:23.919609 kubelet[2230]: I0905 23:51:23.919592 2230 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/da6fecbcb8dc072c3e8a55660fb2e8fa-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-5-n-6045d3ec0a\" (UID: \"da6fecbcb8dc072c3e8a55660fb2e8fa\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-6045d3ec0a" Sep 5 23:51:23.919933 kubelet[2230]: I0905 23:51:23.919625 2230 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/badb218c57c7dd5b5fad8db5f5643e05-ca-certs\") pod \"kube-apiserver-ci-4081-3-5-n-6045d3ec0a\" (UID: \"badb218c57c7dd5b5fad8db5f5643e05\") " pod="kube-system/kube-apiserver-ci-4081-3-5-n-6045d3ec0a" Sep 5 23:51:23.919933 kubelet[2230]: I0905 23:51:23.919662 2230 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/badb218c57c7dd5b5fad8db5f5643e05-k8s-certs\") pod \"kube-apiserver-ci-4081-3-5-n-6045d3ec0a\" (UID: \"badb218c57c7dd5b5fad8db5f5643e05\") " pod="kube-system/kube-apiserver-ci-4081-3-5-n-6045d3ec0a" Sep 5 23:51:24.000620 kubelet[2230]: I0905 23:51:23.999843 2230 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-5-n-6045d3ec0a" Sep 5 23:51:24.000620 kubelet[2230]: E0905 23:51:24.000579 2230 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://128.140.56.156:6443/api/v1/nodes\": dial tcp 128.140.56.156:6443: connect: connection refused" node="ci-4081-3-5-n-6045d3ec0a" Sep 5 23:51:24.065976 containerd[1484]: time="2025-09-05T23:51:24.065552608Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-5-n-6045d3ec0a,Uid:badb218c57c7dd5b5fad8db5f5643e05,Namespace:kube-system,Attempt:0,}" Sep 5 23:51:24.078791 containerd[1484]: time="2025-09-05T23:51:24.078328723Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-5-n-6045d3ec0a,Uid:da6fecbcb8dc072c3e8a55660fb2e8fa,Namespace:kube-system,Attempt:0,}" Sep 5 23:51:24.082927 containerd[1484]: time="2025-09-05T23:51:24.082873985Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-5-n-6045d3ec0a,Uid:3ffba270b817aa9142f441744273a425,Namespace:kube-system,Attempt:0,}" Sep 5 23:51:24.223296 kubelet[2230]: E0905 23:51:24.223212 2230 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://128.140.56.156:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-5-n-6045d3ec0a?timeout=10s\": dial tcp 128.140.56.156:6443: connect: connection refused" interval="800ms" Sep 5 23:51:24.404192 kubelet[2230]: I0905 23:51:24.403832 2230 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-5-n-6045d3ec0a" Sep 5 23:51:24.404611 kubelet[2230]: E0905 23:51:24.404559 2230 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://128.140.56.156:6443/api/v1/nodes\": dial tcp 128.140.56.156:6443: connect: connection refused" node="ci-4081-3-5-n-6045d3ec0a" Sep 5 23:51:24.415557 kubelet[2230]: E0905 23:51:24.415461 2230 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://128.140.56.156:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-5-n-6045d3ec0a&limit=500&resourceVersion=0\": dial tcp 128.140.56.156:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 5 23:51:24.601537 kubelet[2230]: E0905 23:51:24.601140 2230 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get 
\"https://128.140.56.156:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 128.140.56.156:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 5 23:51:24.609506 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount899452736.mount: Deactivated successfully. Sep 5 23:51:24.617966 containerd[1484]: time="2025-09-05T23:51:24.616963061Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 5 23:51:24.621066 containerd[1484]: time="2025-09-05T23:51:24.619554184Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269193" Sep 5 23:51:24.631164 containerd[1484]: time="2025-09-05T23:51:24.630702850Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 5 23:51:24.633866 containerd[1484]: time="2025-09-05T23:51:24.633147423Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 5 23:51:24.635145 containerd[1484]: time="2025-09-05T23:51:24.635090525Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 5 23:51:24.638135 containerd[1484]: time="2025-09-05T23:51:24.637724273Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 5 23:51:24.640270 containerd[1484]: time="2025-09-05T23:51:24.640229825Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, 
bytes read=0" Sep 5 23:51:24.643028 containerd[1484]: time="2025-09-05T23:51:24.642772964Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 5 23:51:24.644849 containerd[1484]: time="2025-09-05T23:51:24.644779005Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 561.806533ms" Sep 5 23:51:24.645859 containerd[1484]: time="2025-09-05T23:51:24.645808497Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 580.155202ms" Sep 5 23:51:24.646970 containerd[1484]: time="2025-09-05T23:51:24.646907045Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 568.420735ms" Sep 5 23:51:24.778046 containerd[1484]: time="2025-09-05T23:51:24.777763866Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 23:51:24.778046 containerd[1484]: time="2025-09-05T23:51:24.777827125Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 23:51:24.778046 containerd[1484]: time="2025-09-05T23:51:24.777874709Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:51:24.779040 containerd[1484]: time="2025-09-05T23:51:24.778752052Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:51:24.782784 containerd[1484]: time="2025-09-05T23:51:24.782688919Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 23:51:24.783047 containerd[1484]: time="2025-09-05T23:51:24.782804760Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 23:51:24.783047 containerd[1484]: time="2025-09-05T23:51:24.782818995Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:51:24.783047 containerd[1484]: time="2025-09-05T23:51:24.782973343Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:51:24.788052 containerd[1484]: time="2025-09-05T23:51:24.787844973Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 23:51:24.788052 containerd[1484]: time="2025-09-05T23:51:24.787904353Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 23:51:24.788052 containerd[1484]: time="2025-09-05T23:51:24.787943500Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:51:24.788558 containerd[1484]: time="2025-09-05T23:51:24.788394587Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:51:24.807353 systemd[1]: Started cri-containerd-68cfd08576a44090534b35f8f6467cb66bb47bf86013405660718573cb1b7167.scope - libcontainer container 68cfd08576a44090534b35f8f6467cb66bb47bf86013405660718573cb1b7167. Sep 5 23:51:24.811360 systemd[1]: Started cri-containerd-98d52ca5740e0d7116ab4c7f8ac7dac6749767e1bafd69600739ff1133171bcb.scope - libcontainer container 98d52ca5740e0d7116ab4c7f8ac7dac6749767e1bafd69600739ff1133171bcb. Sep 5 23:51:24.830356 systemd[1]: Started cri-containerd-0b21bb091ceaf7d752881f21ae8fe5d6aa8e7d9b0c78ec29796b0234059758e3.scope - libcontainer container 0b21bb091ceaf7d752881f21ae8fe5d6aa8e7d9b0c78ec29796b0234059758e3. Sep 5 23:51:24.870300 containerd[1484]: time="2025-09-05T23:51:24.869710740Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-5-n-6045d3ec0a,Uid:badb218c57c7dd5b5fad8db5f5643e05,Namespace:kube-system,Attempt:0,} returns sandbox id \"68cfd08576a44090534b35f8f6467cb66bb47bf86013405660718573cb1b7167\"" Sep 5 23:51:24.883616 containerd[1484]: time="2025-09-05T23:51:24.883566129Z" level=info msg="CreateContainer within sandbox \"68cfd08576a44090534b35f8f6467cb66bb47bf86013405660718573cb1b7167\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 5 23:51:24.892501 containerd[1484]: time="2025-09-05T23:51:24.892362032Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-5-n-6045d3ec0a,Uid:3ffba270b817aa9142f441744273a425,Namespace:kube-system,Attempt:0,} returns sandbox id \"98d52ca5740e0d7116ab4c7f8ac7dac6749767e1bafd69600739ff1133171bcb\"" Sep 5 23:51:24.896322 containerd[1484]: time="2025-09-05T23:51:24.896086491Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-5-n-6045d3ec0a,Uid:da6fecbcb8dc072c3e8a55660fb2e8fa,Namespace:kube-system,Attempt:0,} returns sandbox id \"0b21bb091ceaf7d752881f21ae8fe5d6aa8e7d9b0c78ec29796b0234059758e3\"" Sep 5 23:51:24.901090 containerd[1484]: time="2025-09-05T23:51:24.900794777Z" level=info msg="CreateContainer within sandbox \"98d52ca5740e0d7116ab4c7f8ac7dac6749767e1bafd69600739ff1133171bcb\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 5 23:51:24.903372 containerd[1484]: time="2025-09-05T23:51:24.903323721Z" level=info msg="CreateContainer within sandbox \"0b21bb091ceaf7d752881f21ae8fe5d6aa8e7d9b0c78ec29796b0234059758e3\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 5 23:51:24.904357 containerd[1484]: time="2025-09-05T23:51:24.904311307Z" level=info msg="CreateContainer within sandbox \"68cfd08576a44090534b35f8f6467cb66bb47bf86013405660718573cb1b7167\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"e6eebddaccbb150344db1feef289dc8ec74922eeee1dae1959c00a765394c238\"" Sep 5 23:51:24.905958 containerd[1484]: time="2025-09-05T23:51:24.905904647Z" level=info msg="StartContainer for \"e6eebddaccbb150344db1feef289dc8ec74922eeee1dae1959c00a765394c238\"" Sep 5 23:51:24.918710 containerd[1484]: time="2025-09-05T23:51:24.918308208Z" level=info msg="CreateContainer within sandbox \"98d52ca5740e0d7116ab4c7f8ac7dac6749767e1bafd69600739ff1133171bcb\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"64ac25581b618bc193eaa13c196c59acff6da75bbe0ead4f0fff64d9f603995a\"" Sep 5 23:51:24.920076 containerd[1484]: time="2025-09-05T23:51:24.919013010Z" level=info msg="StartContainer for \"64ac25581b618bc193eaa13c196c59acff6da75bbe0ead4f0fff64d9f603995a\"" Sep 5 23:51:24.921796 containerd[1484]: time="2025-09-05T23:51:24.921744445Z" level=info msg="CreateContainer within sandbox 
\"0b21bb091ceaf7d752881f21ae8fe5d6aa8e7d9b0c78ec29796b0234059758e3\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"55d1f97ccdabefaa0b3a2545b9cacb44c3b9bf659b95fe868108a15683f55607\"" Sep 5 23:51:24.922354 containerd[1484]: time="2025-09-05T23:51:24.922319530Z" level=info msg="StartContainer for \"55d1f97ccdabefaa0b3a2545b9cacb44c3b9bf659b95fe868108a15683f55607\"" Sep 5 23:51:24.945789 systemd[1]: Started cri-containerd-e6eebddaccbb150344db1feef289dc8ec74922eeee1dae1959c00a765394c238.scope - libcontainer container e6eebddaccbb150344db1feef289dc8ec74922eeee1dae1959c00a765394c238. Sep 5 23:51:24.969526 systemd[1]: Started cri-containerd-64ac25581b618bc193eaa13c196c59acff6da75bbe0ead4f0fff64d9f603995a.scope - libcontainer container 64ac25581b618bc193eaa13c196c59acff6da75bbe0ead4f0fff64d9f603995a. Sep 5 23:51:24.975562 kubelet[2230]: E0905 23:51:24.975274 2230 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://128.140.56.156:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 128.140.56.156:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 5 23:51:24.984365 systemd[1]: Started cri-containerd-55d1f97ccdabefaa0b3a2545b9cacb44c3b9bf659b95fe868108a15683f55607.scope - libcontainer container 55d1f97ccdabefaa0b3a2545b9cacb44c3b9bf659b95fe868108a15683f55607. 
Sep 5 23:51:25.023000 containerd[1484]: time="2025-09-05T23:51:25.020934545Z" level=info msg="StartContainer for \"e6eebddaccbb150344db1feef289dc8ec74922eeee1dae1959c00a765394c238\" returns successfully" Sep 5 23:51:25.024429 kubelet[2230]: E0905 23:51:25.023897 2230 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://128.140.56.156:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-5-n-6045d3ec0a?timeout=10s\": dial tcp 128.140.56.156:6443: connect: connection refused" interval="1.6s" Sep 5 23:51:25.039369 containerd[1484]: time="2025-09-05T23:51:25.038853318Z" level=info msg="StartContainer for \"64ac25581b618bc193eaa13c196c59acff6da75bbe0ead4f0fff64d9f603995a\" returns successfully" Sep 5 23:51:25.067403 containerd[1484]: time="2025-09-05T23:51:25.067096992Z" level=info msg="StartContainer for \"55d1f97ccdabefaa0b3a2545b9cacb44c3b9bf659b95fe868108a15683f55607\" returns successfully" Sep 5 23:51:25.209190 kubelet[2230]: I0905 23:51:25.207419 2230 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-5-n-6045d3ec0a" Sep 5 23:51:25.660731 kubelet[2230]: E0905 23:51:25.660499 2230 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-5-n-6045d3ec0a\" not found" node="ci-4081-3-5-n-6045d3ec0a" Sep 5 23:51:25.662885 kubelet[2230]: E0905 23:51:25.662850 2230 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-5-n-6045d3ec0a\" not found" node="ci-4081-3-5-n-6045d3ec0a" Sep 5 23:51:25.665895 kubelet[2230]: E0905 23:51:25.665866 2230 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-5-n-6045d3ec0a\" not found" node="ci-4081-3-5-n-6045d3ec0a" Sep 5 23:51:26.669594 kubelet[2230]: E0905 23:51:26.669558 2230 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the 
cluster" err="node \"ci-4081-3-5-n-6045d3ec0a\" not found" node="ci-4081-3-5-n-6045d3ec0a" Sep 5 23:51:26.669907 kubelet[2230]: E0905 23:51:26.669855 2230 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-5-n-6045d3ec0a\" not found" node="ci-4081-3-5-n-6045d3ec0a" Sep 5 23:51:27.183112 kubelet[2230]: E0905 23:51:27.183057 2230 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-3-5-n-6045d3ec0a\" not found" node="ci-4081-3-5-n-6045d3ec0a" Sep 5 23:51:27.289227 kubelet[2230]: I0905 23:51:27.289174 2230 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-5-n-6045d3ec0a" Sep 5 23:51:27.320797 kubelet[2230]: I0905 23:51:27.320751 2230 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-5-n-6045d3ec0a" Sep 5 23:51:27.385825 kubelet[2230]: E0905 23:51:27.385767 2230 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-5-n-6045d3ec0a\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081-3-5-n-6045d3ec0a" Sep 5 23:51:27.385825 kubelet[2230]: I0905 23:51:27.385808 2230 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-5-n-6045d3ec0a" Sep 5 23:51:27.393156 kubelet[2230]: E0905 23:51:27.391552 2230 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081-3-5-n-6045d3ec0a\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081-3-5-n-6045d3ec0a" Sep 5 23:51:27.393156 kubelet[2230]: I0905 23:51:27.391587 2230 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-5-n-6045d3ec0a" Sep 5 23:51:27.394009 kubelet[2230]: E0905 23:51:27.393974 2230 kubelet.go:3311] "Failed creating a mirror pod" err="pods 
\"kube-scheduler-ci-4081-3-5-n-6045d3ec0a\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081-3-5-n-6045d3ec0a" Sep 5 23:51:27.463860 kubelet[2230]: I0905 23:51:27.463392 2230 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-5-n-6045d3ec0a" Sep 5 23:51:27.466299 kubelet[2230]: E0905 23:51:27.466197 2230 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081-3-5-n-6045d3ec0a\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081-3-5-n-6045d3ec0a" Sep 5 23:51:27.599078 kubelet[2230]: I0905 23:51:27.598771 2230 apiserver.go:52] "Watching apiserver" Sep 5 23:51:27.618637 kubelet[2230]: I0905 23:51:27.618580 2230 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 5 23:51:28.853750 kubelet[2230]: I0905 23:51:28.853711 2230 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-5-n-6045d3ec0a" Sep 5 23:51:29.473080 systemd[1]: Reloading requested from client PID 2509 ('systemctl') (unit session-7.scope)... Sep 5 23:51:29.473100 systemd[1]: Reloading... Sep 5 23:51:29.570164 zram_generator::config[2554]: No configuration found. Sep 5 23:51:29.677237 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 5 23:51:29.762863 systemd[1]: Reloading finished in 289 ms. Sep 5 23:51:29.806890 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 23:51:29.807507 kubelet[2230]: I0905 23:51:29.807371 2230 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 5 23:51:29.821995 systemd[1]: kubelet.service: Deactivated successfully. 
Sep 5 23:51:29.822337 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 23:51:29.822403 systemd[1]: kubelet.service: Consumed 1.061s CPU time, 128.0M memory peak, 0B memory swap peak. Sep 5 23:51:29.831646 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 23:51:29.957472 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 23:51:29.969834 (kubelet)[2596]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 5 23:51:30.023696 kubelet[2596]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 5 23:51:30.025678 kubelet[2596]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 5 23:51:30.025678 kubelet[2596]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 5 23:51:30.025678 kubelet[2596]: I0905 23:51:30.024180 2596 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 5 23:51:30.035153 kubelet[2596]: I0905 23:51:30.034700 2596 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 5 23:51:30.036163 kubelet[2596]: I0905 23:51:30.035428 2596 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 5 23:51:30.036290 kubelet[2596]: I0905 23:51:30.036266 2596 server.go:956] "Client rotation is on, will bootstrap in background" Sep 5 23:51:30.038227 kubelet[2596]: I0905 23:51:30.038033 2596 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Sep 5 23:51:30.041859 kubelet[2596]: I0905 23:51:30.041795 2596 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 5 23:51:30.047171 kubelet[2596]: E0905 23:51:30.046356 2596 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 5 23:51:30.047171 kubelet[2596]: I0905 23:51:30.046386 2596 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 5 23:51:30.048490 kubelet[2596]: I0905 23:51:30.048462 2596 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 5 23:51:30.049389 kubelet[2596]: I0905 23:51:30.048753 2596 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 5 23:51:30.049389 kubelet[2596]: I0905 23:51:30.048783 2596 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-5-n-6045d3ec0a","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 5 23:51:30.049389 kubelet[2596]: I0905 23:51:30.048972 2596 topology_manager.go:138] "Creating topology manager with none policy" Sep 5 
23:51:30.049389 kubelet[2596]: I0905 23:51:30.048994 2596 container_manager_linux.go:303] "Creating device plugin manager" Sep 5 23:51:30.049389 kubelet[2596]: I0905 23:51:30.049038 2596 state_mem.go:36] "Initialized new in-memory state store" Sep 5 23:51:30.049676 kubelet[2596]: I0905 23:51:30.049367 2596 kubelet.go:480] "Attempting to sync node with API server" Sep 5 23:51:30.049676 kubelet[2596]: I0905 23:51:30.049383 2596 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 5 23:51:30.050137 kubelet[2596]: I0905 23:51:30.050027 2596 kubelet.go:386] "Adding apiserver pod source" Sep 5 23:51:30.050137 kubelet[2596]: I0905 23:51:30.050059 2596 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 5 23:51:30.056676 kubelet[2596]: I0905 23:51:30.056640 2596 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 5 23:51:30.057669 kubelet[2596]: I0905 23:51:30.057523 2596 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 5 23:51:30.065654 kubelet[2596]: I0905 23:51:30.065629 2596 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 5 23:51:30.065943 kubelet[2596]: I0905 23:51:30.065842 2596 server.go:1289] "Started kubelet" Sep 5 23:51:30.068178 kubelet[2596]: I0905 23:51:30.068157 2596 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 5 23:51:30.076069 kubelet[2596]: I0905 23:51:30.072929 2596 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 5 23:51:30.076069 kubelet[2596]: I0905 23:51:30.073788 2596 server.go:317] "Adding debug handlers to kubelet server" Sep 5 23:51:30.080023 kubelet[2596]: I0905 23:51:30.079730 2596 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 5 23:51:30.080638 kubelet[2596]: I0905 23:51:30.080617 2596 server.go:255] 
"Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 5 23:51:30.081059 kubelet[2596]: I0905 23:51:30.081044 2596 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 5 23:51:30.082764 kubelet[2596]: I0905 23:51:30.082748 2596 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 5 23:51:30.083093 kubelet[2596]: E0905 23:51:30.083078 2596 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-5-n-6045d3ec0a\" not found" Sep 5 23:51:30.085182 kubelet[2596]: I0905 23:51:30.085166 2596 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 5 23:51:30.085387 kubelet[2596]: I0905 23:51:30.085375 2596 reconciler.go:26] "Reconciler: start to sync state" Sep 5 23:51:30.098689 kubelet[2596]: I0905 23:51:30.098631 2596 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 5 23:51:30.101044 kubelet[2596]: I0905 23:51:30.100419 2596 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 5 23:51:30.101044 kubelet[2596]: I0905 23:51:30.100459 2596 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 5 23:51:30.101044 kubelet[2596]: I0905 23:51:30.100477 2596 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Sep 5 23:51:30.101044 kubelet[2596]: I0905 23:51:30.100486 2596 kubelet.go:2436] "Starting kubelet main sync loop" Sep 5 23:51:30.101044 kubelet[2596]: E0905 23:51:30.100525 2596 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 5 23:51:30.110537 kubelet[2596]: I0905 23:51:30.110507 2596 factory.go:223] Registration of the systemd container factory successfully Sep 5 23:51:30.110779 kubelet[2596]: I0905 23:51:30.110759 2596 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 5 23:51:30.113796 kubelet[2596]: I0905 23:51:30.113772 2596 factory.go:223] Registration of the containerd container factory successfully Sep 5 23:51:30.166603 kubelet[2596]: I0905 23:51:30.166579 2596 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 5 23:51:30.166767 kubelet[2596]: I0905 23:51:30.166754 2596 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 5 23:51:30.166841 kubelet[2596]: I0905 23:51:30.166833 2596 state_mem.go:36] "Initialized new in-memory state store" Sep 5 23:51:30.167063 kubelet[2596]: I0905 23:51:30.167045 2596 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 5 23:51:30.167231 kubelet[2596]: I0905 23:51:30.167171 2596 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 5 23:51:30.167296 kubelet[2596]: I0905 23:51:30.167288 2596 policy_none.go:49] "None policy: Start" Sep 5 23:51:30.167366 kubelet[2596]: I0905 23:51:30.167348 2596 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 5 23:51:30.167419 kubelet[2596]: I0905 23:51:30.167411 2596 state_mem.go:35] "Initializing new in-memory state store" Sep 5 23:51:30.167591 kubelet[2596]: I0905 23:51:30.167581 2596 state_mem.go:75] "Updated machine memory state" Sep 5 23:51:30.172384 kubelet[2596]: E0905 
23:51:30.172359 2596 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 5 23:51:30.172713 kubelet[2596]: I0905 23:51:30.172696 2596 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 5 23:51:30.172815 kubelet[2596]: I0905 23:51:30.172782 2596 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 5 23:51:30.173284 kubelet[2596]: I0905 23:51:30.173248 2596 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 5 23:51:30.175975 kubelet[2596]: E0905 23:51:30.175955 2596 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 5 23:51:30.202338 kubelet[2596]: I0905 23:51:30.202308 2596 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-5-n-6045d3ec0a" Sep 5 23:51:30.202837 kubelet[2596]: I0905 23:51:30.202820 2596 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-5-n-6045d3ec0a" Sep 5 23:51:30.203131 kubelet[2596]: I0905 23:51:30.202955 2596 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-5-n-6045d3ec0a" Sep 5 23:51:30.214336 kubelet[2596]: E0905 23:51:30.214164 2596 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-5-n-6045d3ec0a\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-5-n-6045d3ec0a" Sep 5 23:51:30.278227 kubelet[2596]: I0905 23:51:30.276578 2596 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-5-n-6045d3ec0a" Sep 5 23:51:30.286454 kubelet[2596]: I0905 23:51:30.286414 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/da6fecbcb8dc072c3e8a55660fb2e8fa-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-5-n-6045d3ec0a\" (UID: \"da6fecbcb8dc072c3e8a55660fb2e8fa\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-6045d3ec0a" Sep 5 23:51:30.286723 kubelet[2596]: I0905 23:51:30.286703 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3ffba270b817aa9142f441744273a425-kubeconfig\") pod \"kube-scheduler-ci-4081-3-5-n-6045d3ec0a\" (UID: \"3ffba270b817aa9142f441744273a425\") " pod="kube-system/kube-scheduler-ci-4081-3-5-n-6045d3ec0a" Sep 5 23:51:30.286834 kubelet[2596]: I0905 23:51:30.286816 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/da6fecbcb8dc072c3e8a55660fb2e8fa-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-5-n-6045d3ec0a\" (UID: \"da6fecbcb8dc072c3e8a55660fb2e8fa\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-6045d3ec0a" Sep 5 23:51:30.287018 kubelet[2596]: I0905 23:51:30.286976 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/badb218c57c7dd5b5fad8db5f5643e05-ca-certs\") pod \"kube-apiserver-ci-4081-3-5-n-6045d3ec0a\" (UID: \"badb218c57c7dd5b5fad8db5f5643e05\") " pod="kube-system/kube-apiserver-ci-4081-3-5-n-6045d3ec0a" Sep 5 23:51:30.287106 kubelet[2596]: I0905 23:51:30.287049 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/badb218c57c7dd5b5fad8db5f5643e05-k8s-certs\") pod \"kube-apiserver-ci-4081-3-5-n-6045d3ec0a\" (UID: \"badb218c57c7dd5b5fad8db5f5643e05\") " pod="kube-system/kube-apiserver-ci-4081-3-5-n-6045d3ec0a" Sep 5 23:51:30.287266 kubelet[2596]: I0905 23:51:30.287096 2596 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/badb218c57c7dd5b5fad8db5f5643e05-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-5-n-6045d3ec0a\" (UID: \"badb218c57c7dd5b5fad8db5f5643e05\") " pod="kube-system/kube-apiserver-ci-4081-3-5-n-6045d3ec0a" Sep 5 23:51:30.287338 kubelet[2596]: I0905 23:51:30.287309 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/da6fecbcb8dc072c3e8a55660fb2e8fa-ca-certs\") pod \"kube-controller-manager-ci-4081-3-5-n-6045d3ec0a\" (UID: \"da6fecbcb8dc072c3e8a55660fb2e8fa\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-6045d3ec0a" Sep 5 23:51:30.287417 kubelet[2596]: I0905 23:51:30.287363 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/da6fecbcb8dc072c3e8a55660fb2e8fa-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-5-n-6045d3ec0a\" (UID: \"da6fecbcb8dc072c3e8a55660fb2e8fa\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-6045d3ec0a" Sep 5 23:51:30.287473 kubelet[2596]: I0905 23:51:30.287415 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/da6fecbcb8dc072c3e8a55660fb2e8fa-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-5-n-6045d3ec0a\" (UID: \"da6fecbcb8dc072c3e8a55660fb2e8fa\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-6045d3ec0a" Sep 5 23:51:30.292754 kubelet[2596]: I0905 23:51:30.292271 2596 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081-3-5-n-6045d3ec0a" Sep 5 23:51:30.292754 kubelet[2596]: I0905 23:51:30.292420 2596 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-5-n-6045d3ec0a" Sep 5 
23:51:30.468820 sudo[2636]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 5 23:51:30.469643 sudo[2636]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Sep 5 23:51:30.906395 sudo[2636]: pam_unix(sudo:session): session closed for user root Sep 5 23:51:31.065132 kubelet[2596]: I0905 23:51:31.063080 2596 apiserver.go:52] "Watching apiserver" Sep 5 23:51:31.085677 kubelet[2596]: I0905 23:51:31.085629 2596 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 5 23:51:31.141856 kubelet[2596]: I0905 23:51:31.141733 2596 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-5-n-6045d3ec0a" Sep 5 23:51:31.160585 kubelet[2596]: E0905 23:51:31.157337 2596 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-5-n-6045d3ec0a\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-5-n-6045d3ec0a" Sep 5 23:51:31.171947 kubelet[2596]: I0905 23:51:31.171871 2596 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-5-n-6045d3ec0a" podStartSLOduration=1.171856474 podStartE2EDuration="1.171856474s" podCreationTimestamp="2025-09-05 23:51:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 23:51:31.171822395 +0000 UTC m=+1.193673676" watchObservedRunningTime="2025-09-05 23:51:31.171856474 +0000 UTC m=+1.193707715" Sep 5 23:51:31.185562 kubelet[2596]: I0905 23:51:31.185490 2596 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-5-n-6045d3ec0a" podStartSLOduration=3.185469986 podStartE2EDuration="3.185469986s" podCreationTimestamp="2025-09-05 23:51:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-09-05 23:51:31.184373753 +0000 UTC m=+1.206225034" watchObservedRunningTime="2025-09-05 23:51:31.185469986 +0000 UTC m=+1.207321267" Sep 5 23:51:31.222848 kubelet[2596]: I0905 23:51:31.222766 2596 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-5-n-6045d3ec0a" podStartSLOduration=1.2227374229999999 podStartE2EDuration="1.222737423s" podCreationTimestamp="2025-09-05 23:51:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 23:51:31.204947659 +0000 UTC m=+1.226798940" watchObservedRunningTime="2025-09-05 23:51:31.222737423 +0000 UTC m=+1.244588744" Sep 5 23:51:32.865043 sudo[1723]: pam_unix(sudo:session): session closed for user root Sep 5 23:51:33.029492 sshd[1720]: pam_unix(sshd:session): session closed for user core Sep 5 23:51:33.036268 systemd-logind[1460]: Session 7 logged out. Waiting for processes to exit. Sep 5 23:51:33.037698 systemd[1]: sshd@8-128.140.56.156:22-139.178.68.195:49738.service: Deactivated successfully. Sep 5 23:51:33.040373 systemd[1]: session-7.scope: Deactivated successfully. Sep 5 23:51:33.040764 systemd[1]: session-7.scope: Consumed 8.312s CPU time, 152.5M memory peak, 0B memory swap peak. Sep 5 23:51:33.041717 systemd-logind[1460]: Removed session 7. Sep 5 23:51:36.387720 kubelet[2596]: I0905 23:51:36.387299 2596 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 5 23:51:36.388093 containerd[1484]: time="2025-09-05T23:51:36.387615273Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Sep 5 23:51:36.388892 kubelet[2596]: I0905 23:51:36.388592 2596 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 5 23:51:37.073509 systemd[1]: Created slice kubepods-besteffort-pod07fea7c5_c31c_4761_b8dd_e91b4e57f078.slice - libcontainer container kubepods-besteffort-pod07fea7c5_c31c_4761_b8dd_e91b4e57f078.slice. Sep 5 23:51:37.102724 systemd[1]: Created slice kubepods-burstable-podff5e92fd_da71_4009_afe9_0eef1ae950e6.slice - libcontainer container kubepods-burstable-podff5e92fd_da71_4009_afe9_0eef1ae950e6.slice. Sep 5 23:51:37.127166 kubelet[2596]: I0905 23:51:37.126890 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ff5e92fd-da71-4009-afe9-0eef1ae950e6-hostproc\") pod \"cilium-zl74v\" (UID: \"ff5e92fd-da71-4009-afe9-0eef1ae950e6\") " pod="kube-system/cilium-zl74v" Sep 5 23:51:37.127166 kubelet[2596]: I0905 23:51:37.126931 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ff5e92fd-da71-4009-afe9-0eef1ae950e6-xtables-lock\") pod \"cilium-zl74v\" (UID: \"ff5e92fd-da71-4009-afe9-0eef1ae950e6\") " pod="kube-system/cilium-zl74v" Sep 5 23:51:37.127166 kubelet[2596]: I0905 23:51:37.126954 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/07fea7c5-c31c-4761-b8dd-e91b4e57f078-lib-modules\") pod \"kube-proxy-qnvcs\" (UID: \"07fea7c5-c31c-4761-b8dd-e91b4e57f078\") " pod="kube-system/kube-proxy-qnvcs" Sep 5 23:51:37.127166 kubelet[2596]: I0905 23:51:37.126975 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ff5e92fd-da71-4009-afe9-0eef1ae950e6-cilium-run\") pod \"cilium-zl74v\" (UID: 
\"ff5e92fd-da71-4009-afe9-0eef1ae950e6\") " pod="kube-system/cilium-zl74v" Sep 5 23:51:37.127166 kubelet[2596]: I0905 23:51:37.126990 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ff5e92fd-da71-4009-afe9-0eef1ae950e6-cilium-cgroup\") pod \"cilium-zl74v\" (UID: \"ff5e92fd-da71-4009-afe9-0eef1ae950e6\") " pod="kube-system/cilium-zl74v" Sep 5 23:51:37.127166 kubelet[2596]: I0905 23:51:37.127004 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ff5e92fd-da71-4009-afe9-0eef1ae950e6-etc-cni-netd\") pod \"cilium-zl74v\" (UID: \"ff5e92fd-da71-4009-afe9-0eef1ae950e6\") " pod="kube-system/cilium-zl74v" Sep 5 23:51:37.127436 kubelet[2596]: I0905 23:51:37.127022 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ff5e92fd-da71-4009-afe9-0eef1ae950e6-lib-modules\") pod \"cilium-zl74v\" (UID: \"ff5e92fd-da71-4009-afe9-0eef1ae950e6\") " pod="kube-system/cilium-zl74v" Sep 5 23:51:37.127436 kubelet[2596]: I0905 23:51:37.127039 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ff5e92fd-da71-4009-afe9-0eef1ae950e6-clustermesh-secrets\") pod \"cilium-zl74v\" (UID: \"ff5e92fd-da71-4009-afe9-0eef1ae950e6\") " pod="kube-system/cilium-zl74v" Sep 5 23:51:37.127436 kubelet[2596]: I0905 23:51:37.127082 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rmrx\" (UniqueName: \"kubernetes.io/projected/07fea7c5-c31c-4761-b8dd-e91b4e57f078-kube-api-access-5rmrx\") pod \"kube-proxy-qnvcs\" (UID: \"07fea7c5-c31c-4761-b8dd-e91b4e57f078\") " pod="kube-system/kube-proxy-qnvcs" Sep 5 23:51:37.127436 kubelet[2596]: 
I0905 23:51:37.127101 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/07fea7c5-c31c-4761-b8dd-e91b4e57f078-kube-proxy\") pod \"kube-proxy-qnvcs\" (UID: \"07fea7c5-c31c-4761-b8dd-e91b4e57f078\") " pod="kube-system/kube-proxy-qnvcs" Sep 5 23:51:37.127436 kubelet[2596]: I0905 23:51:37.127132 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ff5e92fd-da71-4009-afe9-0eef1ae950e6-cni-path\") pod \"cilium-zl74v\" (UID: \"ff5e92fd-da71-4009-afe9-0eef1ae950e6\") " pod="kube-system/cilium-zl74v" Sep 5 23:51:37.127969 kubelet[2596]: I0905 23:51:37.127587 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ff5e92fd-da71-4009-afe9-0eef1ae950e6-cilium-config-path\") pod \"cilium-zl74v\" (UID: \"ff5e92fd-da71-4009-afe9-0eef1ae950e6\") " pod="kube-system/cilium-zl74v" Sep 5 23:51:37.127969 kubelet[2596]: I0905 23:51:37.127617 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ff5e92fd-da71-4009-afe9-0eef1ae950e6-host-proc-sys-net\") pod \"cilium-zl74v\" (UID: \"ff5e92fd-da71-4009-afe9-0eef1ae950e6\") " pod="kube-system/cilium-zl74v" Sep 5 23:51:37.127969 kubelet[2596]: I0905 23:51:37.127830 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhxl8\" (UniqueName: \"kubernetes.io/projected/ff5e92fd-da71-4009-afe9-0eef1ae950e6-kube-api-access-rhxl8\") pod \"cilium-zl74v\" (UID: \"ff5e92fd-da71-4009-afe9-0eef1ae950e6\") " pod="kube-system/cilium-zl74v" Sep 5 23:51:37.127969 kubelet[2596]: I0905 23:51:37.127855 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ff5e92fd-da71-4009-afe9-0eef1ae950e6-bpf-maps\") pod \"cilium-zl74v\" (UID: \"ff5e92fd-da71-4009-afe9-0eef1ae950e6\") " pod="kube-system/cilium-zl74v" Sep 5 23:51:37.127969 kubelet[2596]: I0905 23:51:37.127908 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ff5e92fd-da71-4009-afe9-0eef1ae950e6-host-proc-sys-kernel\") pod \"cilium-zl74v\" (UID: \"ff5e92fd-da71-4009-afe9-0eef1ae950e6\") " pod="kube-system/cilium-zl74v" Sep 5 23:51:37.128225 kubelet[2596]: I0905 23:51:37.127924 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ff5e92fd-da71-4009-afe9-0eef1ae950e6-hubble-tls\") pod \"cilium-zl74v\" (UID: \"ff5e92fd-da71-4009-afe9-0eef1ae950e6\") " pod="kube-system/cilium-zl74v" Sep 5 23:51:37.128361 kubelet[2596]: I0905 23:51:37.128294 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/07fea7c5-c31c-4761-b8dd-e91b4e57f078-xtables-lock\") pod \"kube-proxy-qnvcs\" (UID: \"07fea7c5-c31c-4761-b8dd-e91b4e57f078\") " pod="kube-system/kube-proxy-qnvcs" Sep 5 23:51:37.382845 containerd[1484]: time="2025-09-05T23:51:37.382738588Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qnvcs,Uid:07fea7c5-c31c-4761-b8dd-e91b4e57f078,Namespace:kube-system,Attempt:0,}" Sep 5 23:51:37.408739 containerd[1484]: time="2025-09-05T23:51:37.408343427Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zl74v,Uid:ff5e92fd-da71-4009-afe9-0eef1ae950e6,Namespace:kube-system,Attempt:0,}" Sep 5 23:51:37.414106 containerd[1484]: time="2025-09-05T23:51:37.414001480Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 23:51:37.415176 containerd[1484]: time="2025-09-05T23:51:37.414077000Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 23:51:37.415176 containerd[1484]: time="2025-09-05T23:51:37.414092320Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:51:37.415176 containerd[1484]: time="2025-09-05T23:51:37.414183400Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:51:37.441624 containerd[1484]: time="2025-09-05T23:51:37.441365152Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 23:51:37.441624 containerd[1484]: time="2025-09-05T23:51:37.441433031Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 23:51:37.441624 containerd[1484]: time="2025-09-05T23:51:37.441456991Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:51:37.441624 containerd[1484]: time="2025-09-05T23:51:37.441548711Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:51:37.459597 systemd[1]: Started cri-containerd-b577dfcad8a27287d04646d8d2b76d2d1cdae2968b3888a432d9165f41e4e262.scope - libcontainer container b577dfcad8a27287d04646d8d2b76d2d1cdae2968b3888a432d9165f41e4e262. Sep 5 23:51:37.465312 systemd[1]: Started cri-containerd-115f43ad54ea1be4448bd3b6241a615debf03a65a23e6d792843f4bb3c813def.scope - libcontainer container 115f43ad54ea1be4448bd3b6241a615debf03a65a23e6d792843f4bb3c813def. 
Sep 5 23:51:37.501017 containerd[1484]: time="2025-09-05T23:51:37.500930431Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zl74v,Uid:ff5e92fd-da71-4009-afe9-0eef1ae950e6,Namespace:kube-system,Attempt:0,} returns sandbox id \"115f43ad54ea1be4448bd3b6241a615debf03a65a23e6d792843f4bb3c813def\"" Sep 5 23:51:37.506180 containerd[1484]: time="2025-09-05T23:51:37.505869367Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 5 23:51:37.506784 containerd[1484]: time="2025-09-05T23:51:37.506750843Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qnvcs,Uid:07fea7c5-c31c-4761-b8dd-e91b4e57f078,Namespace:kube-system,Attempt:0,} returns sandbox id \"b577dfcad8a27287d04646d8d2b76d2d1cdae2968b3888a432d9165f41e4e262\"" Sep 5 23:51:37.514582 containerd[1484]: time="2025-09-05T23:51:37.514543967Z" level=info msg="CreateContainer within sandbox \"b577dfcad8a27287d04646d8d2b76d2d1cdae2968b3888a432d9165f41e4e262\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 5 23:51:37.535603 containerd[1484]: time="2025-09-05T23:51:37.535420548Z" level=info msg="CreateContainer within sandbox \"b577dfcad8a27287d04646d8d2b76d2d1cdae2968b3888a432d9165f41e4e262\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"7bb6584aed8306420f84889532eaa48a72bf2b591be9880f292ca452ac01023d\"" Sep 5 23:51:37.536326 containerd[1484]: time="2025-09-05T23:51:37.536291984Z" level=info msg="StartContainer for \"7bb6584aed8306420f84889532eaa48a72bf2b591be9880f292ca452ac01023d\"" Sep 5 23:51:37.569208 systemd[1]: Created slice kubepods-besteffort-pod43baeec1_51dd_45df_b7ee_b6b3faf1b6bd.slice - libcontainer container kubepods-besteffort-pod43baeec1_51dd_45df_b7ee_b6b3faf1b6bd.slice. 
Sep 5 23:51:37.602584 systemd[1]: Started cri-containerd-7bb6584aed8306420f84889532eaa48a72bf2b591be9880f292ca452ac01023d.scope - libcontainer container 7bb6584aed8306420f84889532eaa48a72bf2b591be9880f292ca452ac01023d. Sep 5 23:51:37.639111 kubelet[2596]: I0905 23:51:37.638971 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xv54b\" (UniqueName: \"kubernetes.io/projected/43baeec1-51dd-45df-b7ee-b6b3faf1b6bd-kube-api-access-xv54b\") pod \"cilium-operator-6c4d7847fc-kf2k7\" (UID: \"43baeec1-51dd-45df-b7ee-b6b3faf1b6bd\") " pod="kube-system/cilium-operator-6c4d7847fc-kf2k7" Sep 5 23:51:37.639111 kubelet[2596]: I0905 23:51:37.639029 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/43baeec1-51dd-45df-b7ee-b6b3faf1b6bd-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-kf2k7\" (UID: \"43baeec1-51dd-45df-b7ee-b6b3faf1b6bd\") " pod="kube-system/cilium-operator-6c4d7847fc-kf2k7" Sep 5 23:51:37.659646 containerd[1484]: time="2025-09-05T23:51:37.659508483Z" level=info msg="StartContainer for \"7bb6584aed8306420f84889532eaa48a72bf2b591be9880f292ca452ac01023d\" returns successfully" Sep 5 23:51:37.876025 containerd[1484]: time="2025-09-05T23:51:37.875450546Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-kf2k7,Uid:43baeec1-51dd-45df-b7ee-b6b3faf1b6bd,Namespace:kube-system,Attempt:0,}" Sep 5 23:51:37.901783 containerd[1484]: time="2025-09-05T23:51:37.901616342Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 23:51:37.902213 containerd[1484]: time="2025-09-05T23:51:37.902185860Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 23:51:37.902427 containerd[1484]: time="2025-09-05T23:51:37.902288579Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:51:37.902497 containerd[1484]: time="2025-09-05T23:51:37.902414499Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:51:37.917346 systemd[1]: Started cri-containerd-cfc20d51423d4043ff71af19f3ec4794ea74c641568b1b108be7d498427247f1.scope - libcontainer container cfc20d51423d4043ff71af19f3ec4794ea74c641568b1b108be7d498427247f1. Sep 5 23:51:37.948578 containerd[1484]: time="2025-09-05T23:51:37.948439962Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-kf2k7,Uid:43baeec1-51dd-45df-b7ee-b6b3faf1b6bd,Namespace:kube-system,Attempt:0,} returns sandbox id \"cfc20d51423d4043ff71af19f3ec4794ea74c641568b1b108be7d498427247f1\"" Sep 5 23:51:38.174905 kubelet[2596]: I0905 23:51:38.174614 2596 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-qnvcs" podStartSLOduration=1.174599337 podStartE2EDuration="1.174599337s" podCreationTimestamp="2025-09-05 23:51:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 23:51:38.174282178 +0000 UTC m=+8.196133419" watchObservedRunningTime="2025-09-05 23:51:38.174599337 +0000 UTC m=+8.196450538" Sep 5 23:51:47.352619 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3105639793.mount: Deactivated successfully. 
Sep 5 23:51:48.809883 containerd[1484]: time="2025-09-05T23:51:48.809799007Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:51:48.812431 containerd[1484]: time="2025-09-05T23:51:48.812165601Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157648041" Sep 5 23:51:48.814430 containerd[1484]: time="2025-09-05T23:51:48.814379955Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:51:48.817596 containerd[1484]: time="2025-09-05T23:51:48.817412066Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 11.311499779s" Sep 5 23:51:48.817596 containerd[1484]: time="2025-09-05T23:51:48.817463026Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Sep 5 23:51:48.821085 containerd[1484]: time="2025-09-05T23:51:48.821044096Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 5 23:51:48.825110 containerd[1484]: time="2025-09-05T23:51:48.825048325Z" level=info msg="CreateContainer within sandbox \"115f43ad54ea1be4448bd3b6241a615debf03a65a23e6d792843f4bb3c813def\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 5 23:51:48.853980 containerd[1484]: time="2025-09-05T23:51:48.853829645Z" level=info msg="CreateContainer within sandbox \"115f43ad54ea1be4448bd3b6241a615debf03a65a23e6d792843f4bb3c813def\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d6a0ef29690e812d4d307a5e93462f623f6b4ee6e059bca0695b0703dbcc9141\"" Sep 5 23:51:48.855538 containerd[1484]: time="2025-09-05T23:51:48.855506000Z" level=info msg="StartContainer for \"d6a0ef29690e812d4d307a5e93462f623f6b4ee6e059bca0695b0703dbcc9141\"" Sep 5 23:51:48.894562 systemd[1]: Started cri-containerd-d6a0ef29690e812d4d307a5e93462f623f6b4ee6e059bca0695b0703dbcc9141.scope - libcontainer container d6a0ef29690e812d4d307a5e93462f623f6b4ee6e059bca0695b0703dbcc9141. Sep 5 23:51:48.924515 containerd[1484]: time="2025-09-05T23:51:48.924387529Z" level=info msg="StartContainer for \"d6a0ef29690e812d4d307a5e93462f623f6b4ee6e059bca0695b0703dbcc9141\" returns successfully" Sep 5 23:51:48.942380 systemd[1]: cri-containerd-d6a0ef29690e812d4d307a5e93462f623f6b4ee6e059bca0695b0703dbcc9141.scope: Deactivated successfully. 
Sep 5 23:51:49.094497 containerd[1484]: time="2025-09-05T23:51:49.093915229Z" level=info msg="shim disconnected" id=d6a0ef29690e812d4d307a5e93462f623f6b4ee6e059bca0695b0703dbcc9141 namespace=k8s.io Sep 5 23:51:49.094497 containerd[1484]: time="2025-09-05T23:51:49.094068028Z" level=warning msg="cleaning up after shim disconnected" id=d6a0ef29690e812d4d307a5e93462f623f6b4ee6e059bca0695b0703dbcc9141 namespace=k8s.io Sep 5 23:51:49.094497 containerd[1484]: time="2025-09-05T23:51:49.094089548Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 5 23:51:49.225282 containerd[1484]: time="2025-09-05T23:51:49.225132960Z" level=info msg="CreateContainer within sandbox \"115f43ad54ea1be4448bd3b6241a615debf03a65a23e6d792843f4bb3c813def\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 5 23:51:49.309549 containerd[1484]: time="2025-09-05T23:51:49.309418855Z" level=info msg="CreateContainer within sandbox \"115f43ad54ea1be4448bd3b6241a615debf03a65a23e6d792843f4bb3c813def\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"f7aef4c721fa6469e4c9d1eb09b49d4e8000f0546d27307d15da40890d8a4c03\"" Sep 5 23:51:49.310488 containerd[1484]: time="2025-09-05T23:51:49.310425733Z" level=info msg="StartContainer for \"f7aef4c721fa6469e4c9d1eb09b49d4e8000f0546d27307d15da40890d8a4c03\"" Sep 5 23:51:49.341575 systemd[1]: Started cri-containerd-f7aef4c721fa6469e4c9d1eb09b49d4e8000f0546d27307d15da40890d8a4c03.scope - libcontainer container f7aef4c721fa6469e4c9d1eb09b49d4e8000f0546d27307d15da40890d8a4c03. Sep 5 23:51:49.373399 containerd[1484]: time="2025-09-05T23:51:49.373280885Z" level=info msg="StartContainer for \"f7aef4c721fa6469e4c9d1eb09b49d4e8000f0546d27307d15da40890d8a4c03\" returns successfully" Sep 5 23:51:49.384453 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 5 23:51:49.384668 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. 
Sep 5 23:51:49.384738 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Sep 5 23:51:49.392719 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 5 23:51:49.393000 systemd[1]: cri-containerd-f7aef4c721fa6469e4c9d1eb09b49d4e8000f0546d27307d15da40890d8a4c03.scope: Deactivated successfully. Sep 5 23:51:49.415161 containerd[1484]: time="2025-09-05T23:51:49.415072534Z" level=info msg="shim disconnected" id=f7aef4c721fa6469e4c9d1eb09b49d4e8000f0546d27307d15da40890d8a4c03 namespace=k8s.io Sep 5 23:51:49.415419 containerd[1484]: time="2025-09-05T23:51:49.415397973Z" level=warning msg="cleaning up after shim disconnected" id=f7aef4c721fa6469e4c9d1eb09b49d4e8000f0546d27307d15da40890d8a4c03 namespace=k8s.io Sep 5 23:51:49.415489 containerd[1484]: time="2025-09-05T23:51:49.415472253Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 5 23:51:49.419981 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 5 23:51:49.663478 systemd[1]: Started sshd@9-128.140.56.156:22-216.96.33.226:36563.service - OpenSSH per-connection server daemon (216.96.33.226:36563). Sep 5 23:51:49.810111 sshd[3127]: Connection closed by 216.96.33.226 port 36563 [preauth] Sep 5 23:51:49.812454 systemd[1]: sshd@9-128.140.56.156:22-216.96.33.226:36563.service: Deactivated successfully. Sep 5 23:51:49.836786 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d6a0ef29690e812d4d307a5e93462f623f6b4ee6e059bca0695b0703dbcc9141-rootfs.mount: Deactivated successfully. 
Sep 5 23:51:50.204981 containerd[1484]: time="2025-09-05T23:51:50.204220496Z" level=info msg="CreateContainer within sandbox \"115f43ad54ea1be4448bd3b6241a615debf03a65a23e6d792843f4bb3c813def\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 5 23:51:50.260095 containerd[1484]: time="2025-09-05T23:51:50.259955754Z" level=info msg="CreateContainer within sandbox \"115f43ad54ea1be4448bd3b6241a615debf03a65a23e6d792843f4bb3c813def\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"9673677d76a3bf59076aacfabbc4d15b64c3f7bb8c47a87f7b7d08d04484d79a\"" Sep 5 23:51:50.263145 containerd[1484]: time="2025-09-05T23:51:50.261701669Z" level=info msg="StartContainer for \"9673677d76a3bf59076aacfabbc4d15b64c3f7bb8c47a87f7b7d08d04484d79a\"" Sep 5 23:51:50.301369 systemd[1]: Started cri-containerd-9673677d76a3bf59076aacfabbc4d15b64c3f7bb8c47a87f7b7d08d04484d79a.scope - libcontainer container 9673677d76a3bf59076aacfabbc4d15b64c3f7bb8c47a87f7b7d08d04484d79a. Sep 5 23:51:50.335464 containerd[1484]: time="2025-09-05T23:51:50.335405841Z" level=info msg="StartContainer for \"9673677d76a3bf59076aacfabbc4d15b64c3f7bb8c47a87f7b7d08d04484d79a\" returns successfully" Sep 5 23:51:50.339916 systemd[1]: cri-containerd-9673677d76a3bf59076aacfabbc4d15b64c3f7bb8c47a87f7b7d08d04484d79a.scope: Deactivated successfully. 
Sep 5 23:51:50.365318 containerd[1484]: time="2025-09-05T23:51:50.365251605Z" level=info msg="shim disconnected" id=9673677d76a3bf59076aacfabbc4d15b64c3f7bb8c47a87f7b7d08d04484d79a namespace=k8s.io Sep 5 23:51:50.365318 containerd[1484]: time="2025-09-05T23:51:50.365312925Z" level=warning msg="cleaning up after shim disconnected" id=9673677d76a3bf59076aacfabbc4d15b64c3f7bb8c47a87f7b7d08d04484d79a namespace=k8s.io Sep 5 23:51:50.365572 containerd[1484]: time="2025-09-05T23:51:50.365389485Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 5 23:51:50.839732 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9673677d76a3bf59076aacfabbc4d15b64c3f7bb8c47a87f7b7d08d04484d79a-rootfs.mount: Deactivated successfully. Sep 5 23:51:51.220407 containerd[1484]: time="2025-09-05T23:51:51.220089726Z" level=info msg="CreateContainer within sandbox \"115f43ad54ea1be4448bd3b6241a615debf03a65a23e6d792843f4bb3c813def\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 5 23:51:51.272314 containerd[1484]: time="2025-09-05T23:51:51.272267639Z" level=info msg="CreateContainer within sandbox \"115f43ad54ea1be4448bd3b6241a615debf03a65a23e6d792843f4bb3c813def\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"5b6de473e00aeefcf8e1fbbf3058772e84bd31c60f29915a4c71ef82cd498a73\"" Sep 5 23:51:51.274618 containerd[1484]: time="2025-09-05T23:51:51.273775035Z" level=info msg="StartContainer for \"5b6de473e00aeefcf8e1fbbf3058772e84bd31c60f29915a4c71ef82cd498a73\"" Sep 5 23:51:51.309497 systemd[1]: Started cri-containerd-5b6de473e00aeefcf8e1fbbf3058772e84bd31c60f29915a4c71ef82cd498a73.scope - libcontainer container 5b6de473e00aeefcf8e1fbbf3058772e84bd31c60f29915a4c71ef82cd498a73. Sep 5 23:51:51.336201 systemd[1]: cri-containerd-5b6de473e00aeefcf8e1fbbf3058772e84bd31c60f29915a4c71ef82cd498a73.scope: Deactivated successfully. 
Sep 5 23:51:51.342485 containerd[1484]: time="2025-09-05T23:51:51.341881028Z" level=info msg="StartContainer for \"5b6de473e00aeefcf8e1fbbf3058772e84bd31c60f29915a4c71ef82cd498a73\" returns successfully" Sep 5 23:51:51.372503 containerd[1484]: time="2025-09-05T23:51:51.372257274Z" level=info msg="shim disconnected" id=5b6de473e00aeefcf8e1fbbf3058772e84bd31c60f29915a4c71ef82cd498a73 namespace=k8s.io Sep 5 23:51:51.372503 containerd[1484]: time="2025-09-05T23:51:51.372319994Z" level=warning msg="cleaning up after shim disconnected" id=5b6de473e00aeefcf8e1fbbf3058772e84bd31c60f29915a4c71ef82cd498a73 namespace=k8s.io Sep 5 23:51:51.372503 containerd[1484]: time="2025-09-05T23:51:51.372328953Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 5 23:51:51.840992 systemd[1]: run-containerd-runc-k8s.io-5b6de473e00aeefcf8e1fbbf3058772e84bd31c60f29915a4c71ef82cd498a73-runc.NLu5s2.mount: Deactivated successfully. Sep 5 23:51:51.841355 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5b6de473e00aeefcf8e1fbbf3058772e84bd31c60f29915a4c71ef82cd498a73-rootfs.mount: Deactivated successfully. Sep 5 23:51:51.868468 systemd[1]: Started sshd@10-128.140.56.156:22-103.99.206.83:43152.service - OpenSSH per-connection server daemon (103.99.206.83:43152). Sep 5 23:51:52.214112 containerd[1484]: time="2025-09-05T23:51:52.213647154Z" level=info msg="CreateContainer within sandbox \"115f43ad54ea1be4448bd3b6241a615debf03a65a23e6d792843f4bb3c813def\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 5 23:51:52.217441 sshd[3249]: Connection closed by 103.99.206.83 port 43152 [preauth] Sep 5 23:51:52.219735 systemd[1]: sshd@10-128.140.56.156:22-103.99.206.83:43152.service: Deactivated successfully. 
Sep 5 23:51:52.243426 containerd[1484]: time="2025-09-05T23:51:52.243359644Z" level=info msg="CreateContainer within sandbox \"115f43ad54ea1be4448bd3b6241a615debf03a65a23e6d792843f4bb3c813def\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"92fb97589bfc19350a0b7774798d182b4ec76e40dd2d03f28a98eb0c79f96046\"" Sep 5 23:51:52.245854 containerd[1484]: time="2025-09-05T23:51:52.244757520Z" level=info msg="StartContainer for \"92fb97589bfc19350a0b7774798d182b4ec76e40dd2d03f28a98eb0c79f96046\"" Sep 5 23:51:52.278132 systemd[1]: Started cri-containerd-92fb97589bfc19350a0b7774798d182b4ec76e40dd2d03f28a98eb0c79f96046.scope - libcontainer container 92fb97589bfc19350a0b7774798d182b4ec76e40dd2d03f28a98eb0c79f96046. Sep 5 23:51:52.309158 containerd[1484]: time="2025-09-05T23:51:52.308572250Z" level=info msg="StartContainer for \"92fb97589bfc19350a0b7774798d182b4ec76e40dd2d03f28a98eb0c79f96046\" returns successfully" Sep 5 23:51:52.490009 kubelet[2596]: I0905 23:51:52.489881 2596 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 5 23:51:52.550288 systemd[1]: Created slice kubepods-burstable-poddad5763e_e568_42a6_8651_7be9e4a586da.slice - libcontainer container kubepods-burstable-poddad5763e_e568_42a6_8651_7be9e4a586da.slice. 
Sep 5 23:51:52.558630 kubelet[2596]: I0905 23:51:52.557273 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dad5763e-e568-42a6-8651-7be9e4a586da-config-volume\") pod \"coredns-674b8bbfcf-mlszr\" (UID: \"dad5763e-e568-42a6-8651-7be9e4a586da\") " pod="kube-system/coredns-674b8bbfcf-mlszr" Sep 5 23:51:52.558630 kubelet[2596]: I0905 23:51:52.557317 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j54cs\" (UniqueName: \"kubernetes.io/projected/dad5763e-e568-42a6-8651-7be9e4a586da-kube-api-access-j54cs\") pod \"coredns-674b8bbfcf-mlszr\" (UID: \"dad5763e-e568-42a6-8651-7be9e4a586da\") " pod="kube-system/coredns-674b8bbfcf-mlszr" Sep 5 23:51:52.558112 systemd[1]: Created slice kubepods-burstable-pod0c9e6f2c_f340_46fe_8152_b6ad17a2ce6c.slice - libcontainer container kubepods-burstable-pod0c9e6f2c_f340_46fe_8152_b6ad17a2ce6c.slice. 
Sep 5 23:51:52.659211 kubelet[2596]: I0905 23:51:52.659083 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcssq\" (UniqueName: \"kubernetes.io/projected/0c9e6f2c-f340-46fe-8152-b6ad17a2ce6c-kube-api-access-fcssq\") pod \"coredns-674b8bbfcf-gd47t\" (UID: \"0c9e6f2c-f340-46fe-8152-b6ad17a2ce6c\") " pod="kube-system/coredns-674b8bbfcf-gd47t" Sep 5 23:51:52.659211 kubelet[2596]: I0905 23:51:52.659155 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0c9e6f2c-f340-46fe-8152-b6ad17a2ce6c-config-volume\") pod \"coredns-674b8bbfcf-gd47t\" (UID: \"0c9e6f2c-f340-46fe-8152-b6ad17a2ce6c\") " pod="kube-system/coredns-674b8bbfcf-gd47t" Sep 5 23:51:52.855507 containerd[1484]: time="2025-09-05T23:51:52.855362404Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-mlszr,Uid:dad5763e-e568-42a6-8651-7be9e4a586da,Namespace:kube-system,Attempt:0,}" Sep 5 23:51:52.863240 containerd[1484]: time="2025-09-05T23:51:52.862820746Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-gd47t,Uid:0c9e6f2c-f340-46fe-8152-b6ad17a2ce6c,Namespace:kube-system,Attempt:0,}" Sep 5 23:51:53.239534 kubelet[2596]: I0905 23:51:53.239098 2596 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-zl74v" podStartSLOduration=4.922388764 podStartE2EDuration="16.239074803s" podCreationTimestamp="2025-09-05 23:51:37 +0000 UTC" firstStartedPulling="2025-09-05 23:51:37.502918461 +0000 UTC m=+7.524769702" lastFinishedPulling="2025-09-05 23:51:48.81960442 +0000 UTC m=+18.841455741" observedRunningTime="2025-09-05 23:51:53.236178329 +0000 UTC m=+23.258029610" watchObservedRunningTime="2025-09-05 23:51:53.239074803 +0000 UTC m=+23.260926044" Sep 5 23:51:54.154351 containerd[1484]: time="2025-09-05T23:51:54.153439787Z" level=info msg="ImageCreate event 
name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:51:54.154351 containerd[1484]: time="2025-09-05T23:51:54.154305985Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Sep 5 23:51:54.155386 containerd[1484]: time="2025-09-05T23:51:54.155319383Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:51:54.157146 containerd[1484]: time="2025-09-05T23:51:54.156505660Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 5.335415764s" Sep 5 23:51:54.157146 containerd[1484]: time="2025-09-05T23:51:54.156550540Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Sep 5 23:51:54.164363 containerd[1484]: time="2025-09-05T23:51:54.164194604Z" level=info msg="CreateContainer within sandbox \"cfc20d51423d4043ff71af19f3ec4794ea74c641568b1b108be7d498427247f1\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 5 23:51:54.177870 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2178243774.mount: Deactivated successfully. 
Sep 5 23:51:54.184464 containerd[1484]: time="2025-09-05T23:51:54.182456364Z" level=info msg="CreateContainer within sandbox \"cfc20d51423d4043ff71af19f3ec4794ea74c641568b1b108be7d498427247f1\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"b50fe9e232a2c4952cc0bb7e59355facec5aa9935d869db609eceaa68f64d9fe\"" Sep 5 23:51:54.185150 containerd[1484]: time="2025-09-05T23:51:54.184795439Z" level=info msg="StartContainer for \"b50fe9e232a2c4952cc0bb7e59355facec5aa9935d869db609eceaa68f64d9fe\"" Sep 5 23:51:54.220335 systemd[1]: Started cri-containerd-b50fe9e232a2c4952cc0bb7e59355facec5aa9935d869db609eceaa68f64d9fe.scope - libcontainer container b50fe9e232a2c4952cc0bb7e59355facec5aa9935d869db609eceaa68f64d9fe. Sep 5 23:51:54.245872 containerd[1484]: time="2025-09-05T23:51:54.245829946Z" level=info msg="StartContainer for \"b50fe9e232a2c4952cc0bb7e59355facec5aa9935d869db609eceaa68f64d9fe\" returns successfully" Sep 5 23:51:55.239791 kubelet[2596]: I0905 23:51:55.239686 2596 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-kf2k7" podStartSLOduration=2.031437898 podStartE2EDuration="18.239669801s" podCreationTimestamp="2025-09-05 23:51:37 +0000 UTC" firstStartedPulling="2025-09-05 23:51:37.950210633 +0000 UTC m=+7.972061874" lastFinishedPulling="2025-09-05 23:51:54.158442576 +0000 UTC m=+24.180293777" observedRunningTime="2025-09-05 23:51:55.238530883 +0000 UTC m=+25.260382124" watchObservedRunningTime="2025-09-05 23:51:55.239669801 +0000 UTC m=+25.261521042" Sep 5 23:51:58.359796 systemd-networkd[1375]: cilium_host: Link UP Sep 5 23:51:58.360038 systemd-networkd[1375]: cilium_net: Link UP Sep 5 23:51:58.360197 systemd-networkd[1375]: cilium_net: Gained carrier Sep 5 23:51:58.360324 systemd-networkd[1375]: cilium_host: Gained carrier Sep 5 23:51:58.485091 systemd-networkd[1375]: cilium_vxlan: Link UP Sep 5 23:51:58.485098 systemd-networkd[1375]: cilium_vxlan: Gained carrier Sep 5 
23:51:58.668711 systemd-networkd[1375]: cilium_net: Gained IPv6LL Sep 5 23:51:58.781205 kernel: NET: Registered PF_ALG protocol family Sep 5 23:51:58.780882 systemd-networkd[1375]: cilium_host: Gained IPv6LL Sep 5 23:51:59.540447 systemd-networkd[1375]: lxc_health: Link UP Sep 5 23:51:59.556434 systemd-networkd[1375]: lxc_health: Gained carrier Sep 5 23:51:59.900329 systemd-networkd[1375]: cilium_vxlan: Gained IPv6LL Sep 5 23:51:59.975520 systemd-networkd[1375]: lxc1db3b8c55578: Link UP Sep 5 23:51:59.977190 kernel: eth0: renamed from tmpb6cec Sep 5 23:51:59.983583 systemd-networkd[1375]: lxc6c66d98b680f: Link UP Sep 5 23:51:59.985319 kernel: eth0: renamed from tmp0d4cd Sep 5 23:51:59.988352 systemd-networkd[1375]: lxc6c66d98b680f: Gained carrier Sep 5 23:51:59.990230 systemd-networkd[1375]: lxc1db3b8c55578: Gained carrier Sep 5 23:52:01.244416 systemd-networkd[1375]: lxc_health: Gained IPv6LL Sep 5 23:52:01.436350 systemd-networkd[1375]: lxc6c66d98b680f: Gained IPv6LL Sep 5 23:52:01.756907 systemd-networkd[1375]: lxc1db3b8c55578: Gained IPv6LL Sep 5 23:52:04.243030 containerd[1484]: time="2025-09-05T23:52:04.242588641Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 23:52:04.243030 containerd[1484]: time="2025-09-05T23:52:04.242745841Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 23:52:04.243030 containerd[1484]: time="2025-09-05T23:52:04.242777281Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:52:04.243030 containerd[1484]: time="2025-09-05T23:52:04.242885921Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:52:04.254442 containerd[1484]: time="2025-09-05T23:52:04.253983503Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 23:52:04.254796 containerd[1484]: time="2025-09-05T23:52:04.254090423Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 23:52:04.254796 containerd[1484]: time="2025-09-05T23:52:04.254763062Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:52:04.256306 containerd[1484]: time="2025-09-05T23:52:04.255401021Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:52:04.292350 systemd[1]: Started cri-containerd-b6cec9422e4d22dc46bb5207ca43a29bf6230fbc1f135ddf4f0d0d4fd299194e.scope - libcontainer container b6cec9422e4d22dc46bb5207ca43a29bf6230fbc1f135ddf4f0d0d4fd299194e. Sep 5 23:52:04.297293 systemd[1]: Started cri-containerd-0d4cd1c6260f98bebbe8018a8c31ca1929216f6117c11084ba7fbf1b4eac9a23.scope - libcontainer container 0d4cd1c6260f98bebbe8018a8c31ca1929216f6117c11084ba7fbf1b4eac9a23. 
Sep 5 23:52:04.344090 containerd[1484]: time="2025-09-05T23:52:04.344020361Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-mlszr,Uid:dad5763e-e568-42a6-8651-7be9e4a586da,Namespace:kube-system,Attempt:0,} returns sandbox id \"0d4cd1c6260f98bebbe8018a8c31ca1929216f6117c11084ba7fbf1b4eac9a23\"" Sep 5 23:52:04.352823 containerd[1484]: time="2025-09-05T23:52:04.352670708Z" level=info msg="CreateContainer within sandbox \"0d4cd1c6260f98bebbe8018a8c31ca1929216f6117c11084ba7fbf1b4eac9a23\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 5 23:52:04.383794 containerd[1484]: time="2025-09-05T23:52:04.383651139Z" level=info msg="CreateContainer within sandbox \"0d4cd1c6260f98bebbe8018a8c31ca1929216f6117c11084ba7fbf1b4eac9a23\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a5e530d766c6521fe7dcf6b49e9dc41632fccdf4cdbb8ed7f48a9ac8ea6d6a0a\"" Sep 5 23:52:04.385560 containerd[1484]: time="2025-09-05T23:52:04.384893217Z" level=info msg="StartContainer for \"a5e530d766c6521fe7dcf6b49e9dc41632fccdf4cdbb8ed7f48a9ac8ea6d6a0a\"" Sep 5 23:52:04.391454 containerd[1484]: time="2025-09-05T23:52:04.391403367Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-gd47t,Uid:0c9e6f2c-f340-46fe-8152-b6ad17a2ce6c,Namespace:kube-system,Attempt:0,} returns sandbox id \"b6cec9422e4d22dc46bb5207ca43a29bf6230fbc1f135ddf4f0d0d4fd299194e\"" Sep 5 23:52:04.399369 containerd[1484]: time="2025-09-05T23:52:04.399313954Z" level=info msg="CreateContainer within sandbox \"b6cec9422e4d22dc46bb5207ca43a29bf6230fbc1f135ddf4f0d0d4fd299194e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 5 23:52:04.424144 containerd[1484]: time="2025-09-05T23:52:04.423832556Z" level=info msg="CreateContainer within sandbox \"b6cec9422e4d22dc46bb5207ca43a29bf6230fbc1f135ddf4f0d0d4fd299194e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id 
\"e4b455d903b7b5bf627a4e4955e93ae08d24b08403de3bfbd4656bb620bfb678\"" Sep 5 23:52:04.427492 containerd[1484]: time="2025-09-05T23:52:04.427444590Z" level=info msg="StartContainer for \"e4b455d903b7b5bf627a4e4955e93ae08d24b08403de3bfbd4656bb620bfb678\"" Sep 5 23:52:04.436642 systemd[1]: Started cri-containerd-a5e530d766c6521fe7dcf6b49e9dc41632fccdf4cdbb8ed7f48a9ac8ea6d6a0a.scope - libcontainer container a5e530d766c6521fe7dcf6b49e9dc41632fccdf4cdbb8ed7f48a9ac8ea6d6a0a. Sep 5 23:52:04.474366 systemd[1]: Started cri-containerd-e4b455d903b7b5bf627a4e4955e93ae08d24b08403de3bfbd4656bb620bfb678.scope - libcontainer container e4b455d903b7b5bf627a4e4955e93ae08d24b08403de3bfbd4656bb620bfb678. Sep 5 23:52:04.501579 containerd[1484]: time="2025-09-05T23:52:04.501267874Z" level=info msg="StartContainer for \"a5e530d766c6521fe7dcf6b49e9dc41632fccdf4cdbb8ed7f48a9ac8ea6d6a0a\" returns successfully" Sep 5 23:52:04.522348 containerd[1484]: time="2025-09-05T23:52:04.522278201Z" level=info msg="StartContainer for \"e4b455d903b7b5bf627a4e4955e93ae08d24b08403de3bfbd4656bb620bfb678\" returns successfully" Sep 5 23:52:05.258814 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2085808970.mount: Deactivated successfully. 
Sep 5 23:52:05.280296 kubelet[2596]: I0905 23:52:05.280153 2596 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-mlszr" podStartSLOduration=28.28009762 podStartE2EDuration="28.28009762s" podCreationTimestamp="2025-09-05 23:51:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 23:52:05.27992354 +0000 UTC m=+35.301774821" watchObservedRunningTime="2025-09-05 23:52:05.28009762 +0000 UTC m=+35.301948861" Sep 5 23:52:05.299192 kubelet[2596]: I0905 23:52:05.299100 2596 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-gd47t" podStartSLOduration=28.299086271 podStartE2EDuration="28.299086271s" podCreationTimestamp="2025-09-05 23:51:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 23:52:05.298825431 +0000 UTC m=+35.320676712" watchObservedRunningTime="2025-09-05 23:52:05.299086271 +0000 UTC m=+35.320937472" Sep 5 23:52:32.574816 kernel: hrtimer: interrupt took 2310128 ns Sep 5 23:52:59.211425 systemd[1]: sshd@7-128.140.56.156:22-222.79.105.211:47466.service: Deactivated successfully. Sep 5 23:52:59.682523 systemd[1]: Started sshd@11-128.140.56.156:22-222.79.105.211:37234.service - OpenSSH per-connection server daemon (222.79.105.211:37234). Sep 5 23:53:02.335663 systemd[1]: Started sshd@12-128.140.56.156:22-103.99.206.83:37478.service - OpenSSH per-connection server daemon (103.99.206.83:37478). Sep 5 23:53:02.672702 sshd[4005]: Connection closed by 103.99.206.83 port 37478 [preauth] Sep 5 23:53:02.674268 systemd[1]: sshd@12-128.140.56.156:22-103.99.206.83:37478.service: Deactivated successfully. 
Sep 5 23:53:05.772175 sshd[4003]: Connection closed by authenticating user root 222.79.105.211 port 37234 [preauth] Sep 5 23:53:05.774220 systemd[1]: sshd@11-128.140.56.156:22-222.79.105.211:37234.service: Deactivated successfully. Sep 5 23:53:06.011633 systemd[1]: Started sshd@13-128.140.56.156:22-222.79.105.211:41030.service - OpenSSH per-connection server daemon (222.79.105.211:41030). Sep 5 23:53:06.864888 update_engine[1461]: I20250905 23:53:06.858381 1461 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Sep 5 23:53:06.864888 update_engine[1461]: I20250905 23:53:06.858457 1461 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Sep 5 23:53:06.864888 update_engine[1461]: I20250905 23:53:06.858713 1461 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Sep 5 23:53:06.864888 update_engine[1461]: I20250905 23:53:06.861186 1461 omaha_request_params.cc:62] Current group set to lts Sep 5 23:53:06.864888 update_engine[1461]: I20250905 23:53:06.861318 1461 update_attempter.cc:499] Already updated boot flags. Skipping. Sep 5 23:53:06.864888 update_engine[1461]: I20250905 23:53:06.861330 1461 update_attempter.cc:643] Scheduling an action processor start. 
Sep 5 23:53:06.864888 update_engine[1461]: I20250905 23:53:06.861352 1461 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Sep 5 23:53:06.864888 update_engine[1461]: I20250905 23:53:06.861392 1461 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Sep 5 23:53:06.864888 update_engine[1461]: I20250905 23:53:06.861716 1461 omaha_request_action.cc:271] Posting an Omaha request to disabled Sep 5 23:53:06.864888 update_engine[1461]: I20250905 23:53:06.861737 1461 omaha_request_action.cc:272] Request: Sep 5 23:53:06.864888 update_engine[1461]: Sep 5 23:53:06.864888 update_engine[1461]: Sep 5 23:53:06.864888 update_engine[1461]: Sep 5 23:53:06.864888 update_engine[1461]: Sep 5 23:53:06.864888 update_engine[1461]: Sep 5 23:53:06.864888 update_engine[1461]: Sep 5 23:53:06.864888 update_engine[1461]: Sep 5 23:53:06.864888 update_engine[1461]: Sep 5 23:53:06.864888 update_engine[1461]: I20250905 23:53:06.861747 1461 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 5 23:53:06.864888 update_engine[1461]: I20250905 23:53:06.864094 1461 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 5 23:53:06.865844 locksmithd[1509]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Sep 5 23:53:06.869380 update_engine[1461]: I20250905 23:53:06.869170 1461 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Sep 5 23:53:06.869922 update_engine[1461]: E20250905 23:53:06.869896 1461 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 5 23:53:06.870053 update_engine[1461]: I20250905 23:53:06.870036 1461 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Sep 5 23:53:07.109853 sshd[4013]: Connection closed by authenticating user root 222.79.105.211 port 41030 [preauth] Sep 5 23:53:07.112316 systemd[1]: sshd@13-128.140.56.156:22-222.79.105.211:41030.service: Deactivated successfully. 
Sep 5 23:53:07.316601 systemd[1]: Started sshd@14-128.140.56.156:22-222.79.105.211:41038.service - OpenSSH per-connection server daemon (222.79.105.211:41038). Sep 5 23:53:08.648902 sshd[4018]: Connection closed by authenticating user root 222.79.105.211 port 41038 [preauth] Sep 5 23:53:08.651963 systemd[1]: sshd@14-128.140.56.156:22-222.79.105.211:41038.service: Deactivated successfully. Sep 5 23:53:08.879799 systemd[1]: Started sshd@15-128.140.56.156:22-222.79.105.211:41044.service - OpenSSH per-connection server daemon (222.79.105.211:41044). Sep 5 23:53:09.927590 sshd[4025]: Connection closed by authenticating user root 222.79.105.211 port 41044 [preauth] Sep 5 23:53:09.930155 systemd[1]: sshd@15-128.140.56.156:22-222.79.105.211:41044.service: Deactivated successfully. Sep 5 23:53:10.157514 systemd[1]: Started sshd@16-128.140.56.156:22-222.79.105.211:41052.service - OpenSSH per-connection server daemon (222.79.105.211:41052). Sep 5 23:53:11.516850 sshd[4030]: Connection closed by authenticating user root 222.79.105.211 port 41052 [preauth] Sep 5 23:53:11.519332 systemd[1]: sshd@16-128.140.56.156:22-222.79.105.211:41052.service: Deactivated successfully. Sep 5 23:53:11.751643 systemd[1]: Started sshd@17-128.140.56.156:22-222.79.105.211:41064.service - OpenSSH per-connection server daemon (222.79.105.211:41064). Sep 5 23:53:13.063210 sshd[4035]: Connection closed by authenticating user root 222.79.105.211 port 41064 [preauth] Sep 5 23:53:13.067524 systemd[1]: sshd@17-128.140.56.156:22-222.79.105.211:41064.service: Deactivated successfully. Sep 5 23:53:13.285422 systemd[1]: Started sshd@18-128.140.56.156:22-222.79.105.211:41076.service - OpenSSH per-connection server daemon (222.79.105.211:41076). Sep 5 23:53:14.375167 sshd[4040]: Connection closed by authenticating user root 222.79.105.211 port 41076 [preauth] Sep 5 23:53:14.377387 systemd[1]: sshd@18-128.140.56.156:22-222.79.105.211:41076.service: Deactivated successfully. 
Sep 5 23:53:15.648457 systemd[1]: Started sshd@19-128.140.56.156:22-222.79.105.211:37834.service - OpenSSH per-connection server daemon (222.79.105.211:37834). Sep 5 23:53:16.816948 update_engine[1461]: I20250905 23:53:16.816175 1461 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 5 23:53:16.816948 update_engine[1461]: I20250905 23:53:16.816565 1461 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 5 23:53:16.816948 update_engine[1461]: I20250905 23:53:16.816877 1461 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Sep 5 23:53:16.818430 update_engine[1461]: E20250905 23:53:16.818373 1461 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 5 23:53:16.818727 update_engine[1461]: I20250905 23:53:16.818688 1461 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Sep 5 23:53:17.885732 sshd[4045]: Connection closed by authenticating user root 222.79.105.211 port 37834 [preauth] Sep 5 23:53:17.888708 systemd[1]: sshd@19-128.140.56.156:22-222.79.105.211:37834.service: Deactivated successfully. Sep 5 23:53:18.566673 systemd[1]: Started sshd@20-128.140.56.156:22-222.79.105.211:37846.service - OpenSSH per-connection server daemon (222.79.105.211:37846). Sep 5 23:53:19.617153 sshd[4050]: Connection closed by authenticating user root 222.79.105.211 port 37846 [preauth] Sep 5 23:53:19.619478 systemd[1]: sshd@20-128.140.56.156:22-222.79.105.211:37846.service: Deactivated successfully. Sep 5 23:53:19.806524 systemd[1]: Started sshd@21-128.140.56.156:22-222.79.105.211:37862.service - OpenSSH per-connection server daemon (222.79.105.211:37862). Sep 5 23:53:21.919924 sshd[4055]: Connection closed by authenticating user root 222.79.105.211 port 37862 [preauth] Sep 5 23:53:21.923649 systemd[1]: sshd@21-128.140.56.156:22-222.79.105.211:37862.service: Deactivated successfully. 
Sep 5 23:53:22.133712 systemd[1]: Started sshd@22-128.140.56.156:22-222.79.105.211:37864.service - OpenSSH per-connection server daemon (222.79.105.211:37864). Sep 5 23:53:25.104997 sshd[4060]: Connection closed by authenticating user root 222.79.105.211 port 37864 [preauth] Sep 5 23:53:25.107632 systemd[1]: sshd@22-128.140.56.156:22-222.79.105.211:37864.service: Deactivated successfully. Sep 5 23:53:26.318453 systemd[1]: Started sshd@23-128.140.56.156:22-222.79.105.211:48906.service - OpenSSH per-connection server daemon (222.79.105.211:48906). Sep 5 23:53:26.813032 update_engine[1461]: I20250905 23:53:26.812237 1461 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 5 23:53:26.813032 update_engine[1461]: I20250905 23:53:26.812579 1461 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 5 23:53:26.813032 update_engine[1461]: I20250905 23:53:26.812898 1461 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Sep 5 23:53:26.814813 update_engine[1461]: E20250905 23:53:26.814471 1461 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 5 23:53:26.814813 update_engine[1461]: I20250905 23:53:26.814579 1461 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Sep 5 23:53:27.865691 sshd[4065]: Connection closed by authenticating user root 222.79.105.211 port 48906 [preauth] Sep 5 23:53:27.869465 systemd[1]: sshd@23-128.140.56.156:22-222.79.105.211:48906.service: Deactivated successfully. Sep 5 23:53:28.099961 systemd[1]: Started sshd@24-128.140.56.156:22-222.79.105.211:48916.service - OpenSSH per-connection server daemon (222.79.105.211:48916). Sep 5 23:53:29.156980 sshd[4070]: Connection closed by authenticating user root 222.79.105.211 port 48916 [preauth] Sep 5 23:53:29.159766 systemd[1]: sshd@24-128.140.56.156:22-222.79.105.211:48916.service: Deactivated successfully. 
Sep 5 23:53:29.407614 systemd[1]: Started sshd@25-128.140.56.156:22-222.79.105.211:48932.service - OpenSSH per-connection server daemon (222.79.105.211:48932). Sep 5 23:53:32.322190 sshd[4075]: Connection closed by authenticating user root 222.79.105.211 port 48932 [preauth] Sep 5 23:53:32.326008 systemd[1]: sshd@25-128.140.56.156:22-222.79.105.211:48932.service: Deactivated successfully. Sep 5 23:53:32.555576 systemd[1]: Started sshd@26-128.140.56.156:22-222.79.105.211:48934.service - OpenSSH per-connection server daemon (222.79.105.211:48934). Sep 5 23:53:33.672757 sshd[4082]: Connection closed by authenticating user root 222.79.105.211 port 48934 [preauth] Sep 5 23:53:33.675959 systemd[1]: sshd@26-128.140.56.156:22-222.79.105.211:48934.service: Deactivated successfully. Sep 5 23:53:33.910787 systemd[1]: Started sshd@27-128.140.56.156:22-222.79.105.211:43684.service - OpenSSH per-connection server daemon (222.79.105.211:43684). Sep 5 23:53:36.814042 update_engine[1461]: I20250905 23:53:36.813202 1461 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 5 23:53:36.814042 update_engine[1461]: I20250905 23:53:36.813589 1461 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 5 23:53:36.814042 update_engine[1461]: I20250905 23:53:36.813944 1461 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Sep 5 23:53:36.815572 update_engine[1461]: E20250905 23:53:36.815518 1461 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 5 23:53:36.817013 update_engine[1461]: I20250905 23:53:36.815817 1461 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Sep 5 23:53:36.817013 update_engine[1461]: I20250905 23:53:36.815851 1461 omaha_request_action.cc:617] Omaha request response: Sep 5 23:53:36.817013 update_engine[1461]: E20250905 23:53:36.815994 1461 omaha_request_action.cc:636] Omaha request network transfer failed. 
Sep 5 23:53:36.817013 update_engine[1461]: I20250905 23:53:36.816026 1461 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Sep 5 23:53:36.817013 update_engine[1461]: I20250905 23:53:36.816038 1461 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Sep 5 23:53:36.817013 update_engine[1461]: I20250905 23:53:36.816049 1461 update_attempter.cc:306] Processing Done. Sep 5 23:53:36.817013 update_engine[1461]: E20250905 23:53:36.816073 1461 update_attempter.cc:619] Update failed. Sep 5 23:53:36.817013 update_engine[1461]: I20250905 23:53:36.816087 1461 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Sep 5 23:53:36.817013 update_engine[1461]: I20250905 23:53:36.816098 1461 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Sep 5 23:53:36.817013 update_engine[1461]: I20250905 23:53:36.816111 1461 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
Sep 5 23:53:36.817013 update_engine[1461]: I20250905 23:53:36.816286 1461 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Sep 5 23:53:36.817013 update_engine[1461]: I20250905 23:53:36.816329 1461 omaha_request_action.cc:271] Posting an Omaha request to disabled Sep 5 23:53:36.817013 update_engine[1461]: I20250905 23:53:36.816347 1461 omaha_request_action.cc:272] Request: Sep 5 23:53:36.817013 update_engine[1461]: Sep 5 23:53:36.817013 update_engine[1461]: Sep 5 23:53:36.817013 update_engine[1461]: Sep 5 23:53:36.818967 update_engine[1461]: Sep 5 23:53:36.818967 update_engine[1461]: Sep 5 23:53:36.818967 update_engine[1461]: Sep 5 23:53:36.818967 update_engine[1461]: I20250905 23:53:36.816360 1461 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 5 23:53:36.818967 update_engine[1461]: I20250905 23:53:36.816638 1461 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 5 23:53:36.818967 update_engine[1461]: I20250905 23:53:36.816949 1461 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Sep 5 23:53:36.818967 update_engine[1461]: E20250905 23:53:36.818709 1461 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 5 23:53:36.818967 update_engine[1461]: I20250905 23:53:36.818817 1461 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Sep 5 23:53:36.818967 update_engine[1461]: I20250905 23:53:36.818836 1461 omaha_request_action.cc:617] Omaha request response: Sep 5 23:53:36.818967 update_engine[1461]: I20250905 23:53:36.818852 1461 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Sep 5 23:53:36.818967 update_engine[1461]: I20250905 23:53:36.818867 1461 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Sep 5 23:53:36.818967 update_engine[1461]: I20250905 23:53:36.818877 1461 update_attempter.cc:306] Processing Done. 
Sep 5 23:53:36.818967 update_engine[1461]: I20250905 23:53:36.818889 1461 update_attempter.cc:310] Error event sent. Sep 5 23:53:36.818967 update_engine[1461]: I20250905 23:53:36.818908 1461 update_check_scheduler.cc:74] Next update check in 40m15s Sep 5 23:53:36.819401 locksmithd[1509]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Sep 5 23:53:36.819811 locksmithd[1509]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Sep 5 23:53:39.868864 sshd[4087]: Connection closed by authenticating user root 222.79.105.211 port 43684 [preauth] Sep 5 23:53:39.872065 systemd[1]: sshd@27-128.140.56.156:22-222.79.105.211:43684.service: Deactivated successfully. Sep 5 23:53:40.128463 systemd[1]: Started sshd@28-128.140.56.156:22-222.79.105.211:43688.service - OpenSSH per-connection server daemon (222.79.105.211:43688). Sep 5 23:53:42.807422 sshd[4094]: Connection closed by authenticating user root 222.79.105.211 port 43688 [preauth] Sep 5 23:53:42.809682 systemd[1]: sshd@28-128.140.56.156:22-222.79.105.211:43688.service: Deactivated successfully. Sep 5 23:53:43.036526 systemd[1]: Started sshd@29-128.140.56.156:22-222.79.105.211:43700.service - OpenSSH per-connection server daemon (222.79.105.211:43700). Sep 5 23:53:44.747240 sshd[4099]: Connection closed by authenticating user root 222.79.105.211 port 43700 [preauth] Sep 5 23:53:44.751309 systemd[1]: sshd@29-128.140.56.156:22-222.79.105.211:43700.service: Deactivated successfully. Sep 5 23:53:44.946257 systemd[1]: Started sshd@30-128.140.56.156:22-222.79.105.211:37636.service - OpenSSH per-connection server daemon (222.79.105.211:37636). Sep 5 23:53:46.160192 sshd[4104]: Connection closed by authenticating user root 222.79.105.211 port 37636 [preauth] Sep 5 23:53:46.164450 systemd[1]: sshd@30-128.140.56.156:22-222.79.105.211:37636.service: Deactivated successfully. 
Sep 5 23:53:46.415325 systemd[1]: Started sshd@31-128.140.56.156:22-222.79.105.211:37646.service - OpenSSH per-connection server daemon (222.79.105.211:37646). Sep 5 23:53:47.818050 sshd[4109]: Connection closed by authenticating user root 222.79.105.211 port 37646 [preauth] Sep 5 23:53:47.820516 systemd[1]: sshd@31-128.140.56.156:22-222.79.105.211:37646.service: Deactivated successfully. Sep 5 23:53:48.049283 systemd[1]: Started sshd@32-128.140.56.156:22-222.79.105.211:37660.service - OpenSSH per-connection server daemon (222.79.105.211:37660). Sep 5 23:53:49.216181 sshd[4114]: Connection closed by authenticating user root 222.79.105.211 port 37660 [preauth] Sep 5 23:53:49.219387 systemd[1]: sshd@32-128.140.56.156:22-222.79.105.211:37660.service: Deactivated successfully. Sep 5 23:53:49.455762 systemd[1]: Started sshd@33-128.140.56.156:22-222.79.105.211:37664.service - OpenSSH per-connection server daemon (222.79.105.211:37664). Sep 5 23:53:52.527356 sshd[4119]: Connection closed by authenticating user root 222.79.105.211 port 37664 [preauth] Sep 5 23:53:52.528506 systemd[1]: sshd@33-128.140.56.156:22-222.79.105.211:37664.service: Deactivated successfully. Sep 5 23:53:55.389477 systemd[1]: Started sshd@34-128.140.56.156:22-139.178.68.195:39284.service - OpenSSH per-connection server daemon (139.178.68.195:39284). Sep 5 23:53:56.441529 sshd[4124]: Accepted publickey for core from 139.178.68.195 port 39284 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 5 23:53:56.443883 sshd[4124]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:53:56.449943 systemd-logind[1460]: New session 8 of user core. Sep 5 23:53:56.457305 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 5 23:53:57.262951 sshd[4124]: pam_unix(sshd:session): session closed for user core Sep 5 23:53:57.271813 systemd[1]: sshd@34-128.140.56.156:22-139.178.68.195:39284.service: Deactivated successfully. 
Sep 5 23:53:57.274877 systemd[1]: session-8.scope: Deactivated successfully. Sep 5 23:53:57.277014 systemd-logind[1460]: Session 8 logged out. Waiting for processes to exit. Sep 5 23:53:57.278948 systemd-logind[1460]: Removed session 8. Sep 5 23:54:02.432432 systemd[1]: Started sshd@35-128.140.56.156:22-139.178.68.195:46158.service - OpenSSH per-connection server daemon (139.178.68.195:46158). Sep 5 23:54:03.428200 sshd[4138]: Accepted publickey for core from 139.178.68.195 port 46158 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 5 23:54:03.430228 sshd[4138]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:54:03.436918 systemd-logind[1460]: New session 9 of user core. Sep 5 23:54:03.442319 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 5 23:54:04.199849 sshd[4138]: pam_unix(sshd:session): session closed for user core Sep 5 23:54:04.204708 systemd-logind[1460]: Session 9 logged out. Waiting for processes to exit. Sep 5 23:54:04.205478 systemd[1]: sshd@35-128.140.56.156:22-139.178.68.195:46158.service: Deactivated successfully. Sep 5 23:54:04.209158 systemd[1]: session-9.scope: Deactivated successfully. Sep 5 23:54:04.210771 systemd-logind[1460]: Removed session 9. Sep 5 23:54:09.409240 systemd[1]: Started sshd@36-128.140.56.156:22-139.178.68.195:46164.service - OpenSSH per-connection server daemon (139.178.68.195:46164). Sep 5 23:54:10.467144 sshd[4154]: Accepted publickey for core from 139.178.68.195 port 46164 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 5 23:54:10.470321 sshd[4154]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:54:10.483043 systemd-logind[1460]: New session 10 of user core. Sep 5 23:54:10.490483 systemd[1]: Started session-10.scope - Session 10 of User core. 
Sep 5 23:54:11.290544 sshd[4154]: pam_unix(sshd:session): session closed for user core Sep 5 23:54:11.296555 systemd[1]: sshd@36-128.140.56.156:22-139.178.68.195:46164.service: Deactivated successfully. Sep 5 23:54:11.299318 systemd[1]: session-10.scope: Deactivated successfully. Sep 5 23:54:11.300369 systemd-logind[1460]: Session 10 logged out. Waiting for processes to exit. Sep 5 23:54:11.302075 systemd-logind[1460]: Removed session 10. Sep 5 23:54:12.604568 systemd[1]: Started sshd@37-128.140.56.156:22-103.99.206.83:60022.service - OpenSSH per-connection server daemon (103.99.206.83:60022). Sep 5 23:54:12.957864 sshd[4168]: Connection closed by 103.99.206.83 port 60022 [preauth] Sep 5 23:54:12.960070 systemd[1]: sshd@37-128.140.56.156:22-103.99.206.83:60022.service: Deactivated successfully. Sep 5 23:54:16.483200 systemd[1]: Started sshd@38-128.140.56.156:22-139.178.68.195:35646.service - OpenSSH per-connection server daemon (139.178.68.195:35646). Sep 5 23:54:17.533211 sshd[4173]: Accepted publickey for core from 139.178.68.195 port 35646 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 5 23:54:17.535496 sshd[4173]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:54:17.540230 systemd-logind[1460]: New session 11 of user core. Sep 5 23:54:17.551897 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 5 23:54:18.334150 sshd[4173]: pam_unix(sshd:session): session closed for user core Sep 5 23:54:18.338075 systemd[1]: sshd@38-128.140.56.156:22-139.178.68.195:35646.service: Deactivated successfully. Sep 5 23:54:18.340110 systemd[1]: session-11.scope: Deactivated successfully. Sep 5 23:54:18.341621 systemd-logind[1460]: Session 11 logged out. Waiting for processes to exit. Sep 5 23:54:18.342947 systemd-logind[1460]: Removed session 11. Sep 5 23:54:23.507619 systemd[1]: Started sshd@39-128.140.56.156:22-139.178.68.195:40120.service - OpenSSH per-connection server daemon (139.178.68.195:40120). 
Sep 5 23:54:24.573817 sshd[4187]: Accepted publickey for core from 139.178.68.195 port 40120 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 5 23:54:24.575601 sshd[4187]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:54:24.580374 systemd-logind[1460]: New session 12 of user core. Sep 5 23:54:24.588349 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 5 23:54:25.376039 sshd[4187]: pam_unix(sshd:session): session closed for user core Sep 5 23:54:25.380395 systemd-logind[1460]: Session 12 logged out. Waiting for processes to exit. Sep 5 23:54:25.380916 systemd[1]: sshd@39-128.140.56.156:22-139.178.68.195:40120.service: Deactivated successfully. Sep 5 23:54:25.383744 systemd[1]: session-12.scope: Deactivated successfully. Sep 5 23:54:25.387032 systemd-logind[1460]: Removed session 12. Sep 5 23:54:25.556672 systemd[1]: Started sshd@40-128.140.56.156:22-139.178.68.195:40132.service - OpenSSH per-connection server daemon (139.178.68.195:40132). Sep 5 23:54:26.555345 sshd[4201]: Accepted publickey for core from 139.178.68.195 port 40132 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 5 23:54:26.557909 sshd[4201]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:54:26.567940 systemd-logind[1460]: New session 13 of user core. Sep 5 23:54:26.576827 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 5 23:54:27.404483 sshd[4201]: pam_unix(sshd:session): session closed for user core Sep 5 23:54:27.412973 systemd[1]: sshd@40-128.140.56.156:22-139.178.68.195:40132.service: Deactivated successfully. Sep 5 23:54:27.418039 systemd[1]: session-13.scope: Deactivated successfully. Sep 5 23:54:27.420088 systemd-logind[1460]: Session 13 logged out. Waiting for processes to exit. Sep 5 23:54:27.422323 systemd-logind[1460]: Removed session 13. 
Sep 5 23:54:27.586888 systemd[1]: Started sshd@41-128.140.56.156:22-139.178.68.195:40144.service - OpenSSH per-connection server daemon (139.178.68.195:40144). Sep 5 23:54:28.589076 sshd[4212]: Accepted publickey for core from 139.178.68.195 port 40144 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 5 23:54:28.592380 sshd[4212]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:54:28.603069 systemd-logind[1460]: New session 14 of user core. Sep 5 23:54:28.608368 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 5 23:54:29.367622 sshd[4212]: pam_unix(sshd:session): session closed for user core Sep 5 23:54:29.373037 systemd[1]: sshd@41-128.140.56.156:22-139.178.68.195:40144.service: Deactivated successfully. Sep 5 23:54:29.378186 systemd[1]: session-14.scope: Deactivated successfully. Sep 5 23:54:29.383344 systemd-logind[1460]: Session 14 logged out. Waiting for processes to exit. Sep 5 23:54:29.385510 systemd-logind[1460]: Removed session 14. Sep 5 23:54:34.551448 systemd[1]: Started sshd@42-128.140.56.156:22-139.178.68.195:57388.service - OpenSSH per-connection server daemon (139.178.68.195:57388). Sep 5 23:54:35.541367 sshd[4227]: Accepted publickey for core from 139.178.68.195 port 57388 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 5 23:54:35.543427 sshd[4227]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:54:35.548204 systemd-logind[1460]: New session 15 of user core. Sep 5 23:54:35.553368 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 5 23:54:36.307561 sshd[4227]: pam_unix(sshd:session): session closed for user core Sep 5 23:54:36.312732 systemd[1]: sshd@42-128.140.56.156:22-139.178.68.195:57388.service: Deactivated successfully. Sep 5 23:54:36.315246 systemd[1]: session-15.scope: Deactivated successfully. Sep 5 23:54:36.316383 systemd-logind[1460]: Session 15 logged out. Waiting for processes to exit. 
Sep 5 23:54:36.317909 systemd-logind[1460]: Removed session 15. Sep 5 23:54:36.483657 systemd[1]: Started sshd@43-128.140.56.156:22-139.178.68.195:57392.service - OpenSSH per-connection server daemon (139.178.68.195:57392). Sep 5 23:54:37.474324 sshd[4240]: Accepted publickey for core from 139.178.68.195 port 57392 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 5 23:54:37.476258 sshd[4240]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:54:37.481049 systemd-logind[1460]: New session 16 of user core. Sep 5 23:54:37.487327 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 5 23:54:38.291727 sshd[4240]: pam_unix(sshd:session): session closed for user core Sep 5 23:54:38.297536 systemd[1]: sshd@43-128.140.56.156:22-139.178.68.195:57392.service: Deactivated successfully. Sep 5 23:54:38.300051 systemd[1]: session-16.scope: Deactivated successfully. Sep 5 23:54:38.302468 systemd-logind[1460]: Session 16 logged out. Waiting for processes to exit. Sep 5 23:54:38.303711 systemd-logind[1460]: Removed session 16. Sep 5 23:54:38.495242 systemd[1]: Started sshd@44-128.140.56.156:22-139.178.68.195:57404.service - OpenSSH per-connection server daemon (139.178.68.195:57404). Sep 5 23:54:39.546756 sshd[4254]: Accepted publickey for core from 139.178.68.195 port 57404 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 5 23:54:39.549098 sshd[4254]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:54:39.554626 systemd-logind[1460]: New session 17 of user core. Sep 5 23:54:39.568503 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 5 23:54:40.924193 sshd[4254]: pam_unix(sshd:session): session closed for user core Sep 5 23:54:40.929701 systemd-logind[1460]: Session 17 logged out. Waiting for processes to exit. Sep 5 23:54:40.930088 systemd[1]: sshd@44-128.140.56.156:22-139.178.68.195:57404.service: Deactivated successfully. 
Sep 5 23:54:40.932769 systemd[1]: session-17.scope: Deactivated successfully. Sep 5 23:54:40.935212 systemd-logind[1460]: Removed session 17. Sep 5 23:54:41.095682 systemd[1]: Started sshd@45-128.140.56.156:22-139.178.68.195:48758.service - OpenSSH per-connection server daemon (139.178.68.195:48758). Sep 5 23:54:42.094654 sshd[4272]: Accepted publickey for core from 139.178.68.195 port 48758 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 5 23:54:42.096775 sshd[4272]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:54:42.105774 systemd-logind[1460]: New session 18 of user core. Sep 5 23:54:42.112495 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 5 23:54:43.008210 sshd[4272]: pam_unix(sshd:session): session closed for user core Sep 5 23:54:43.012636 systemd-logind[1460]: Session 18 logged out. Waiting for processes to exit. Sep 5 23:54:43.013353 systemd[1]: sshd@45-128.140.56.156:22-139.178.68.195:48758.service: Deactivated successfully. Sep 5 23:54:43.015824 systemd[1]: session-18.scope: Deactivated successfully. Sep 5 23:54:43.017575 systemd-logind[1460]: Removed session 18. Sep 5 23:54:43.191716 systemd[1]: Started sshd@46-128.140.56.156:22-139.178.68.195:48766.service - OpenSSH per-connection server daemon (139.178.68.195:48766). Sep 5 23:54:44.188810 sshd[4283]: Accepted publickey for core from 139.178.68.195 port 48766 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 5 23:54:44.191757 sshd[4283]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:54:44.200177 systemd-logind[1460]: New session 19 of user core. Sep 5 23:54:44.206516 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 5 23:54:44.945342 sshd[4283]: pam_unix(sshd:session): session closed for user core Sep 5 23:54:44.951772 systemd[1]: sshd@46-128.140.56.156:22-139.178.68.195:48766.service: Deactivated successfully. 
Sep 5 23:54:44.954797 systemd[1]: session-19.scope: Deactivated successfully. Sep 5 23:54:44.956828 systemd-logind[1460]: Session 19 logged out. Waiting for processes to exit. Sep 5 23:54:44.957837 systemd-logind[1460]: Removed session 19. Sep 5 23:54:50.127923 systemd[1]: Started sshd@47-128.140.56.156:22-139.178.68.195:34652.service - OpenSSH per-connection server daemon (139.178.68.195:34652). Sep 5 23:54:51.124387 sshd[4297]: Accepted publickey for core from 139.178.68.195 port 34652 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 5 23:54:51.126881 sshd[4297]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:54:51.132545 systemd-logind[1460]: New session 20 of user core. Sep 5 23:54:51.139446 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 5 23:54:51.899356 sshd[4297]: pam_unix(sshd:session): session closed for user core Sep 5 23:54:51.903945 systemd[1]: sshd@47-128.140.56.156:22-139.178.68.195:34652.service: Deactivated successfully. Sep 5 23:54:51.907670 systemd[1]: session-20.scope: Deactivated successfully. Sep 5 23:54:51.910195 systemd-logind[1460]: Session 20 logged out. Waiting for processes to exit. Sep 5 23:54:51.911641 systemd-logind[1460]: Removed session 20. Sep 5 23:54:57.085774 systemd[1]: Started sshd@48-128.140.56.156:22-139.178.68.195:34660.service - OpenSSH per-connection server daemon (139.178.68.195:34660). Sep 5 23:54:58.089304 sshd[4310]: Accepted publickey for core from 139.178.68.195 port 34660 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 5 23:54:58.093089 sshd[4310]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:54:58.113569 systemd-logind[1460]: New session 21 of user core. Sep 5 23:54:58.122397 systemd[1]: Started session-21.scope - Session 21 of User core. 
Sep 5 23:54:58.846513 sshd[4310]: pam_unix(sshd:session): session closed for user core Sep 5 23:54:58.850462 systemd[1]: sshd@48-128.140.56.156:22-139.178.68.195:34660.service: Deactivated successfully. Sep 5 23:54:58.852816 systemd[1]: session-21.scope: Deactivated successfully. Sep 5 23:54:58.855006 systemd-logind[1460]: Session 21 logged out. Waiting for processes to exit. Sep 5 23:54:58.856280 systemd-logind[1460]: Removed session 21. Sep 5 23:54:59.022535 systemd[1]: Started sshd@49-128.140.56.156:22-139.178.68.195:34666.service - OpenSSH per-connection server daemon (139.178.68.195:34666). Sep 5 23:55:00.013394 sshd[4323]: Accepted publickey for core from 139.178.68.195 port 34666 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 5 23:55:00.015761 sshd[4323]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:55:00.022998 systemd-logind[1460]: New session 22 of user core. Sep 5 23:55:00.029199 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 5 23:55:02.754895 containerd[1484]: time="2025-09-05T23:55:02.754826282Z" level=info msg="StopContainer for \"b50fe9e232a2c4952cc0bb7e59355facec5aa9935d869db609eceaa68f64d9fe\" with timeout 30 (s)" Sep 5 23:55:02.757079 containerd[1484]: time="2025-09-05T23:55:02.756988343Z" level=info msg="Stop container \"b50fe9e232a2c4952cc0bb7e59355facec5aa9935d869db609eceaa68f64d9fe\" with signal terminated" Sep 5 23:55:02.776918 systemd[1]: run-containerd-runc-k8s.io-92fb97589bfc19350a0b7774798d182b4ec76e40dd2d03f28a98eb0c79f96046-runc.ZdXVUE.mount: Deactivated successfully. 
Sep 5 23:55:02.797087 containerd[1484]: time="2025-09-05T23:55:02.796854145Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 5 23:55:02.801050 systemd[1]: cri-containerd-b50fe9e232a2c4952cc0bb7e59355facec5aa9935d869db609eceaa68f64d9fe.scope: Deactivated successfully. Sep 5 23:55:02.813148 containerd[1484]: time="2025-09-05T23:55:02.813048960Z" level=info msg="StopContainer for \"92fb97589bfc19350a0b7774798d182b4ec76e40dd2d03f28a98eb0c79f96046\" with timeout 2 (s)" Sep 5 23:55:02.814358 containerd[1484]: time="2025-09-05T23:55:02.814257314Z" level=info msg="Stop container \"92fb97589bfc19350a0b7774798d182b4ec76e40dd2d03f28a98eb0c79f96046\" with signal terminated" Sep 5 23:55:02.824327 systemd-networkd[1375]: lxc_health: Link DOWN Sep 5 23:55:02.824334 systemd-networkd[1375]: lxc_health: Lost carrier Sep 5 23:55:02.844527 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b50fe9e232a2c4952cc0bb7e59355facec5aa9935d869db609eceaa68f64d9fe-rootfs.mount: Deactivated successfully. Sep 5 23:55:02.845379 systemd[1]: cri-containerd-92fb97589bfc19350a0b7774798d182b4ec76e40dd2d03f28a98eb0c79f96046.scope: Deactivated successfully. Sep 5 23:55:02.846298 systemd[1]: cri-containerd-92fb97589bfc19350a0b7774798d182b4ec76e40dd2d03f28a98eb0c79f96046.scope: Consumed 7.736s CPU time. 
Sep 5 23:55:02.857725 containerd[1484]: time="2025-09-05T23:55:02.857622975Z" level=info msg="shim disconnected" id=b50fe9e232a2c4952cc0bb7e59355facec5aa9935d869db609eceaa68f64d9fe namespace=k8s.io Sep 5 23:55:02.857914 containerd[1484]: time="2025-09-05T23:55:02.857734658Z" level=warning msg="cleaning up after shim disconnected" id=b50fe9e232a2c4952cc0bb7e59355facec5aa9935d869db609eceaa68f64d9fe namespace=k8s.io Sep 5 23:55:02.857914 containerd[1484]: time="2025-09-05T23:55:02.857746218Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 5 23:55:02.868699 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-92fb97589bfc19350a0b7774798d182b4ec76e40dd2d03f28a98eb0c79f96046-rootfs.mount: Deactivated successfully. Sep 5 23:55:02.876503 containerd[1484]: time="2025-09-05T23:55:02.876384823Z" level=info msg="shim disconnected" id=92fb97589bfc19350a0b7774798d182b4ec76e40dd2d03f28a98eb0c79f96046 namespace=k8s.io Sep 5 23:55:02.876824 containerd[1484]: time="2025-09-05T23:55:02.876442344Z" level=warning msg="cleaning up after shim disconnected" id=92fb97589bfc19350a0b7774798d182b4ec76e40dd2d03f28a98eb0c79f96046 namespace=k8s.io Sep 5 23:55:02.876824 containerd[1484]: time="2025-09-05T23:55:02.876664951Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 5 23:55:02.882856 containerd[1484]: time="2025-09-05T23:55:02.882642319Z" level=info msg="StopContainer for \"b50fe9e232a2c4952cc0bb7e59355facec5aa9935d869db609eceaa68f64d9fe\" returns successfully" Sep 5 23:55:02.886604 containerd[1484]: time="2025-09-05T23:55:02.885929171Z" level=info msg="StopPodSandbox for \"cfc20d51423d4043ff71af19f3ec4794ea74c641568b1b108be7d498427247f1\"" Sep 5 23:55:02.887162 containerd[1484]: time="2025-09-05T23:55:02.886864118Z" level=info msg="Container to stop \"b50fe9e232a2c4952cc0bb7e59355facec5aa9935d869db609eceaa68f64d9fe\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 5 23:55:02.891045 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-cfc20d51423d4043ff71af19f3ec4794ea74c641568b1b108be7d498427247f1-shm.mount: Deactivated successfully. Sep 5 23:55:02.904982 containerd[1484]: time="2025-09-05T23:55:02.904838424Z" level=info msg="StopContainer for \"92fb97589bfc19350a0b7774798d182b4ec76e40dd2d03f28a98eb0c79f96046\" returns successfully" Sep 5 23:55:02.905374 systemd[1]: cri-containerd-cfc20d51423d4043ff71af19f3ec4794ea74c641568b1b108be7d498427247f1.scope: Deactivated successfully. Sep 5 23:55:02.907299 containerd[1484]: time="2025-09-05T23:55:02.907263812Z" level=info msg="StopPodSandbox for \"115f43ad54ea1be4448bd3b6241a615debf03a65a23e6d792843f4bb3c813def\"" Sep 5 23:55:02.907502 containerd[1484]: time="2025-09-05T23:55:02.907390215Z" level=info msg="Container to stop \"9673677d76a3bf59076aacfabbc4d15b64c3f7bb8c47a87f7b7d08d04484d79a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 5 23:55:02.907502 containerd[1484]: time="2025-09-05T23:55:02.907406856Z" level=info msg="Container to stop \"92fb97589bfc19350a0b7774798d182b4ec76e40dd2d03f28a98eb0c79f96046\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 5 23:55:02.907502 containerd[1484]: time="2025-09-05T23:55:02.907418096Z" level=info msg="Container to stop \"d6a0ef29690e812d4d307a5e93462f623f6b4ee6e059bca0695b0703dbcc9141\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 5 23:55:02.907502 containerd[1484]: time="2025-09-05T23:55:02.907432057Z" level=info msg="Container to stop \"f7aef4c721fa6469e4c9d1eb09b49d4e8000f0546d27307d15da40890d8a4c03\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 5 23:55:02.907502 containerd[1484]: time="2025-09-05T23:55:02.907441817Z" level=info msg="Container to stop \"5b6de473e00aeefcf8e1fbbf3058772e84bd31c60f29915a4c71ef82cd498a73\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 5 23:55:02.918616 systemd[1]: cri-containerd-115f43ad54ea1be4448bd3b6241a615debf03a65a23e6d792843f4bb3c813def.scope: Deactivated successfully. Sep 5 23:55:02.938345 containerd[1484]: time="2025-09-05T23:55:02.938107920Z" level=info msg="shim disconnected" id=cfc20d51423d4043ff71af19f3ec4794ea74c641568b1b108be7d498427247f1 namespace=k8s.io Sep 5 23:55:02.938345 containerd[1484]: time="2025-09-05T23:55:02.938287925Z" level=warning msg="cleaning up after shim disconnected" id=cfc20d51423d4043ff71af19f3ec4794ea74c641568b1b108be7d498427247f1 namespace=k8s.io Sep 5 23:55:02.938345 containerd[1484]: time="2025-09-05T23:55:02.938297925Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 5 23:55:02.943923 containerd[1484]: time="2025-09-05T23:55:02.943829721Z" level=info msg="shim disconnected" id=115f43ad54ea1be4448bd3b6241a615debf03a65a23e6d792843f4bb3c813def namespace=k8s.io Sep 5 23:55:02.944253 containerd[1484]: time="2025-09-05T23:55:02.944230932Z" level=warning msg="cleaning up after shim disconnected" id=115f43ad54ea1be4448bd3b6241a615debf03a65a23e6d792843f4bb3c813def namespace=k8s.io Sep 5 23:55:02.946202 containerd[1484]: time="2025-09-05T23:55:02.946161346Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 5 23:55:02.958596 containerd[1484]: time="2025-09-05T23:55:02.958545815Z" level=info msg="TearDown network for sandbox \"cfc20d51423d4043ff71af19f3ec4794ea74c641568b1b108be7d498427247f1\" successfully" Sep 5 23:55:02.958596 containerd[1484]: time="2025-09-05T23:55:02.958584296Z" level=info msg="StopPodSandbox for \"cfc20d51423d4043ff71af19f3ec4794ea74c641568b1b108be7d498427247f1\" returns successfully" Sep 5 23:55:02.966851 containerd[1484]: time="2025-09-05T23:55:02.966812928Z" level=info msg="TearDown network for sandbox \"115f43ad54ea1be4448bd3b6241a615debf03a65a23e6d792843f4bb3c813def\" successfully"
Sep 5 23:55:02.967542 containerd[1484]: time="2025-09-05T23:55:02.967210699Z" level=info msg="StopPodSandbox for \"115f43ad54ea1be4448bd3b6241a615debf03a65a23e6d792843f4bb3c813def\" returns successfully" Sep 5 23:55:03.062443 kubelet[2596]: I0905 23:55:03.062026 2596 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/43baeec1-51dd-45df-b7ee-b6b3faf1b6bd-cilium-config-path\") pod \"43baeec1-51dd-45df-b7ee-b6b3faf1b6bd\" (UID: \"43baeec1-51dd-45df-b7ee-b6b3faf1b6bd\") " Sep 5 23:55:03.062443 kubelet[2596]: I0905 23:55:03.062098 2596 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xv54b\" (UniqueName: \"kubernetes.io/projected/43baeec1-51dd-45df-b7ee-b6b3faf1b6bd-kube-api-access-xv54b\") pod \"43baeec1-51dd-45df-b7ee-b6b3faf1b6bd\" (UID: \"43baeec1-51dd-45df-b7ee-b6b3faf1b6bd\") " Sep 5 23:55:03.068734 kubelet[2596]: I0905 23:55:03.068592 2596 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43baeec1-51dd-45df-b7ee-b6b3faf1b6bd-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "43baeec1-51dd-45df-b7ee-b6b3faf1b6bd" (UID: "43baeec1-51dd-45df-b7ee-b6b3faf1b6bd"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 5 23:55:03.069406 kubelet[2596]: I0905 23:55:03.069342 2596 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43baeec1-51dd-45df-b7ee-b6b3faf1b6bd-kube-api-access-xv54b" (OuterVolumeSpecName: "kube-api-access-xv54b") pod "43baeec1-51dd-45df-b7ee-b6b3faf1b6bd" (UID: "43baeec1-51dd-45df-b7ee-b6b3faf1b6bd"). InnerVolumeSpecName "kube-api-access-xv54b". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Sep 5 23:55:03.163033 kubelet[2596]: I0905 23:55:03.162903 2596 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ff5e92fd-da71-4009-afe9-0eef1ae950e6-host-proc-sys-net\") pod \"ff5e92fd-da71-4009-afe9-0eef1ae950e6\" (UID: \"ff5e92fd-da71-4009-afe9-0eef1ae950e6\") " Sep 5 23:55:03.163033 kubelet[2596]: I0905 23:55:03.162973 2596 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ff5e92fd-da71-4009-afe9-0eef1ae950e6-lib-modules\") pod \"ff5e92fd-da71-4009-afe9-0eef1ae950e6\" (UID: \"ff5e92fd-da71-4009-afe9-0eef1ae950e6\") " Sep 5 23:55:03.163033 kubelet[2596]: I0905 23:55:03.162995 2596 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ff5e92fd-da71-4009-afe9-0eef1ae950e6-cni-path\") pod \"ff5e92fd-da71-4009-afe9-0eef1ae950e6\" (UID: \"ff5e92fd-da71-4009-afe9-0eef1ae950e6\") " Sep 5 23:55:03.163033 kubelet[2596]: I0905 23:55:03.163023 2596 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ff5e92fd-da71-4009-afe9-0eef1ae950e6-hubble-tls\") pod \"ff5e92fd-da71-4009-afe9-0eef1ae950e6\" (UID: \"ff5e92fd-da71-4009-afe9-0eef1ae950e6\") " Sep 5 23:55:03.163033 kubelet[2596]: I0905 23:55:03.163047 2596 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ff5e92fd-da71-4009-afe9-0eef1ae950e6-cilium-run\") pod \"ff5e92fd-da71-4009-afe9-0eef1ae950e6\" (UID: \"ff5e92fd-da71-4009-afe9-0eef1ae950e6\") " Sep 5 23:55:03.163468 kubelet[2596]: I0905 23:55:03.163063 2596 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName:
\"kubernetes.io/host-path/ff5e92fd-da71-4009-afe9-0eef1ae950e6-host-proc-sys-kernel\") pod \"ff5e92fd-da71-4009-afe9-0eef1ae950e6\" (UID: \"ff5e92fd-da71-4009-afe9-0eef1ae950e6\") " Sep 5 23:55:03.163468 kubelet[2596]: I0905 23:55:03.163082 2596 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ff5e92fd-da71-4009-afe9-0eef1ae950e6-cilium-cgroup\") pod \"ff5e92fd-da71-4009-afe9-0eef1ae950e6\" (UID: \"ff5e92fd-da71-4009-afe9-0eef1ae950e6\") " Sep 5 23:55:03.163468 kubelet[2596]: I0905 23:55:03.163104 2596 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rhxl8\" (UniqueName: \"kubernetes.io/projected/ff5e92fd-da71-4009-afe9-0eef1ae950e6-kube-api-access-rhxl8\") pod \"ff5e92fd-da71-4009-afe9-0eef1ae950e6\" (UID: \"ff5e92fd-da71-4009-afe9-0eef1ae950e6\") " Sep 5 23:55:03.163468 kubelet[2596]: I0905 23:55:03.163145 2596 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ff5e92fd-da71-4009-afe9-0eef1ae950e6-hostproc\") pod \"ff5e92fd-da71-4009-afe9-0eef1ae950e6\" (UID: \"ff5e92fd-da71-4009-afe9-0eef1ae950e6\") " Sep 5 23:55:03.163468 kubelet[2596]: I0905 23:55:03.163164 2596 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ff5e92fd-da71-4009-afe9-0eef1ae950e6-xtables-lock\") pod \"ff5e92fd-da71-4009-afe9-0eef1ae950e6\" (UID: \"ff5e92fd-da71-4009-afe9-0eef1ae950e6\") " Sep 5 23:55:03.163468 kubelet[2596]: I0905 23:55:03.163185 2596 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ff5e92fd-da71-4009-afe9-0eef1ae950e6-clustermesh-secrets\") pod \"ff5e92fd-da71-4009-afe9-0eef1ae950e6\" (UID: \"ff5e92fd-da71-4009-afe9-0eef1ae950e6\") " Sep 5 23:55:03.163755 kubelet[2596]: I0905 23:55:03.163203 2596 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ff5e92fd-da71-4009-afe9-0eef1ae950e6-etc-cni-netd\") pod \"ff5e92fd-da71-4009-afe9-0eef1ae950e6\" (UID: \"ff5e92fd-da71-4009-afe9-0eef1ae950e6\") " Sep 5 23:55:03.163755 kubelet[2596]: I0905 23:55:03.163227 2596 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ff5e92fd-da71-4009-afe9-0eef1ae950e6-cilium-config-path\") pod \"ff5e92fd-da71-4009-afe9-0eef1ae950e6\" (UID: \"ff5e92fd-da71-4009-afe9-0eef1ae950e6\") " Sep 5 23:55:03.163755 kubelet[2596]: I0905 23:55:03.163243 2596 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ff5e92fd-da71-4009-afe9-0eef1ae950e6-bpf-maps\") pod \"ff5e92fd-da71-4009-afe9-0eef1ae950e6\" (UID: \"ff5e92fd-da71-4009-afe9-0eef1ae950e6\") " Sep 5 23:55:03.163755 kubelet[2596]: I0905 23:55:03.163287 2596 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/43baeec1-51dd-45df-b7ee-b6b3faf1b6bd-cilium-config-path\") on node \"ci-4081-3-5-n-6045d3ec0a\" DevicePath \"\"" Sep 5 23:55:03.163755 kubelet[2596]: I0905 23:55:03.163301 2596 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xv54b\" (UniqueName: \"kubernetes.io/projected/43baeec1-51dd-45df-b7ee-b6b3faf1b6bd-kube-api-access-xv54b\") on node \"ci-4081-3-5-n-6045d3ec0a\" DevicePath \"\"" Sep 5 23:55:03.163755 kubelet[2596]: I0905 23:55:03.163357 2596 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ff5e92fd-da71-4009-afe9-0eef1ae950e6-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "ff5e92fd-da71-4009-afe9-0eef1ae950e6" (UID: "ff5e92fd-da71-4009-afe9-0eef1ae950e6"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 5 23:55:03.164083 kubelet[2596]: I0905 23:55:03.163394 2596 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ff5e92fd-da71-4009-afe9-0eef1ae950e6-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "ff5e92fd-da71-4009-afe9-0eef1ae950e6" (UID: "ff5e92fd-da71-4009-afe9-0eef1ae950e6"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 5 23:55:03.164083 kubelet[2596]: I0905 23:55:03.163411 2596 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ff5e92fd-da71-4009-afe9-0eef1ae950e6-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "ff5e92fd-da71-4009-afe9-0eef1ae950e6" (UID: "ff5e92fd-da71-4009-afe9-0eef1ae950e6"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 5 23:55:03.164083 kubelet[2596]: I0905 23:55:03.163428 2596 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ff5e92fd-da71-4009-afe9-0eef1ae950e6-cni-path" (OuterVolumeSpecName: "cni-path") pod "ff5e92fd-da71-4009-afe9-0eef1ae950e6" (UID: "ff5e92fd-da71-4009-afe9-0eef1ae950e6"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 5 23:55:03.164083 kubelet[2596]: I0905 23:55:03.163768 2596 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ff5e92fd-da71-4009-afe9-0eef1ae950e6-hostproc" (OuterVolumeSpecName: "hostproc") pod "ff5e92fd-da71-4009-afe9-0eef1ae950e6" (UID: "ff5e92fd-da71-4009-afe9-0eef1ae950e6"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 5 23:55:03.164083 kubelet[2596]: I0905 23:55:03.163796 2596 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ff5e92fd-da71-4009-afe9-0eef1ae950e6-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "ff5e92fd-da71-4009-afe9-0eef1ae950e6" (UID: "ff5e92fd-da71-4009-afe9-0eef1ae950e6"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 5 23:55:03.164397 kubelet[2596]: I0905 23:55:03.163811 2596 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ff5e92fd-da71-4009-afe9-0eef1ae950e6-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "ff5e92fd-da71-4009-afe9-0eef1ae950e6" (UID: "ff5e92fd-da71-4009-afe9-0eef1ae950e6"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 5 23:55:03.164397 kubelet[2596]: I0905 23:55:03.163850 2596 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ff5e92fd-da71-4009-afe9-0eef1ae950e6-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "ff5e92fd-da71-4009-afe9-0eef1ae950e6" (UID: "ff5e92fd-da71-4009-afe9-0eef1ae950e6"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 5 23:55:03.166533 kubelet[2596]: I0905 23:55:03.166466 2596 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ff5e92fd-da71-4009-afe9-0eef1ae950e6-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "ff5e92fd-da71-4009-afe9-0eef1ae950e6" (UID: "ff5e92fd-da71-4009-afe9-0eef1ae950e6"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 5 23:55:03.167325 kubelet[2596]: I0905 23:55:03.166872 2596 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff5e92fd-da71-4009-afe9-0eef1ae950e6-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "ff5e92fd-da71-4009-afe9-0eef1ae950e6" (UID: "ff5e92fd-da71-4009-afe9-0eef1ae950e6"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 5 23:55:03.167325 kubelet[2596]: I0905 23:55:03.166933 2596 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ff5e92fd-da71-4009-afe9-0eef1ae950e6-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "ff5e92fd-da71-4009-afe9-0eef1ae950e6" (UID: "ff5e92fd-da71-4009-afe9-0eef1ae950e6"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 5 23:55:03.170782 kubelet[2596]: I0905 23:55:03.170736 2596 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff5e92fd-da71-4009-afe9-0eef1ae950e6-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "ff5e92fd-da71-4009-afe9-0eef1ae950e6" (UID: "ff5e92fd-da71-4009-afe9-0eef1ae950e6"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 5 23:55:03.171087 kubelet[2596]: I0905 23:55:03.171056 2596 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff5e92fd-da71-4009-afe9-0eef1ae950e6-kube-api-access-rhxl8" (OuterVolumeSpecName: "kube-api-access-rhxl8") pod "ff5e92fd-da71-4009-afe9-0eef1ae950e6" (UID: "ff5e92fd-da71-4009-afe9-0eef1ae950e6"). InnerVolumeSpecName "kube-api-access-rhxl8". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 5 23:55:03.171425 kubelet[2596]: I0905 23:55:03.171399 2596 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff5e92fd-da71-4009-afe9-0eef1ae950e6-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "ff5e92fd-da71-4009-afe9-0eef1ae950e6" (UID: "ff5e92fd-da71-4009-afe9-0eef1ae950e6"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 5 23:55:03.264352 kubelet[2596]: I0905 23:55:03.264249 2596 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ff5e92fd-da71-4009-afe9-0eef1ae950e6-hostproc\") on node \"ci-4081-3-5-n-6045d3ec0a\" DevicePath \"\"" Sep 5 23:55:03.264352 kubelet[2596]: I0905 23:55:03.264314 2596 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ff5e92fd-da71-4009-afe9-0eef1ae950e6-xtables-lock\") on node \"ci-4081-3-5-n-6045d3ec0a\" DevicePath \"\"" Sep 5 23:55:03.264352 kubelet[2596]: I0905 23:55:03.264333 2596 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ff5e92fd-da71-4009-afe9-0eef1ae950e6-clustermesh-secrets\") on node \"ci-4081-3-5-n-6045d3ec0a\" DevicePath \"\"" Sep 5 23:55:03.264352 kubelet[2596]: I0905 23:55:03.264347 2596 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ff5e92fd-da71-4009-afe9-0eef1ae950e6-etc-cni-netd\") on node \"ci-4081-3-5-n-6045d3ec0a\" DevicePath \"\"" Sep 5 23:55:03.264352 kubelet[2596]: I0905 23:55:03.264365 2596 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ff5e92fd-da71-4009-afe9-0eef1ae950e6-cilium-config-path\") on node \"ci-4081-3-5-n-6045d3ec0a\" DevicePath \"\"" Sep 5 23:55:03.264352 kubelet[2596]: I0905 23:55:03.264384 2596 
reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ff5e92fd-da71-4009-afe9-0eef1ae950e6-bpf-maps\") on node \"ci-4081-3-5-n-6045d3ec0a\" DevicePath \"\"" Sep 5 23:55:03.264832 kubelet[2596]: I0905 23:55:03.264396 2596 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ff5e92fd-da71-4009-afe9-0eef1ae950e6-host-proc-sys-net\") on node \"ci-4081-3-5-n-6045d3ec0a\" DevicePath \"\"" Sep 5 23:55:03.264832 kubelet[2596]: I0905 23:55:03.264407 2596 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ff5e92fd-da71-4009-afe9-0eef1ae950e6-lib-modules\") on node \"ci-4081-3-5-n-6045d3ec0a\" DevicePath \"\"" Sep 5 23:55:03.264832 kubelet[2596]: I0905 23:55:03.264418 2596 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ff5e92fd-da71-4009-afe9-0eef1ae950e6-cni-path\") on node \"ci-4081-3-5-n-6045d3ec0a\" DevicePath \"\"" Sep 5 23:55:03.264832 kubelet[2596]: I0905 23:55:03.264430 2596 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ff5e92fd-da71-4009-afe9-0eef1ae950e6-hubble-tls\") on node \"ci-4081-3-5-n-6045d3ec0a\" DevicePath \"\"" Sep 5 23:55:03.264832 kubelet[2596]: I0905 23:55:03.264440 2596 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ff5e92fd-da71-4009-afe9-0eef1ae950e6-cilium-run\") on node \"ci-4081-3-5-n-6045d3ec0a\" DevicePath \"\"" Sep 5 23:55:03.264832 kubelet[2596]: I0905 23:55:03.264451 2596 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ff5e92fd-da71-4009-afe9-0eef1ae950e6-host-proc-sys-kernel\") on node \"ci-4081-3-5-n-6045d3ec0a\" DevicePath \"\"" Sep 5 23:55:03.264832 kubelet[2596]: I0905 23:55:03.264463 2596 reconciler_common.go:299] 
"Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ff5e92fd-da71-4009-afe9-0eef1ae950e6-cilium-cgroup\") on node \"ci-4081-3-5-n-6045d3ec0a\" DevicePath \"\"" Sep 5 23:55:03.264832 kubelet[2596]: I0905 23:55:03.264480 2596 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rhxl8\" (UniqueName: \"kubernetes.io/projected/ff5e92fd-da71-4009-afe9-0eef1ae950e6-kube-api-access-rhxl8\") on node \"ci-4081-3-5-n-6045d3ec0a\" DevicePath \"\"" Sep 5 23:55:03.770977 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cfc20d51423d4043ff71af19f3ec4794ea74c641568b1b108be7d498427247f1-rootfs.mount: Deactivated successfully. Sep 5 23:55:03.771609 systemd[1]: var-lib-kubelet-pods-43baeec1\x2d51dd\x2d45df\x2db7ee\x2db6b3faf1b6bd-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxv54b.mount: Deactivated successfully. Sep 5 23:55:03.771932 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-115f43ad54ea1be4448bd3b6241a615debf03a65a23e6d792843f4bb3c813def-rootfs.mount: Deactivated successfully. Sep 5 23:55:03.772249 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-115f43ad54ea1be4448bd3b6241a615debf03a65a23e6d792843f4bb3c813def-shm.mount: Deactivated successfully. Sep 5 23:55:03.772426 systemd[1]: var-lib-kubelet-pods-ff5e92fd\x2dda71\x2d4009\x2dafe9\x2d0eef1ae950e6-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drhxl8.mount: Deactivated successfully. Sep 5 23:55:03.772681 systemd[1]: var-lib-kubelet-pods-ff5e92fd\x2dda71\x2d4009\x2dafe9\x2d0eef1ae950e6-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 5 23:55:03.772840 systemd[1]: var-lib-kubelet-pods-ff5e92fd\x2dda71\x2d4009\x2dafe9\x2d0eef1ae950e6-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Sep 5 23:55:03.791606 kubelet[2596]: I0905 23:55:03.791563 2596 scope.go:117] "RemoveContainer" containerID="b50fe9e232a2c4952cc0bb7e59355facec5aa9935d869db609eceaa68f64d9fe" Sep 5 23:55:03.798072 containerd[1484]: time="2025-09-05T23:55:03.797756654Z" level=info msg="RemoveContainer for \"b50fe9e232a2c4952cc0bb7e59355facec5aa9935d869db609eceaa68f64d9fe\"" Sep 5 23:55:03.804761 systemd[1]: Removed slice kubepods-besteffort-pod43baeec1_51dd_45df_b7ee_b6b3faf1b6bd.slice - libcontainer container kubepods-besteffort-pod43baeec1_51dd_45df_b7ee_b6b3faf1b6bd.slice. Sep 5 23:55:03.810215 systemd[1]: Removed slice kubepods-burstable-podff5e92fd_da71_4009_afe9_0eef1ae950e6.slice - libcontainer container kubepods-burstable-podff5e92fd_da71_4009_afe9_0eef1ae950e6.slice. Sep 5 23:55:03.810319 systemd[1]: kubepods-burstable-podff5e92fd_da71_4009_afe9_0eef1ae950e6.slice: Consumed 7.823s CPU time. Sep 5 23:55:03.811787 containerd[1484]: time="2025-09-05T23:55:03.811698163Z" level=info msg="RemoveContainer for \"b50fe9e232a2c4952cc0bb7e59355facec5aa9935d869db609eceaa68f64d9fe\" returns successfully" Sep 5 23:55:03.812492 kubelet[2596]: I0905 23:55:03.812339 2596 scope.go:117] "RemoveContainer" containerID="b50fe9e232a2c4952cc0bb7e59355facec5aa9935d869db609eceaa68f64d9fe" Sep 5 23:55:03.813025 containerd[1484]: time="2025-09-05T23:55:03.812897116Z" level=error msg="ContainerStatus for \"b50fe9e232a2c4952cc0bb7e59355facec5aa9935d869db609eceaa68f64d9fe\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b50fe9e232a2c4952cc0bb7e59355facec5aa9935d869db609eceaa68f64d9fe\": not found" Sep 5 23:55:03.813509 kubelet[2596]: E0905 23:55:03.813210 2596 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b50fe9e232a2c4952cc0bb7e59355facec5aa9935d869db609eceaa68f64d9fe\": not found" containerID="b50fe9e232a2c4952cc0bb7e59355facec5aa9935d869db609eceaa68f64d9fe" 
Sep 5 23:55:03.813509 kubelet[2596]: I0905 23:55:03.813244 2596 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b50fe9e232a2c4952cc0bb7e59355facec5aa9935d869db609eceaa68f64d9fe"} err="failed to get container status \"b50fe9e232a2c4952cc0bb7e59355facec5aa9935d869db609eceaa68f64d9fe\": rpc error: code = NotFound desc = an error occurred when try to find container \"b50fe9e232a2c4952cc0bb7e59355facec5aa9935d869db609eceaa68f64d9fe\": not found" Sep 5 23:55:03.813509 kubelet[2596]: I0905 23:55:03.813298 2596 scope.go:117] "RemoveContainer" containerID="92fb97589bfc19350a0b7774798d182b4ec76e40dd2d03f28a98eb0c79f96046" Sep 5 23:55:03.817545 containerd[1484]: time="2025-09-05T23:55:03.817252437Z" level=info msg="RemoveContainer for \"92fb97589bfc19350a0b7774798d182b4ec76e40dd2d03f28a98eb0c79f96046\"" Sep 5 23:55:03.820896 containerd[1484]: time="2025-09-05T23:55:03.820837737Z" level=info msg="RemoveContainer for \"92fb97589bfc19350a0b7774798d182b4ec76e40dd2d03f28a98eb0c79f96046\" returns successfully" Sep 5 23:55:03.821336 kubelet[2596]: I0905 23:55:03.821302 2596 scope.go:117] "RemoveContainer" containerID="5b6de473e00aeefcf8e1fbbf3058772e84bd31c60f29915a4c71ef82cd498a73" Sep 5 23:55:03.827459 containerd[1484]: time="2025-09-05T23:55:03.825597550Z" level=info msg="RemoveContainer for \"5b6de473e00aeefcf8e1fbbf3058772e84bd31c60f29915a4c71ef82cd498a73\"" Sep 5 23:55:03.831402 containerd[1484]: time="2025-09-05T23:55:03.829879229Z" level=info msg="RemoveContainer for \"5b6de473e00aeefcf8e1fbbf3058772e84bd31c60f29915a4c71ef82cd498a73\" returns successfully" Sep 5 23:55:03.831673 kubelet[2596]: I0905 23:55:03.830269 2596 scope.go:117] "RemoveContainer" containerID="9673677d76a3bf59076aacfabbc4d15b64c3f7bb8c47a87f7b7d08d04484d79a" Sep 5 23:55:03.836913 containerd[1484]: time="2025-09-05T23:55:03.836843583Z" level=info msg="RemoveContainer for \"9673677d76a3bf59076aacfabbc4d15b64c3f7bb8c47a87f7b7d08d04484d79a\"" Sep 5 
23:55:03.841221 containerd[1484]: time="2025-09-05T23:55:03.841110702Z" level=info msg="RemoveContainer for \"9673677d76a3bf59076aacfabbc4d15b64c3f7bb8c47a87f7b7d08d04484d79a\" returns successfully" Sep 5 23:55:03.841558 kubelet[2596]: I0905 23:55:03.841512 2596 scope.go:117] "RemoveContainer" containerID="f7aef4c721fa6469e4c9d1eb09b49d4e8000f0546d27307d15da40890d8a4c03" Sep 5 23:55:03.844776 containerd[1484]: time="2025-09-05T23:55:03.844003863Z" level=info msg="RemoveContainer for \"f7aef4c721fa6469e4c9d1eb09b49d4e8000f0546d27307d15da40890d8a4c03\"" Sep 5 23:55:03.847070 containerd[1484]: time="2025-09-05T23:55:03.846988466Z" level=info msg="RemoveContainer for \"f7aef4c721fa6469e4c9d1eb09b49d4e8000f0546d27307d15da40890d8a4c03\" returns successfully" Sep 5 23:55:03.847303 kubelet[2596]: I0905 23:55:03.847247 2596 scope.go:117] "RemoveContainer" containerID="d6a0ef29690e812d4d307a5e93462f623f6b4ee6e059bca0695b0703dbcc9141" Sep 5 23:55:03.849079 containerd[1484]: time="2025-09-05T23:55:03.848862158Z" level=info msg="RemoveContainer for \"d6a0ef29690e812d4d307a5e93462f623f6b4ee6e059bca0695b0703dbcc9141\"" Sep 5 23:55:03.854212 containerd[1484]: time="2025-09-05T23:55:03.854167826Z" level=info msg="RemoveContainer for \"d6a0ef29690e812d4d307a5e93462f623f6b4ee6e059bca0695b0703dbcc9141\" returns successfully" Sep 5 23:55:03.855037 kubelet[2596]: I0905 23:55:03.854998 2596 scope.go:117] "RemoveContainer" containerID="92fb97589bfc19350a0b7774798d182b4ec76e40dd2d03f28a98eb0c79f96046" Sep 5 23:55:03.855876 containerd[1484]: time="2025-09-05T23:55:03.855798792Z" level=error msg="ContainerStatus for \"92fb97589bfc19350a0b7774798d182b4ec76e40dd2d03f28a98eb0c79f96046\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"92fb97589bfc19350a0b7774798d182b4ec76e40dd2d03f28a98eb0c79f96046\": not found" Sep 5 23:55:03.856114 kubelet[2596]: E0905 23:55:03.856046 2596 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: 
code = NotFound desc = an error occurred when try to find container \"92fb97589bfc19350a0b7774798d182b4ec76e40dd2d03f28a98eb0c79f96046\": not found" containerID="92fb97589bfc19350a0b7774798d182b4ec76e40dd2d03f28a98eb0c79f96046" Sep 5 23:55:03.856114 kubelet[2596]: I0905 23:55:03.856079 2596 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"92fb97589bfc19350a0b7774798d182b4ec76e40dd2d03f28a98eb0c79f96046"} err="failed to get container status \"92fb97589bfc19350a0b7774798d182b4ec76e40dd2d03f28a98eb0c79f96046\": rpc error: code = NotFound desc = an error occurred when try to find container \"92fb97589bfc19350a0b7774798d182b4ec76e40dd2d03f28a98eb0c79f96046\": not found" Sep 5 23:55:03.856114 kubelet[2596]: I0905 23:55:03.856104 2596 scope.go:117] "RemoveContainer" containerID="5b6de473e00aeefcf8e1fbbf3058772e84bd31c60f29915a4c71ef82cd498a73" Sep 5 23:55:03.857442 containerd[1484]: time="2025-09-05T23:55:03.857402836Z" level=error msg="ContainerStatus for \"5b6de473e00aeefcf8e1fbbf3058772e84bd31c60f29915a4c71ef82cd498a73\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5b6de473e00aeefcf8e1fbbf3058772e84bd31c60f29915a4c71ef82cd498a73\": not found" Sep 5 23:55:03.857848 kubelet[2596]: E0905 23:55:03.857822 2596 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5b6de473e00aeefcf8e1fbbf3058772e84bd31c60f29915a4c71ef82cd498a73\": not found" containerID="5b6de473e00aeefcf8e1fbbf3058772e84bd31c60f29915a4c71ef82cd498a73" Sep 5 23:55:03.858164 kubelet[2596]: I0905 23:55:03.858005 2596 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5b6de473e00aeefcf8e1fbbf3058772e84bd31c60f29915a4c71ef82cd498a73"} err="failed to get container status \"5b6de473e00aeefcf8e1fbbf3058772e84bd31c60f29915a4c71ef82cd498a73\": rpc error: code = NotFound desc = an 
error occurred when try to find container \"5b6de473e00aeefcf8e1fbbf3058772e84bd31c60f29915a4c71ef82cd498a73\": not found" Sep 5 23:55:03.858164 kubelet[2596]: I0905 23:55:03.858039 2596 scope.go:117] "RemoveContainer" containerID="9673677d76a3bf59076aacfabbc4d15b64c3f7bb8c47a87f7b7d08d04484d79a" Sep 5 23:55:03.858503 containerd[1484]: time="2025-09-05T23:55:03.858404264Z" level=error msg="ContainerStatus for \"9673677d76a3bf59076aacfabbc4d15b64c3f7bb8c47a87f7b7d08d04484d79a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9673677d76a3bf59076aacfabbc4d15b64c3f7bb8c47a87f7b7d08d04484d79a\": not found" Sep 5 23:55:03.858609 kubelet[2596]: E0905 23:55:03.858534 2596 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9673677d76a3bf59076aacfabbc4d15b64c3f7bb8c47a87f7b7d08d04484d79a\": not found" containerID="9673677d76a3bf59076aacfabbc4d15b64c3f7bb8c47a87f7b7d08d04484d79a" Sep 5 23:55:03.858609 kubelet[2596]: I0905 23:55:03.858559 2596 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9673677d76a3bf59076aacfabbc4d15b64c3f7bb8c47a87f7b7d08d04484d79a"} err="failed to get container status \"9673677d76a3bf59076aacfabbc4d15b64c3f7bb8c47a87f7b7d08d04484d79a\": rpc error: code = NotFound desc = an error occurred when try to find container \"9673677d76a3bf59076aacfabbc4d15b64c3f7bb8c47a87f7b7d08d04484d79a\": not found" Sep 5 23:55:03.858609 kubelet[2596]: I0905 23:55:03.858578 2596 scope.go:117] "RemoveContainer" containerID="f7aef4c721fa6469e4c9d1eb09b49d4e8000f0546d27307d15da40890d8a4c03" Sep 5 23:55:03.860270 containerd[1484]: time="2025-09-05T23:55:03.859864225Z" level=error msg="ContainerStatus for \"f7aef4c721fa6469e4c9d1eb09b49d4e8000f0546d27307d15da40890d8a4c03\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"f7aef4c721fa6469e4c9d1eb09b49d4e8000f0546d27307d15da40890d8a4c03\": not found" Sep 5 23:55:03.860363 kubelet[2596]: E0905 23:55:03.860143 2596 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f7aef4c721fa6469e4c9d1eb09b49d4e8000f0546d27307d15da40890d8a4c03\": not found" containerID="f7aef4c721fa6469e4c9d1eb09b49d4e8000f0546d27307d15da40890d8a4c03" Sep 5 23:55:03.860363 kubelet[2596]: I0905 23:55:03.860167 2596 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f7aef4c721fa6469e4c9d1eb09b49d4e8000f0546d27307d15da40890d8a4c03"} err="failed to get container status \"f7aef4c721fa6469e4c9d1eb09b49d4e8000f0546d27307d15da40890d8a4c03\": rpc error: code = NotFound desc = an error occurred when try to find container \"f7aef4c721fa6469e4c9d1eb09b49d4e8000f0546d27307d15da40890d8a4c03\": not found" Sep 5 23:55:03.860363 kubelet[2596]: I0905 23:55:03.860184 2596 scope.go:117] "RemoveContainer" containerID="d6a0ef29690e812d4d307a5e93462f623f6b4ee6e059bca0695b0703dbcc9141" Sep 5 23:55:03.862283 containerd[1484]: time="2025-09-05T23:55:03.861472790Z" level=error msg="ContainerStatus for \"d6a0ef29690e812d4d307a5e93462f623f6b4ee6e059bca0695b0703dbcc9141\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d6a0ef29690e812d4d307a5e93462f623f6b4ee6e059bca0695b0703dbcc9141\": not found" Sep 5 23:55:03.862628 kubelet[2596]: E0905 23:55:03.862295 2596 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d6a0ef29690e812d4d307a5e93462f623f6b4ee6e059bca0695b0703dbcc9141\": not found" containerID="d6a0ef29690e812d4d307a5e93462f623f6b4ee6e059bca0695b0703dbcc9141" Sep 5 23:55:03.862628 kubelet[2596]: I0905 23:55:03.862329 2596 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"d6a0ef29690e812d4d307a5e93462f623f6b4ee6e059bca0695b0703dbcc9141"} err="failed to get container status \"d6a0ef29690e812d4d307a5e93462f623f6b4ee6e059bca0695b0703dbcc9141\": rpc error: code = NotFound desc = an error occurred when try to find container \"d6a0ef29690e812d4d307a5e93462f623f6b4ee6e059bca0695b0703dbcc9141\": not found" Sep 5 23:55:04.105752 kubelet[2596]: I0905 23:55:04.105606 2596 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43baeec1-51dd-45df-b7ee-b6b3faf1b6bd" path="/var/lib/kubelet/pods/43baeec1-51dd-45df-b7ee-b6b3faf1b6bd/volumes" Sep 5 23:55:04.107851 kubelet[2596]: I0905 23:55:04.107738 2596 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff5e92fd-da71-4009-afe9-0eef1ae950e6" path="/var/lib/kubelet/pods/ff5e92fd-da71-4009-afe9-0eef1ae950e6/volumes" Sep 5 23:55:04.848576 sshd[4323]: pam_unix(sshd:session): session closed for user core Sep 5 23:55:04.854772 systemd[1]: sshd@49-128.140.56.156:22-139.178.68.195:34666.service: Deactivated successfully. Sep 5 23:55:04.857535 systemd[1]: session-22.scope: Deactivated successfully. Sep 5 23:55:04.857800 systemd[1]: session-22.scope: Consumed 1.570s CPU time. Sep 5 23:55:04.861331 systemd-logind[1460]: Session 22 logged out. Waiting for processes to exit. Sep 5 23:55:04.862593 systemd-logind[1460]: Removed session 22. Sep 5 23:55:05.021995 systemd[1]: Started sshd@50-128.140.56.156:22-139.178.68.195:52756.service - OpenSSH per-connection server daemon (139.178.68.195:52756). 
Sep 5 23:55:05.240889 kubelet[2596]: E0905 23:55:05.240766 2596 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 5 23:55:06.038856 sshd[4482]: Accepted publickey for core from 139.178.68.195 port 52756 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 5 23:55:06.042000 sshd[4482]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:55:06.048289 systemd-logind[1460]: New session 23 of user core. Sep 5 23:55:06.055698 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 5 23:55:07.611374 systemd[1]: Created slice kubepods-burstable-pod6fc01597_b56b_4144_9f27_90281fe70a90.slice - libcontainer container kubepods-burstable-pod6fc01597_b56b_4144_9f27_90281fe70a90.slice. Sep 5 23:55:07.693831 kubelet[2596]: I0905 23:55:07.693240 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6fc01597-b56b-4144-9f27-90281fe70a90-bpf-maps\") pod \"cilium-7z2jw\" (UID: \"6fc01597-b56b-4144-9f27-90281fe70a90\") " pod="kube-system/cilium-7z2jw" Sep 5 23:55:07.693831 kubelet[2596]: I0905 23:55:07.693316 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6fc01597-b56b-4144-9f27-90281fe70a90-cilium-config-path\") pod \"cilium-7z2jw\" (UID: \"6fc01597-b56b-4144-9f27-90281fe70a90\") " pod="kube-system/cilium-7z2jw" Sep 5 23:55:07.693831 kubelet[2596]: I0905 23:55:07.693350 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6fc01597-b56b-4144-9f27-90281fe70a90-hubble-tls\") pod \"cilium-7z2jw\" (UID: \"6fc01597-b56b-4144-9f27-90281fe70a90\") " pod="kube-system/cilium-7z2jw" Sep 5 
23:55:07.693831 kubelet[2596]: I0905 23:55:07.693377 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6fc01597-b56b-4144-9f27-90281fe70a90-host-proc-sys-net\") pod \"cilium-7z2jw\" (UID: \"6fc01597-b56b-4144-9f27-90281fe70a90\") " pod="kube-system/cilium-7z2jw" Sep 5 23:55:07.693831 kubelet[2596]: I0905 23:55:07.693410 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6fc01597-b56b-4144-9f27-90281fe70a90-clustermesh-secrets\") pod \"cilium-7z2jw\" (UID: \"6fc01597-b56b-4144-9f27-90281fe70a90\") " pod="kube-system/cilium-7z2jw" Sep 5 23:55:07.693831 kubelet[2596]: I0905 23:55:07.693439 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rddjv\" (UniqueName: \"kubernetes.io/projected/6fc01597-b56b-4144-9f27-90281fe70a90-kube-api-access-rddjv\") pod \"cilium-7z2jw\" (UID: \"6fc01597-b56b-4144-9f27-90281fe70a90\") " pod="kube-system/cilium-7z2jw" Sep 5 23:55:07.694742 kubelet[2596]: I0905 23:55:07.693473 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6fc01597-b56b-4144-9f27-90281fe70a90-host-proc-sys-kernel\") pod \"cilium-7z2jw\" (UID: \"6fc01597-b56b-4144-9f27-90281fe70a90\") " pod="kube-system/cilium-7z2jw" Sep 5 23:55:07.694742 kubelet[2596]: I0905 23:55:07.693501 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6fc01597-b56b-4144-9f27-90281fe70a90-cilium-run\") pod \"cilium-7z2jw\" (UID: \"6fc01597-b56b-4144-9f27-90281fe70a90\") " pod="kube-system/cilium-7z2jw" Sep 5 23:55:07.694742 kubelet[2596]: I0905 23:55:07.693559 2596 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6fc01597-b56b-4144-9f27-90281fe70a90-cilium-cgroup\") pod \"cilium-7z2jw\" (UID: \"6fc01597-b56b-4144-9f27-90281fe70a90\") " pod="kube-system/cilium-7z2jw" Sep 5 23:55:07.694742 kubelet[2596]: I0905 23:55:07.693592 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6fc01597-b56b-4144-9f27-90281fe70a90-etc-cni-netd\") pod \"cilium-7z2jw\" (UID: \"6fc01597-b56b-4144-9f27-90281fe70a90\") " pod="kube-system/cilium-7z2jw" Sep 5 23:55:07.694742 kubelet[2596]: I0905 23:55:07.693625 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6fc01597-b56b-4144-9f27-90281fe70a90-hostproc\") pod \"cilium-7z2jw\" (UID: \"6fc01597-b56b-4144-9f27-90281fe70a90\") " pod="kube-system/cilium-7z2jw" Sep 5 23:55:07.694742 kubelet[2596]: I0905 23:55:07.693657 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/6fc01597-b56b-4144-9f27-90281fe70a90-cilium-ipsec-secrets\") pod \"cilium-7z2jw\" (UID: \"6fc01597-b56b-4144-9f27-90281fe70a90\") " pod="kube-system/cilium-7z2jw" Sep 5 23:55:07.695048 kubelet[2596]: I0905 23:55:07.693702 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6fc01597-b56b-4144-9f27-90281fe70a90-cni-path\") pod \"cilium-7z2jw\" (UID: \"6fc01597-b56b-4144-9f27-90281fe70a90\") " pod="kube-system/cilium-7z2jw" Sep 5 23:55:07.695048 kubelet[2596]: I0905 23:55:07.693734 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/6fc01597-b56b-4144-9f27-90281fe70a90-lib-modules\") pod \"cilium-7z2jw\" (UID: \"6fc01597-b56b-4144-9f27-90281fe70a90\") " pod="kube-system/cilium-7z2jw" Sep 5 23:55:07.695048 kubelet[2596]: I0905 23:55:07.693765 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6fc01597-b56b-4144-9f27-90281fe70a90-xtables-lock\") pod \"cilium-7z2jw\" (UID: \"6fc01597-b56b-4144-9f27-90281fe70a90\") " pod="kube-system/cilium-7z2jw" Sep 5 23:55:07.735381 sshd[4482]: pam_unix(sshd:session): session closed for user core Sep 5 23:55:07.740373 systemd-logind[1460]: Session 23 logged out. Waiting for processes to exit. Sep 5 23:55:07.740661 systemd[1]: sshd@50-128.140.56.156:22-139.178.68.195:52756.service: Deactivated successfully. Sep 5 23:55:07.744632 systemd[1]: session-23.scope: Deactivated successfully. Sep 5 23:55:07.748070 systemd-logind[1460]: Removed session 23. Sep 5 23:55:07.914452 systemd[1]: Started sshd@51-128.140.56.156:22-139.178.68.195:52772.service - OpenSSH per-connection server daemon (139.178.68.195:52772). Sep 5 23:55:07.917543 containerd[1484]: time="2025-09-05T23:55:07.917506541Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7z2jw,Uid:6fc01597-b56b-4144-9f27-90281fe70a90,Namespace:kube-system,Attempt:0,}" Sep 5 23:55:07.947293 containerd[1484]: time="2025-09-05T23:55:07.947115695Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 23:55:07.947464 containerd[1484]: time="2025-09-05T23:55:07.947440984Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 23:55:07.947619 containerd[1484]: time="2025-09-05T23:55:07.947561107Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:55:07.947871 containerd[1484]: time="2025-09-05T23:55:07.947838155Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:55:07.969470 systemd[1]: Started cri-containerd-124253d0b9a816b3b3c815f8aed8e893e4ce0bd9a70b87631afaaf3cfb4c2397.scope - libcontainer container 124253d0b9a816b3b3c815f8aed8e893e4ce0bd9a70b87631afaaf3cfb4c2397. Sep 5 23:55:07.998562 containerd[1484]: time="2025-09-05T23:55:07.998507033Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7z2jw,Uid:6fc01597-b56b-4144-9f27-90281fe70a90,Namespace:kube-system,Attempt:0,} returns sandbox id \"124253d0b9a816b3b3c815f8aed8e893e4ce0bd9a70b87631afaaf3cfb4c2397\"" Sep 5 23:55:08.005931 containerd[1484]: time="2025-09-05T23:55:08.005777547Z" level=info msg="CreateContainer within sandbox \"124253d0b9a816b3b3c815f8aed8e893e4ce0bd9a70b87631afaaf3cfb4c2397\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 5 23:55:08.015516 containerd[1484]: time="2025-09-05T23:55:08.015443244Z" level=info msg="CreateContainer within sandbox \"124253d0b9a816b3b3c815f8aed8e893e4ce0bd9a70b87631afaaf3cfb4c2397\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"eff9f29a3d3899cf050991ae5ffdf0288f5d6d224a9cd2796919dc4d93c1d395\"" Sep 5 23:55:08.017889 containerd[1484]: time="2025-09-05T23:55:08.016549753Z" level=info msg="StartContainer for \"eff9f29a3d3899cf050991ae5ffdf0288f5d6d224a9cd2796919dc4d93c1d395\"" Sep 5 23:55:08.045580 systemd[1]: Started cri-containerd-eff9f29a3d3899cf050991ae5ffdf0288f5d6d224a9cd2796919dc4d93c1d395.scope - libcontainer container eff9f29a3d3899cf050991ae5ffdf0288f5d6d224a9cd2796919dc4d93c1d395. 
Sep 5 23:55:08.079175 containerd[1484]: time="2025-09-05T23:55:08.079090335Z" level=info msg="StartContainer for \"eff9f29a3d3899cf050991ae5ffdf0288f5d6d224a9cd2796919dc4d93c1d395\" returns successfully" Sep 5 23:55:08.089710 systemd[1]: cri-containerd-eff9f29a3d3899cf050991ae5ffdf0288f5d6d224a9cd2796919dc4d93c1d395.scope: Deactivated successfully. Sep 5 23:55:08.128870 containerd[1484]: time="2025-09-05T23:55:08.128728453Z" level=info msg="shim disconnected" id=eff9f29a3d3899cf050991ae5ffdf0288f5d6d224a9cd2796919dc4d93c1d395 namespace=k8s.io Sep 5 23:55:08.128870 containerd[1484]: time="2025-09-05T23:55:08.128863897Z" level=warning msg="cleaning up after shim disconnected" id=eff9f29a3d3899cf050991ae5ffdf0288f5d6d224a9cd2796919dc4d93c1d395 namespace=k8s.io Sep 5 23:55:08.129412 containerd[1484]: time="2025-09-05T23:55:08.128884777Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 5 23:55:08.829403 containerd[1484]: time="2025-09-05T23:55:08.829184940Z" level=info msg="CreateContainer within sandbox \"124253d0b9a816b3b3c815f8aed8e893e4ce0bd9a70b87631afaaf3cfb4c2397\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 5 23:55:08.849305 containerd[1484]: time="2025-09-05T23:55:08.849254153Z" level=info msg="CreateContainer within sandbox \"124253d0b9a816b3b3c815f8aed8e893e4ce0bd9a70b87631afaaf3cfb4c2397\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"41ed3be29938e1f4eaf7f8721de900bceaba8d26f5fa86dc64de36a6a6769aa2\"" Sep 5 23:55:08.851253 containerd[1484]: time="2025-09-05T23:55:08.851218045Z" level=info msg="StartContainer for \"41ed3be29938e1f4eaf7f8721de900bceaba8d26f5fa86dc64de36a6a6769aa2\"" Sep 5 23:55:08.889546 systemd[1]: Started cri-containerd-41ed3be29938e1f4eaf7f8721de900bceaba8d26f5fa86dc64de36a6a6769aa2.scope - libcontainer container 41ed3be29938e1f4eaf7f8721de900bceaba8d26f5fa86dc64de36a6a6769aa2. 
Sep 5 23:55:08.920637 containerd[1484]: time="2025-09-05T23:55:08.920398083Z" level=info msg="StartContainer for \"41ed3be29938e1f4eaf7f8721de900bceaba8d26f5fa86dc64de36a6a6769aa2\" returns successfully" Sep 5 23:55:08.924715 sshd[4502]: Accepted publickey for core from 139.178.68.195 port 52772 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 5 23:55:08.926573 sshd[4502]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:55:08.934786 systemd[1]: cri-containerd-41ed3be29938e1f4eaf7f8721de900bceaba8d26f5fa86dc64de36a6a6769aa2.scope: Deactivated successfully. Sep 5 23:55:08.935516 systemd-logind[1460]: New session 24 of user core. Sep 5 23:55:08.942918 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 5 23:55:08.963837 containerd[1484]: time="2025-09-05T23:55:08.963743154Z" level=info msg="shim disconnected" id=41ed3be29938e1f4eaf7f8721de900bceaba8d26f5fa86dc64de36a6a6769aa2 namespace=k8s.io Sep 5 23:55:08.964527 containerd[1484]: time="2025-09-05T23:55:08.964208966Z" level=warning msg="cleaning up after shim disconnected" id=41ed3be29938e1f4eaf7f8721de900bceaba8d26f5fa86dc64de36a6a6769aa2 namespace=k8s.io Sep 5 23:55:08.964527 containerd[1484]: time="2025-09-05T23:55:08.964238607Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 5 23:55:09.610866 sshd[4502]: pam_unix(sshd:session): session closed for user core Sep 5 23:55:09.617053 systemd[1]: sshd@51-128.140.56.156:22-139.178.68.195:52772.service: Deactivated successfully. Sep 5 23:55:09.620533 systemd[1]: session-24.scope: Deactivated successfully. Sep 5 23:55:09.625333 systemd-logind[1460]: Session 24 logged out. Waiting for processes to exit. Sep 5 23:55:09.626945 systemd-logind[1460]: Removed session 24. Sep 5 23:55:09.790613 systemd[1]: Started sshd@52-128.140.56.156:22-139.178.68.195:52774.service - OpenSSH per-connection server daemon (139.178.68.195:52774). 
Sep 5 23:55:09.804409 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-41ed3be29938e1f4eaf7f8721de900bceaba8d26f5fa86dc64de36a6a6769aa2-rootfs.mount: Deactivated successfully. Sep 5 23:55:09.831737 containerd[1484]: time="2025-09-05T23:55:09.831669082Z" level=info msg="CreateContainer within sandbox \"124253d0b9a816b3b3c815f8aed8e893e4ce0bd9a70b87631afaaf3cfb4c2397\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 5 23:55:09.855883 containerd[1484]: time="2025-09-05T23:55:09.855837118Z" level=info msg="CreateContainer within sandbox \"124253d0b9a816b3b3c815f8aed8e893e4ce0bd9a70b87631afaaf3cfb4c2397\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"854c25b356c951ed81ad2ce8e0132acd9462790cbd2d82739bd46af75c2805dc\"" Sep 5 23:55:09.856927 containerd[1484]: time="2025-09-05T23:55:09.856900306Z" level=info msg="StartContainer for \"854c25b356c951ed81ad2ce8e0132acd9462790cbd2d82739bd46af75c2805dc\"" Sep 5 23:55:09.916350 systemd[1]: Started cri-containerd-854c25b356c951ed81ad2ce8e0132acd9462790cbd2d82739bd46af75c2805dc.scope - libcontainer container 854c25b356c951ed81ad2ce8e0132acd9462790cbd2d82739bd46af75c2805dc. Sep 5 23:55:09.971634 containerd[1484]: time="2025-09-05T23:55:09.971504602Z" level=info msg="StartContainer for \"854c25b356c951ed81ad2ce8e0132acd9462790cbd2d82739bd46af75c2805dc\" returns successfully" Sep 5 23:55:09.976340 systemd[1]: cri-containerd-854c25b356c951ed81ad2ce8e0132acd9462790cbd2d82739bd46af75c2805dc.scope: Deactivated successfully. 
Sep 5 23:55:10.013211 containerd[1484]: time="2025-09-05T23:55:10.012444837Z" level=info msg="shim disconnected" id=854c25b356c951ed81ad2ce8e0132acd9462790cbd2d82739bd46af75c2805dc namespace=k8s.io Sep 5 23:55:10.013211 containerd[1484]: time="2025-09-05T23:55:10.012856807Z" level=warning msg="cleaning up after shim disconnected" id=854c25b356c951ed81ad2ce8e0132acd9462790cbd2d82739bd46af75c2805dc namespace=k8s.io Sep 5 23:55:10.013211 containerd[1484]: time="2025-09-05T23:55:10.013049412Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 5 23:55:10.028784 containerd[1484]: time="2025-09-05T23:55:10.028616258Z" level=warning msg="cleanup warnings time=\"2025-09-05T23:55:10Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Sep 5 23:55:10.243705 kubelet[2596]: E0905 23:55:10.242985 2596 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 5 23:55:10.779640 sshd[4677]: Accepted publickey for core from 139.178.68.195 port 52774 ssh2: RSA SHA256:+hHHVborSkWo7/0A1ohHVzFaxSLc/9IisClzOe0fYVI Sep 5 23:55:10.782226 sshd[4677]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:55:10.787892 systemd-logind[1460]: New session 25 of user core. Sep 5 23:55:10.799477 systemd[1]: Started session-25.scope - Session 25 of User core. Sep 5 23:55:10.804607 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-854c25b356c951ed81ad2ce8e0132acd9462790cbd2d82739bd46af75c2805dc-rootfs.mount: Deactivated successfully. 
Sep 5 23:55:10.836539 containerd[1484]: time="2025-09-05T23:55:10.836410077Z" level=info msg="CreateContainer within sandbox \"124253d0b9a816b3b3c815f8aed8e893e4ce0bd9a70b87631afaaf3cfb4c2397\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 5 23:55:10.851616 containerd[1484]: time="2025-09-05T23:55:10.851486350Z" level=info msg="CreateContainer within sandbox \"124253d0b9a816b3b3c815f8aed8e893e4ce0bd9a70b87631afaaf3cfb4c2397\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"e439c6915a74dc01f2601ba908abdf5fd9795174fcc98ef0d4b684300bfd85a4\"" Sep 5 23:55:10.852742 containerd[1484]: time="2025-09-05T23:55:10.852528857Z" level=info msg="StartContainer for \"e439c6915a74dc01f2601ba908abdf5fd9795174fcc98ef0d4b684300bfd85a4\"" Sep 5 23:55:10.891533 systemd[1]: Started cri-containerd-e439c6915a74dc01f2601ba908abdf5fd9795174fcc98ef0d4b684300bfd85a4.scope - libcontainer container e439c6915a74dc01f2601ba908abdf5fd9795174fcc98ef0d4b684300bfd85a4. Sep 5 23:55:10.916741 systemd[1]: cri-containerd-e439c6915a74dc01f2601ba908abdf5fd9795174fcc98ef0d4b684300bfd85a4.scope: Deactivated successfully. 
Sep 5 23:55:10.919758 containerd[1484]: time="2025-09-05T23:55:10.919163954Z" level=info msg="StartContainer for \"e439c6915a74dc01f2601ba908abdf5fd9795174fcc98ef0d4b684300bfd85a4\" returns successfully" Sep 5 23:55:10.946658 containerd[1484]: time="2025-09-05T23:55:10.946344583Z" level=info msg="shim disconnected" id=e439c6915a74dc01f2601ba908abdf5fd9795174fcc98ef0d4b684300bfd85a4 namespace=k8s.io Sep 5 23:55:10.946658 containerd[1484]: time="2025-09-05T23:55:10.946428905Z" level=warning msg="cleaning up after shim disconnected" id=e439c6915a74dc01f2601ba908abdf5fd9795174fcc98ef0d4b684300bfd85a4 namespace=k8s.io Sep 5 23:55:10.946658 containerd[1484]: time="2025-09-05T23:55:10.946445866Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 5 23:55:11.806388 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e439c6915a74dc01f2601ba908abdf5fd9795174fcc98ef0d4b684300bfd85a4-rootfs.mount: Deactivated successfully. Sep 5 23:55:11.841614 containerd[1484]: time="2025-09-05T23:55:11.841497517Z" level=info msg="CreateContainer within sandbox \"124253d0b9a816b3b3c815f8aed8e893e4ce0bd9a70b87631afaaf3cfb4c2397\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 5 23:55:11.860911 containerd[1484]: time="2025-09-05T23:55:11.860863937Z" level=info msg="CreateContainer within sandbox \"124253d0b9a816b3b3c815f8aed8e893e4ce0bd9a70b87631afaaf3cfb4c2397\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"38c70ee59fd53073440b5d3ddcb6701e6c45f0051529bd2791f2f632059360d2\"" Sep 5 23:55:11.862789 containerd[1484]: time="2025-09-05T23:55:11.862683104Z" level=info msg="StartContainer for \"38c70ee59fd53073440b5d3ddcb6701e6c45f0051529bd2791f2f632059360d2\"" Sep 5 23:55:11.899367 systemd[1]: Started cri-containerd-38c70ee59fd53073440b5d3ddcb6701e6c45f0051529bd2791f2f632059360d2.scope - libcontainer container 38c70ee59fd53073440b5d3ddcb6701e6c45f0051529bd2791f2f632059360d2. 
Sep 5 23:55:11.929832 containerd[1484]: time="2025-09-05T23:55:11.929785717Z" level=info msg="StartContainer for \"38c70ee59fd53073440b5d3ddcb6701e6c45f0051529bd2791f2f632059360d2\" returns successfully" Sep 5 23:55:12.242164 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Sep 5 23:55:13.506069 systemd[1]: run-containerd-runc-k8s.io-38c70ee59fd53073440b5d3ddcb6701e6c45f0051529bd2791f2f632059360d2-runc.FCZ9EH.mount: Deactivated successfully. Sep 5 23:55:14.970858 kubelet[2596]: I0905 23:55:14.969291 2596 setters.go:618] "Node became not ready" node="ci-4081-3-5-n-6045d3ec0a" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-05T23:55:14Z","lastTransitionTime":"2025-09-05T23:55:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Sep 5 23:55:15.204429 systemd-networkd[1375]: lxc_health: Link UP Sep 5 23:55:15.216426 systemd-networkd[1375]: lxc_health: Gained carrier Sep 5 23:55:15.941015 kubelet[2596]: I0905 23:55:15.940277 2596 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-7z2jw" podStartSLOduration=8.94026094 podStartE2EDuration="8.94026094s" podCreationTimestamp="2025-09-05 23:55:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 23:55:12.876530962 +0000 UTC m=+222.898382203" watchObservedRunningTime="2025-09-05 23:55:15.94026094 +0000 UTC m=+225.962112181" Sep 5 23:55:16.636416 systemd-networkd[1375]: lxc_health: Gained IPv6LL Sep 5 23:55:17.843890 systemd[1]: run-containerd-runc-k8s.io-38c70ee59fd53073440b5d3ddcb6701e6c45f0051529bd2791f2f632059360d2-runc.5HEilV.mount: Deactivated successfully. 
Sep 5 23:55:22.358061 sshd[4677]: pam_unix(sshd:session): session closed for user core Sep 5 23:55:22.363018 systemd[1]: sshd@52-128.140.56.156:22-139.178.68.195:52774.service: Deactivated successfully. Sep 5 23:55:22.365073 systemd[1]: session-25.scope: Deactivated successfully. Sep 5 23:55:22.367667 systemd-logind[1460]: Session 25 logged out. Waiting for processes to exit. Sep 5 23:55:22.371253 systemd-logind[1460]: Removed session 25. Sep 5 23:55:25.072453 systemd[1]: Started sshd@53-128.140.56.156:22-103.99.206.83:54336.service - OpenSSH per-connection server daemon (103.99.206.83:54336). Sep 5 23:55:25.419655 sshd[5472]: Connection closed by 103.99.206.83 port 54336 [preauth] Sep 5 23:55:25.422395 systemd[1]: sshd@53-128.140.56.156:22-103.99.206.83:54336.service: Deactivated successfully. Sep 5 23:55:30.140921 containerd[1484]: time="2025-09-05T23:55:30.140627952Z" level=info msg="StopPodSandbox for \"115f43ad54ea1be4448bd3b6241a615debf03a65a23e6d792843f4bb3c813def\"" Sep 5 23:55:30.140921 containerd[1484]: time="2025-09-05T23:55:30.140738034Z" level=info msg="TearDown network for sandbox \"115f43ad54ea1be4448bd3b6241a615debf03a65a23e6d792843f4bb3c813def\" successfully" Sep 5 23:55:30.140921 containerd[1484]: time="2025-09-05T23:55:30.140751674Z" level=info msg="StopPodSandbox for \"115f43ad54ea1be4448bd3b6241a615debf03a65a23e6d792843f4bb3c813def\" returns successfully" Sep 5 23:55:30.141911 containerd[1484]: time="2025-09-05T23:55:30.141882699Z" level=info msg="RemovePodSandbox for \"115f43ad54ea1be4448bd3b6241a615debf03a65a23e6d792843f4bb3c813def\"" Sep 5 23:55:30.141971 containerd[1484]: time="2025-09-05T23:55:30.141917260Z" level=info msg="Forcibly stopping sandbox \"115f43ad54ea1be4448bd3b6241a615debf03a65a23e6d792843f4bb3c813def\"" Sep 5 23:55:30.142051 containerd[1484]: time="2025-09-05T23:55:30.141979261Z" level=info msg="TearDown network for sandbox \"115f43ad54ea1be4448bd3b6241a615debf03a65a23e6d792843f4bb3c813def\" successfully" Sep 5 
23:55:30.145343 containerd[1484]: time="2025-09-05T23:55:30.145284494Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"115f43ad54ea1be4448bd3b6241a615debf03a65a23e6d792843f4bb3c813def\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 5 23:55:30.145485 containerd[1484]: time="2025-09-05T23:55:30.145355455Z" level=info msg="RemovePodSandbox \"115f43ad54ea1be4448bd3b6241a615debf03a65a23e6d792843f4bb3c813def\" returns successfully" Sep 5 23:55:30.146468 containerd[1484]: time="2025-09-05T23:55:30.146052350Z" level=info msg="StopPodSandbox for \"cfc20d51423d4043ff71af19f3ec4794ea74c641568b1b108be7d498427247f1\"" Sep 5 23:55:30.146468 containerd[1484]: time="2025-09-05T23:55:30.146183593Z" level=info msg="TearDown network for sandbox \"cfc20d51423d4043ff71af19f3ec4794ea74c641568b1b108be7d498427247f1\" successfully" Sep 5 23:55:30.146468 containerd[1484]: time="2025-09-05T23:55:30.146220354Z" level=info msg="StopPodSandbox for \"cfc20d51423d4043ff71af19f3ec4794ea74c641568b1b108be7d498427247f1\" returns successfully" Sep 5 23:55:30.147099 containerd[1484]: time="2025-09-05T23:55:30.146926609Z" level=info msg="RemovePodSandbox for \"cfc20d51423d4043ff71af19f3ec4794ea74c641568b1b108be7d498427247f1\"" Sep 5 23:55:30.147099 containerd[1484]: time="2025-09-05T23:55:30.146967210Z" level=info msg="Forcibly stopping sandbox \"cfc20d51423d4043ff71af19f3ec4794ea74c641568b1b108be7d498427247f1\"" Sep 5 23:55:30.147099 containerd[1484]: time="2025-09-05T23:55:30.147039212Z" level=info msg="TearDown network for sandbox \"cfc20d51423d4043ff71af19f3ec4794ea74c641568b1b108be7d498427247f1\" successfully" Sep 5 23:55:30.151148 containerd[1484]: time="2025-09-05T23:55:30.151038580Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cfc20d51423d4043ff71af19f3ec4794ea74c641568b1b108be7d498427247f1\": an error occurred when try to find sandbox: not found. 
Sending the event with nil podSandboxStatus." Sep 5 23:55:30.151148 containerd[1484]: time="2025-09-05T23:55:30.151102541Z" level=info msg="RemovePodSandbox \"cfc20d51423d4043ff71af19f3ec4794ea74c641568b1b108be7d498427247f1\" returns successfully" Sep 5 23:55:54.464262 kubelet[2596]: E0905 23:55:54.464095 2596 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:57572->10.0.0.2:2379: read: connection timed out" Sep 5 23:55:54.469627 systemd[1]: cri-containerd-64ac25581b618bc193eaa13c196c59acff6da75bbe0ead4f0fff64d9f603995a.scope: Deactivated successfully. Sep 5 23:55:54.469913 systemd[1]: cri-containerd-64ac25581b618bc193eaa13c196c59acff6da75bbe0ead4f0fff64d9f603995a.scope: Consumed 4.934s CPU time, 16.2M memory peak, 0B memory swap peak. Sep 5 23:55:54.494797 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-64ac25581b618bc193eaa13c196c59acff6da75bbe0ead4f0fff64d9f603995a-rootfs.mount: Deactivated successfully. Sep 5 23:55:54.497585 containerd[1484]: time="2025-09-05T23:55:54.497505712Z" level=info msg="shim disconnected" id=64ac25581b618bc193eaa13c196c59acff6da75bbe0ead4f0fff64d9f603995a namespace=k8s.io Sep 5 23:55:54.497585 containerd[1484]: time="2025-09-05T23:55:54.497584513Z" level=warning msg="cleaning up after shim disconnected" id=64ac25581b618bc193eaa13c196c59acff6da75bbe0ead4f0fff64d9f603995a namespace=k8s.io Sep 5 23:55:54.497585 containerd[1484]: time="2025-09-05T23:55:54.497594433Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 5 23:55:54.549442 systemd[1]: cri-containerd-55d1f97ccdabefaa0b3a2545b9cacb44c3b9bf659b95fe868108a15683f55607.scope: Deactivated successfully. Sep 5 23:55:54.551250 systemd[1]: cri-containerd-55d1f97ccdabefaa0b3a2545b9cacb44c3b9bf659b95fe868108a15683f55607.scope: Consumed 5.075s CPU time, 17.7M memory peak, 0B memory swap peak. 
Sep 5 23:55:54.574367 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-55d1f97ccdabefaa0b3a2545b9cacb44c3b9bf659b95fe868108a15683f55607-rootfs.mount: Deactivated successfully. Sep 5 23:55:54.580706 containerd[1484]: time="2025-09-05T23:55:54.580481795Z" level=info msg="shim disconnected" id=55d1f97ccdabefaa0b3a2545b9cacb44c3b9bf659b95fe868108a15683f55607 namespace=k8s.io Sep 5 23:55:54.580706 containerd[1484]: time="2025-09-05T23:55:54.580632558Z" level=warning msg="cleaning up after shim disconnected" id=55d1f97ccdabefaa0b3a2545b9cacb44c3b9bf659b95fe868108a15683f55607 namespace=k8s.io Sep 5 23:55:54.580706 containerd[1484]: time="2025-09-05T23:55:54.580652358Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 5 23:55:54.952938 kubelet[2596]: I0905 23:55:54.952616 2596 scope.go:117] "RemoveContainer" containerID="64ac25581b618bc193eaa13c196c59acff6da75bbe0ead4f0fff64d9f603995a" Sep 5 23:55:54.956323 containerd[1484]: time="2025-09-05T23:55:54.956100889Z" level=info msg="CreateContainer within sandbox \"98d52ca5740e0d7116ab4c7f8ac7dac6749767e1bafd69600739ff1133171bcb\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Sep 5 23:55:54.957848 kubelet[2596]: I0905 23:55:54.957599 2596 scope.go:117] "RemoveContainer" containerID="55d1f97ccdabefaa0b3a2545b9cacb44c3b9bf659b95fe868108a15683f55607" Sep 5 23:55:54.960392 containerd[1484]: time="2025-09-05T23:55:54.960206364Z" level=info msg="CreateContainer within sandbox \"0b21bb091ceaf7d752881f21ae8fe5d6aa8e7d9b0c78ec29796b0234059758e3\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Sep 5 23:55:54.975912 containerd[1484]: time="2025-09-05T23:55:54.975823691Z" level=info msg="CreateContainer within sandbox \"0b21bb091ceaf7d752881f21ae8fe5d6aa8e7d9b0c78ec29796b0234059758e3\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"2d9b5a4df5749261331a0fd9d74d248d7d9061e5e18943d8f5df0e4a4a5e22fd\"" Sep 5 23:55:54.978376 
containerd[1484]: time="2025-09-05T23:55:54.978317417Z" level=info msg="StartContainer for \"2d9b5a4df5749261331a0fd9d74d248d7d9061e5e18943d8f5df0e4a4a5e22fd\"" Sep 5 23:55:54.988473 containerd[1484]: time="2025-09-05T23:55:54.988423762Z" level=info msg="CreateContainer within sandbox \"98d52ca5740e0d7116ab4c7f8ac7dac6749767e1bafd69600739ff1133171bcb\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"576b10237b2efee421f28893e326280f349606399be4b26bf43a50d8f09efd2b\"" Sep 5 23:55:54.989293 containerd[1484]: time="2025-09-05T23:55:54.989229137Z" level=info msg="StartContainer for \"576b10237b2efee421f28893e326280f349606399be4b26bf43a50d8f09efd2b\"" Sep 5 23:55:55.011280 systemd[1]: Started cri-containerd-2d9b5a4df5749261331a0fd9d74d248d7d9061e5e18943d8f5df0e4a4a5e22fd.scope - libcontainer container 2d9b5a4df5749261331a0fd9d74d248d7d9061e5e18943d8f5df0e4a4a5e22fd. Sep 5 23:55:55.029385 systemd[1]: Started cri-containerd-576b10237b2efee421f28893e326280f349606399be4b26bf43a50d8f09efd2b.scope - libcontainer container 576b10237b2efee421f28893e326280f349606399be4b26bf43a50d8f09efd2b. 
Sep 5 23:55:55.081538 containerd[1484]: time="2025-09-05T23:55:55.081142694Z" level=info msg="StartContainer for \"2d9b5a4df5749261331a0fd9d74d248d7d9061e5e18943d8f5df0e4a4a5e22fd\" returns successfully" Sep 5 23:55:55.085887 containerd[1484]: time="2025-09-05T23:55:55.085732538Z" level=info msg="StartContainer for \"576b10237b2efee421f28893e326280f349606399be4b26bf43a50d8f09efd2b\" returns successfully" Sep 5 23:55:58.494711 kubelet[2596]: E0905 23:55:58.494520 2596 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:57386->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4081-3-5-n-6045d3ec0a.18628830db85820e kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4081-3-5-n-6045d3ec0a,UID:badb218c57c7dd5b5fad8db5f5643e05,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4081-3-5-n-6045d3ec0a,},FirstTimestamp:2025-09-05 23:55:48.050682382 +0000 UTC m=+258.072533663,LastTimestamp:2025-09-05 23:55:48.050682382 +0000 UTC m=+258.072533663,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-5-n-6045d3ec0a,}"