Sep 12 17:13:49.879356 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Sep 12 17:13:49.879387 kernel: Linux version 6.6.106-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT Fri Sep 12 15:34:33 -00 2025 Sep 12 17:13:49.879399 kernel: KASLR enabled Sep 12 17:13:49.879405 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II Sep 12 17:13:49.879411 kernel: efi: SMBIOS 3.0=0x139ed0000 MEMATTR=0x1390bb018 ACPI 2.0=0x136760018 RNG=0x13676e918 MEMRESERVE=0x136b41218 Sep 12 17:13:49.879417 kernel: random: crng init done Sep 12 17:13:49.879424 kernel: secureboot: Secure boot disabled Sep 12 17:13:49.879430 kernel: ACPI: Early table checksum verification disabled Sep 12 17:13:49.879436 kernel: ACPI: RSDP 0x0000000136760018 000024 (v02 BOCHS ) Sep 12 17:13:49.879443 kernel: ACPI: XSDT 0x000000013676FE98 00006C (v01 BOCHS BXPC 00000001 01000013) Sep 12 17:13:49.879449 kernel: ACPI: FACP 0x000000013676FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Sep 12 17:13:49.879455 kernel: ACPI: DSDT 0x0000000136767518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 12 17:13:49.879461 kernel: ACPI: APIC 0x000000013676FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001) Sep 12 17:13:49.879467 kernel: ACPI: PPTT 0x000000013676FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 12 17:13:49.879474 kernel: ACPI: GTDT 0x000000013676D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 12 17:13:49.879483 kernel: ACPI: MCFG 0x000000013676FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 12 17:13:49.879489 kernel: ACPI: SPCR 0x000000013676E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 12 17:13:49.879496 kernel: ACPI: DBG2 0x000000013676E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Sep 12 17:13:49.879502 kernel: ACPI: IORT 0x000000013676E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Sep 12 17:13:49.879508 kernel: ACPI: BGRT 0x000000013676E798 000038 (v01 INTEL EDK2 00000002 01000013) Sep 12 17:13:49.879514 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600 Sep 12 17:13:49.879520 kernel: NUMA: Failed to initialise from firmware Sep 12 17:13:49.879527 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff] Sep 12 17:13:49.879533 kernel: NUMA: NODE_DATA [mem 0x13966f800-0x139674fff] Sep 12 17:13:49.879539 kernel: Zone ranges: Sep 12 17:13:49.879547 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff] Sep 12 17:13:49.879553 kernel: DMA32 empty Sep 12 17:13:49.879559 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff] Sep 12 17:13:49.879566 kernel: Movable zone start for each node Sep 12 17:13:49.879572 kernel: Early memory node ranges Sep 12 17:13:49.879578 kernel: node 0: [mem 0x0000000040000000-0x000000013666ffff] Sep 12 17:13:49.879585 kernel: node 0: [mem 0x0000000136670000-0x000000013667ffff] Sep 12 17:13:49.879591 kernel: node 0: [mem 0x0000000136680000-0x000000013676ffff] Sep 12 17:13:49.879597 kernel: node 0: [mem 0x0000000136770000-0x0000000136b3ffff] Sep 12 17:13:49.879603 kernel: node 0: [mem 0x0000000136b40000-0x0000000139e1ffff] Sep 12 17:13:49.879609 kernel: node 0: [mem 0x0000000139e20000-0x0000000139eaffff] Sep 12 17:13:49.879615 kernel: node 0: [mem 0x0000000139eb0000-0x0000000139ebffff] Sep 12 17:13:49.879623 kernel: node 0: [mem 0x0000000139ec0000-0x0000000139fdffff] Sep 12 17:13:49.879629 kernel: node 0: [mem 0x0000000139fe0000-0x0000000139ffffff] Sep 12 17:13:49.879636 kernel: Initmem setup node 
0 [mem 0x0000000040000000-0x0000000139ffffff] Sep 12 17:13:49.879646 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges Sep 12 17:13:49.879652 kernel: psci: probing for conduit method from ACPI. Sep 12 17:13:49.879659 kernel: psci: PSCIv1.1 detected in firmware. Sep 12 17:13:49.879667 kernel: psci: Using standard PSCI v0.2 function IDs Sep 12 17:13:49.879674 kernel: psci: Trusted OS migration not required Sep 12 17:13:49.879680 kernel: psci: SMC Calling Convention v1.1 Sep 12 17:13:49.879687 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Sep 12 17:13:49.879694 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976 Sep 12 17:13:49.879700 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096 Sep 12 17:13:49.879707 kernel: pcpu-alloc: [0] 0 [0] 1 Sep 12 17:13:49.879714 kernel: Detected PIPT I-cache on CPU0 Sep 12 17:13:49.879720 kernel: CPU features: detected: GIC system register CPU interface Sep 12 17:13:49.879727 kernel: CPU features: detected: Hardware dirty bit management Sep 12 17:13:49.879735 kernel: CPU features: detected: Spectre-v4 Sep 12 17:13:49.879742 kernel: CPU features: detected: Spectre-BHB Sep 12 17:13:49.879748 kernel: CPU features: kernel page table isolation forced ON by KASLR Sep 12 17:13:49.879755 kernel: CPU features: detected: Kernel page table isolation (KPTI) Sep 12 17:13:49.879761 kernel: CPU features: detected: ARM erratum 1418040 Sep 12 17:13:49.879768 kernel: CPU features: detected: SSBS not fully self-synchronizing Sep 12 17:13:49.879774 kernel: alternatives: applying boot alternatives Sep 12 17:13:49.879782 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=82b413d7549dba6b35b1edf421a17f61aa80704059d10fedd611b1eff5298abd Sep 12 17:13:49.879789 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Sep 12 17:13:49.879795 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Sep 12 17:13:49.879802 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 12 17:13:49.879810 kernel: Fallback order for Node 0: 0 Sep 12 17:13:49.879817 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1008000 Sep 12 17:13:49.879824 kernel: Policy zone: Normal Sep 12 17:13:49.879830 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 12 17:13:49.879837 kernel: software IO TLB: area num 2. Sep 12 17:13:49.879843 kernel: software IO TLB: mapped [mem 0x00000000fbfff000-0x00000000fffff000] (64MB) Sep 12 17:13:49.879850 kernel: Memory: 3883768K/4096000K available (10368K kernel code, 2186K rwdata, 8104K rodata, 38400K init, 897K bss, 212232K reserved, 0K cma-reserved) Sep 12 17:13:49.879857 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Sep 12 17:13:49.879864 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 12 17:13:49.879910 kernel: rcu: RCU event tracing is enabled. Sep 12 17:13:49.879922 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Sep 12 17:13:49.879929 kernel: Trampoline variant of Tasks RCU enabled. Sep 12 17:13:49.879938 kernel: Tracing variant of Tasks RCU enabled. 
Sep 12 17:13:49.879945 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Sep 12 17:13:49.879952 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Sep 12 17:13:49.879959 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Sep 12 17:13:49.879965 kernel: GICv3: 256 SPIs implemented Sep 12 17:13:49.879972 kernel: GICv3: 0 Extended SPIs implemented Sep 12 17:13:49.879978 kernel: Root IRQ handler: gic_handle_irq Sep 12 17:13:49.879985 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Sep 12 17:13:49.879991 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Sep 12 17:13:49.879998 kernel: ITS [mem 0x08080000-0x0809ffff] Sep 12 17:13:49.880004 kernel: ITS@0x0000000008080000: allocated 8192 Devices @1000c0000 (indirect, esz 8, psz 64K, shr 1) Sep 12 17:13:49.880013 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @1000d0000 (flat, esz 8, psz 64K, shr 1) Sep 12 17:13:49.880019 kernel: GICv3: using LPI property table @0x00000001000e0000 Sep 12 17:13:49.880026 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000001000f0000 Sep 12 17:13:49.880033 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Sep 12 17:13:49.880039 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 12 17:13:49.880046 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Sep 12 17:13:49.880053 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Sep 12 17:13:49.880059 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Sep 12 17:13:49.880066 kernel: Console: colour dummy device 80x25 Sep 12 17:13:49.880072 kernel: ACPI: Core revision 20230628 Sep 12 17:13:49.880080 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Sep 12 17:13:49.880088 kernel: pid_max: default: 32768 minimum: 301 Sep 12 17:13:49.880095 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Sep 12 17:13:49.880102 kernel: landlock: Up and running. Sep 12 17:13:49.880109 kernel: SELinux: Initializing. Sep 12 17:13:49.880115 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 12 17:13:49.880122 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 12 17:13:49.880129 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Sep 12 17:13:49.880136 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Sep 12 17:13:49.880142 kernel: rcu: Hierarchical SRCU implementation. Sep 12 17:13:49.880151 kernel: rcu: Max phase no-delay instances is 400. Sep 12 17:13:49.880157 kernel: Platform MSI: ITS@0x8080000 domain created Sep 12 17:13:49.880164 kernel: PCI/MSI: ITS@0x8080000 domain created Sep 12 17:13:49.880171 kernel: Remapping and enabling EFI services. Sep 12 17:13:49.880178 kernel: smp: Bringing up secondary CPUs ... 
Sep 12 17:13:49.880184 kernel: Detected PIPT I-cache on CPU1 Sep 12 17:13:49.880191 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Sep 12 17:13:49.880198 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100100000 Sep 12 17:13:49.880226 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 12 17:13:49.880236 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Sep 12 17:13:49.880243 kernel: smp: Brought up 1 node, 2 CPUs Sep 12 17:13:49.880254 kernel: SMP: Total of 2 processors activated. Sep 12 17:13:49.880263 kernel: CPU features: detected: 32-bit EL0 Support Sep 12 17:13:49.880270 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Sep 12 17:13:49.880277 kernel: CPU features: detected: Common not Private translations Sep 12 17:13:49.880284 kernel: CPU features: detected: CRC32 instructions Sep 12 17:13:49.880291 kernel: CPU features: detected: Enhanced Virtualization Traps Sep 12 17:13:49.880298 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Sep 12 17:13:49.880307 kernel: CPU features: detected: LSE atomic instructions Sep 12 17:13:49.880314 kernel: CPU features: detected: Privileged Access Never Sep 12 17:13:49.880321 kernel: CPU features: detected: RAS Extension Support Sep 12 17:13:49.880328 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Sep 12 17:13:49.880335 kernel: CPU: All CPU(s) started at EL1 Sep 12 17:13:49.880342 kernel: alternatives: applying system-wide alternatives Sep 12 17:13:49.880349 kernel: devtmpfs: initialized Sep 12 17:13:49.880357 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 12 17:13:49.880366 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Sep 12 17:13:49.880374 kernel: pinctrl core: initialized pinctrl subsystem Sep 12 17:13:49.880381 kernel: SMBIOS 3.0.0 present. Sep 12 17:13:49.880388 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017 Sep 12 17:13:49.880396 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 12 17:13:49.880403 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Sep 12 17:13:49.880411 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Sep 12 17:13:49.880418 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Sep 12 17:13:49.880425 kernel: audit: initializing netlink subsys (disabled) Sep 12 17:13:49.880434 kernel: audit: type=2000 audit(0.012:1): state=initialized audit_enabled=0 res=1 Sep 12 17:13:49.880441 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 12 17:13:49.880448 kernel: cpuidle: using governor menu Sep 12 17:13:49.880456 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Sep 12 17:13:49.880463 kernel: ASID allocator initialised with 32768 entries Sep 12 17:13:49.880470 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 12 17:13:49.880477 kernel: Serial: AMBA PL011 UART driver Sep 12 17:13:49.880484 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Sep 12 17:13:49.880491 kernel: Modules: 0 pages in range for non-PLT usage Sep 12 17:13:49.880500 kernel: Modules: 509248 pages in range for PLT usage Sep 12 17:13:49.880508 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Sep 12 17:13:49.880515 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Sep 12 17:13:49.880522 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Sep 12 17:13:49.880529 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Sep 12 17:13:49.880536 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Sep 12 17:13:49.880543 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Sep 12 17:13:49.880551 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Sep 12 17:13:49.880558 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Sep 12 17:13:49.880566 kernel: ACPI: Added _OSI(Module Device) Sep 12 17:13:49.880574 kernel: ACPI: Added _OSI(Processor Device) Sep 12 17:13:49.880581 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 12 17:13:49.880588 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Sep 12 17:13:49.880595 kernel: ACPI: Interpreter enabled Sep 12 17:13:49.880602 kernel: ACPI: Using GIC for interrupt routing Sep 12 17:13:49.880609 kernel: ACPI: MCFG table detected, 1 entries Sep 12 17:13:49.880616 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Sep 12 17:13:49.880623 kernel: printk: console [ttyAMA0] enabled Sep 12 17:13:49.880632 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Sep 12 17:13:49.880800 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Sep 12 17:13:49.880898 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Sep 12 17:13:49.880971 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Sep 12 17:13:49.884253 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Sep 12 17:13:49.884361 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Sep 12 17:13:49.884372 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Sep 12 17:13:49.884387 kernel: PCI host bridge to bus 0000:00 Sep 12 17:13:49.884468 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Sep 12 17:13:49.884528 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Sep 12 17:13:49.884585 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Sep 12 17:13:49.884642 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Sep 12 17:13:49.884726 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 Sep 12 17:13:49.884807 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000 Sep 12 17:13:49.884925 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x11289000-0x11289fff] Sep 12 17:13:49.885002 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000600000-0x8000603fff 64bit pref] Sep 12 17:13:49.885085 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 Sep 12 17:13:49.885151 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x11288000-0x11288fff] Sep 12 
17:13:49.887265 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 Sep 12 17:13:49.887376 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x11287000-0x11287fff] Sep 12 17:13:49.887460 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 Sep 12 17:13:49.887524 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x11286000-0x11286fff] Sep 12 17:13:49.887595 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 Sep 12 17:13:49.887660 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x11285000-0x11285fff] Sep 12 17:13:49.887731 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 Sep 12 17:13:49.887794 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x11284000-0x11284fff] Sep 12 17:13:49.887868 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 Sep 12 17:13:49.887961 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x11283000-0x11283fff] Sep 12 17:13:49.888038 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 Sep 12 17:13:49.888104 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x11282000-0x11282fff] Sep 12 17:13:49.888188 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 Sep 12 17:13:49.889400 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x11281000-0x11281fff] Sep 12 17:13:49.889506 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400 Sep 12 17:13:49.889578 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x11280000-0x11280fff] Sep 12 17:13:49.889663 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002 Sep 12 17:13:49.889731 kernel: pci 0000:00:04.0: reg 0x10: [io 0x0000-0x0007] Sep 12 17:13:49.889814 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000 Sep 12 17:13:49.889936 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x11000000-0x11000fff] Sep 12 17:13:49.890030 kernel: pci 0000:01:00.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] Sep 12 17:13:49.890105 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref] Sep 12 17:13:49.890186 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330 Sep 12 17:13:49.890310 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x10e00000-0x10e03fff 64bit] Sep 12 17:13:49.890398 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000 Sep 12 17:13:49.890468 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x10c00000-0x10c00fff] Sep 12 17:13:49.890535 kernel: pci 0000:03:00.0: reg 0x20: [mem 0x8000100000-0x8000103fff 64bit pref] Sep 12 17:13:49.890617 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00 Sep 12 17:13:49.890684 kernel: pci 0000:04:00.0: reg 0x20: [mem 0x8000200000-0x8000203fff 64bit pref] Sep 12 17:13:49.890759 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00 Sep 12 17:13:49.890828 kernel: pci 0000:05:00.0: reg 0x20: [mem 0x8000300000-0x8000303fff 64bit pref] Sep 12 17:13:49.890924 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000 Sep 12 17:13:49.890995 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x10600000-0x10600fff] Sep 12 17:13:49.891063 kernel: pci 0000:06:00.0: reg 0x20: [mem 0x8000400000-0x8000403fff 64bit pref] Sep 12 17:13:49.891164 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000 Sep 12 17:13:49.892394 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x10400000-0x10400fff] Sep 12 17:13:49.892475 kernel: pci 0000:07:00.0: reg 0x20: [mem 0x8000500000-0x8000503fff 64bit pref] Sep 12 17:13:49.892543 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref] Sep 12 17:13:49.892613 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 Sep 12 17:13:49.892679 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit 
pref] to [bus 01] add_size 100000 add_align 100000 Sep 12 17:13:49.892749 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000 Sep 12 17:13:49.892818 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 Sep 12 17:13:49.892925 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 Sep 12 17:13:49.893000 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000 Sep 12 17:13:49.893070 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Sep 12 17:13:49.893133 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000 Sep 12 17:13:49.893198 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000 Sep 12 17:13:49.894748 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Sep 12 17:13:49.894817 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000 Sep 12 17:13:49.894925 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 Sep 12 17:13:49.895005 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000 Sep 12 17:13:49.895071 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000 Sep 12 17:13:49.895133 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x000fffff] to [bus 05] add_size 200000 add_align 100000 Sep 12 17:13:49.895238 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Sep 12 17:13:49.895313 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000 Sep 12 17:13:49.895385 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000 Sep 12 17:13:49.895456 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Sep 12 17:13:49.895522 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000 Sep 12 17:13:49.895586 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000 Sep 12 17:13:49.895654 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Sep 12 17:13:49.895716 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000 Sep 12 17:13:49.895779 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000 Sep 12 17:13:49.895849 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Sep 12 17:13:49.895932 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000 Sep 12 17:13:49.895999 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000 Sep 12 17:13:49.896066 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x10000000-0x101fffff] Sep 12 17:13:49.896129 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x8000000000-0x80001fffff 64bit pref] Sep 12 17:13:49.896195 kernel: pci 
0000:00:02.1: BAR 14: assigned [mem 0x10200000-0x103fffff] Sep 12 17:13:49.898455 kernel: pci 0000:00:02.1: BAR 15: assigned [mem 0x8000200000-0x80003fffff 64bit pref] Sep 12 17:13:49.898569 kernel: pci 0000:00:02.2: BAR 14: assigned [mem 0x10400000-0x105fffff] Sep 12 17:13:49.898640 kernel: pci 0000:00:02.2: BAR 15: assigned [mem 0x8000400000-0x80005fffff 64bit pref] Sep 12 17:13:49.898708 kernel: pci 0000:00:02.3: BAR 14: assigned [mem 0x10600000-0x107fffff] Sep 12 17:13:49.898773 kernel: pci 0000:00:02.3: BAR 15: assigned [mem 0x8000600000-0x80007fffff 64bit pref] Sep 12 17:13:49.898842 kernel: pci 0000:00:02.4: BAR 14: assigned [mem 0x10800000-0x109fffff] Sep 12 17:13:49.898965 kernel: pci 0000:00:02.4: BAR 15: assigned [mem 0x8000800000-0x80009fffff 64bit pref] Sep 12 17:13:49.899044 kernel: pci 0000:00:02.5: BAR 14: assigned [mem 0x10a00000-0x10bfffff] Sep 12 17:13:49.899117 kernel: pci 0000:00:02.5: BAR 15: assigned [mem 0x8000a00000-0x8000bfffff 64bit pref] Sep 12 17:13:49.899187 kernel: pci 0000:00:02.6: BAR 14: assigned [mem 0x10c00000-0x10dfffff] Sep 12 17:13:49.899318 kernel: pci 0000:00:02.6: BAR 15: assigned [mem 0x8000c00000-0x8000dfffff 64bit pref] Sep 12 17:13:49.899398 kernel: pci 0000:00:02.7: BAR 14: assigned [mem 0x10e00000-0x10ffffff] Sep 12 17:13:49.899462 kernel: pci 0000:00:02.7: BAR 15: assigned [mem 0x8000e00000-0x8000ffffff 64bit pref] Sep 12 17:13:49.899528 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x11000000-0x111fffff] Sep 12 17:13:49.899597 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x8001000000-0x80011fffff 64bit pref] Sep 12 17:13:49.899668 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8001200000-0x8001203fff 64bit pref] Sep 12 17:13:49.899734 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x11200000-0x11200fff] Sep 12 17:13:49.899801 kernel: pci 0000:00:02.0: BAR 0: assigned [mem 0x11201000-0x11201fff] Sep 12 17:13:49.899867 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff] Sep 12 17:13:49.899951 kernel: pci 0000:00:02.1: BAR 0: assigned [mem 0x11202000-0x11202fff] Sep 12 17:13:49.900015 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff] Sep 12 17:13:49.900080 kernel: pci 0000:00:02.2: BAR 0: assigned [mem 0x11203000-0x11203fff] Sep 12 17:13:49.900148 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff] Sep 12 17:13:49.901389 kernel: pci 0000:00:02.3: BAR 0: assigned [mem 0x11204000-0x11204fff] Sep 12 17:13:49.902443 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff] Sep 12 17:13:49.902528 kernel: pci 0000:00:02.4: BAR 0: assigned [mem 0x11205000-0x11205fff] Sep 12 17:13:49.902593 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff] Sep 12 17:13:49.902660 kernel: pci 0000:00:02.5: BAR 0: assigned [mem 0x11206000-0x11206fff] Sep 12 17:13:49.902724 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff] Sep 12 17:13:49.902792 kernel: pci 0000:00:02.6: BAR 0: assigned [mem 0x11207000-0x11207fff] Sep 12 17:13:49.902862 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff] Sep 12 17:13:49.902953 kernel: pci 0000:00:02.7: BAR 0: assigned [mem 0x11208000-0x11208fff] Sep 12 17:13:49.903019 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff] Sep 12 17:13:49.903090 kernel: pci 0000:00:03.0: BAR 0: assigned [mem 0x11209000-0x11209fff] Sep 12 17:13:49.903159 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x9000-0x9fff] Sep 12 17:13:49.903268 kernel: pci 0000:00:04.0: BAR 0: assigned [io 0xa000-0xa007] Sep 12 17:13:49.903347 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 
0x10000000-0x1007ffff pref] Sep 12 17:13:49.903417 kernel: pci 0000:01:00.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] Sep 12 17:13:49.903489 kernel: pci 0000:01:00.0: BAR 1: assigned [mem 0x10080000-0x10080fff] Sep 12 17:13:49.903557 kernel: pci 0000:00:02.0: PCI bridge to [bus 01] Sep 12 17:13:49.903622 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff] Sep 12 17:13:49.903686 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff] Sep 12 17:13:49.903750 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref] Sep 12 17:13:49.903821 kernel: pci 0000:02:00.0: BAR 0: assigned [mem 0x10200000-0x10203fff 64bit] Sep 12 17:13:49.903933 kernel: pci 0000:00:02.1: PCI bridge to [bus 02] Sep 12 17:13:49.904015 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff] Sep 12 17:13:49.904080 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff] Sep 12 17:13:49.904144 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref] Sep 12 17:13:49.905675 kernel: pci 0000:03:00.0: BAR 4: assigned [mem 0x8000400000-0x8000403fff 64bit pref] Sep 12 17:13:49.905777 kernel: pci 0000:03:00.0: BAR 1: assigned [mem 0x10400000-0x10400fff] Sep 12 17:13:49.905854 kernel: pci 0000:00:02.2: PCI bridge to [bus 03] Sep 12 17:13:49.905981 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff] Sep 12 17:13:49.906051 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff] Sep 12 17:13:49.906114 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref] Sep 12 17:13:49.906186 kernel: pci 0000:04:00.0: BAR 4: assigned [mem 0x8000600000-0x8000603fff 64bit pref] Sep 12 17:13:49.907352 kernel: pci 0000:00:02.3: PCI bridge to [bus 04] Sep 12 17:13:49.907433 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff] Sep 12 17:13:49.907498 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff] Sep 12 17:13:49.907568 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref] Sep 12 17:13:49.907643 kernel: pci 0000:05:00.0: BAR 4: assigned [mem 0x8000800000-0x8000803fff 64bit pref] Sep 12 17:13:49.907711 kernel: pci 0000:00:02.4: PCI bridge to [bus 05] Sep 12 17:13:49.907776 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff] Sep 12 17:13:49.907840 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff] Sep 12 17:13:49.907926 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref] Sep 12 17:13:49.908029 kernel: pci 0000:06:00.0: BAR 4: assigned [mem 0x8000a00000-0x8000a03fff 64bit pref] Sep 12 17:13:49.908128 kernel: pci 0000:06:00.0: BAR 1: assigned [mem 0x10a00000-0x10a00fff] Sep 12 17:13:49.908253 kernel: pci 0000:00:02.5: PCI bridge to [bus 06] Sep 12 17:13:49.908326 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff] Sep 12 17:13:49.908387 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff] Sep 12 17:13:49.908450 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref] Sep 12 17:13:49.908520 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x10c00000-0x10c7ffff pref] Sep 12 17:13:49.908585 kernel: pci 0000:07:00.0: BAR 4: assigned [mem 0x8000c00000-0x8000c03fff 64bit pref] Sep 12 17:13:49.908649 kernel: pci 0000:07:00.0: BAR 1: assigned [mem 0x10c80000-0x10c80fff] Sep 12 17:13:49.908718 kernel: pci 0000:00:02.6: PCI bridge to [bus 07] Sep 12 17:13:49.908786 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff] Sep 12 17:13:49.908850 kernel: pci 0000:00:02.6: 
bridge window [mem 0x10c00000-0x10dfffff] Sep 12 17:13:49.908969 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref] Sep 12 17:13:49.909043 kernel: pci 0000:00:02.7: PCI bridge to [bus 08] Sep 12 17:13:49.909108 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff] Sep 12 17:13:49.909174 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff] Sep 12 17:13:49.909270 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref] Sep 12 17:13:49.909345 kernel: pci 0000:00:03.0: PCI bridge to [bus 09] Sep 12 17:13:49.909419 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff] Sep 12 17:13:49.909482 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff] Sep 12 17:13:49.909545 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref] Sep 12 17:13:49.909613 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Sep 12 17:13:49.909672 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Sep 12 17:13:49.909730 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Sep 12 17:13:49.909812 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff] Sep 12 17:13:49.909888 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff] Sep 12 17:13:49.909953 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref] Sep 12 17:13:49.910023 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff] Sep 12 17:13:49.910083 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff] Sep 12 17:13:49.910147 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref] Sep 12 17:13:49.910325 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff] Sep 12 17:13:49.910394 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff] Sep 12 17:13:49.910458 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref] Sep 12 17:13:49.910534 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff] Sep 12 17:13:49.910593 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff] Sep 12 17:13:49.910651 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref] Sep 12 17:13:49.910718 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff] Sep 12 17:13:49.910778 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff] Sep 12 17:13:49.910838 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref] Sep 12 17:13:49.910958 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff] Sep 12 17:13:49.911025 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff] Sep 12 17:13:49.911089 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref] Sep 12 17:13:49.911157 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff] Sep 12 17:13:49.911254 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff] Sep 12 17:13:49.911317 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref] Sep 12 17:13:49.911388 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff] Sep 12 17:13:49.911454 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff] Sep 12 17:13:49.911514 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref] Sep 12 17:13:49.911582 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff] Sep 12 17:13:49.911646 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff] Sep 12 17:13:49.911706 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref] Sep 12 17:13:49.911716 
kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Sep 12 17:13:49.911724 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Sep 12 17:13:49.911732 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Sep 12 17:13:49.911740 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Sep 12 17:13:49.911748 kernel: iommu: Default domain type: Translated Sep 12 17:13:49.911755 kernel: iommu: DMA domain TLB invalidation policy: strict mode Sep 12 17:13:49.911763 kernel: efivars: Registered efivars operations Sep 12 17:13:49.911773 kernel: vgaarb: loaded Sep 12 17:13:49.911781 kernel: clocksource: Switched to clocksource arch_sys_counter Sep 12 17:13:49.911788 kernel: VFS: Disk quotas dquot_6.6.0 Sep 12 17:13:49.911796 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 12 17:13:49.911804 kernel: pnp: PnP ACPI init Sep 12 17:13:49.911897 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Sep 12 17:13:49.911909 kernel: pnp: PnP ACPI: found 1 devices Sep 12 17:13:49.911917 kernel: NET: Registered PF_INET protocol family Sep 12 17:13:49.911928 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Sep 12 17:13:49.911936 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Sep 12 17:13:49.911944 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 12 17:13:49.911952 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Sep 12 17:13:49.911959 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Sep 12 17:13:49.911967 kernel: TCP: Hash tables configured (established 32768 bind 32768) Sep 12 17:13:49.911975 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 12 17:13:49.911982 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 12 17:13:49.911991 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 12 17:13:49.912071 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002) Sep 12 17:13:49.912082 kernel: PCI: CLS 0 bytes, default 64 Sep 12 17:13:49.912090 kernel: kvm [1]: HYP mode not available Sep 12 17:13:49.912098 kernel: Initialise system trusted keyrings Sep 12 17:13:49.912105 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Sep 12 17:13:49.912113 kernel: Key type asymmetric registered Sep 12 17:13:49.912121 kernel: Asymmetric key parser 'x509' registered Sep 12 17:13:49.912129 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Sep 12 17:13:49.912136 kernel: io scheduler mq-deadline registered Sep 12 17:13:49.912148 kernel: io scheduler kyber registered Sep 12 17:13:49.912156 kernel: io scheduler bfq registered Sep 12 17:13:49.912164 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Sep 12 17:13:49.912315 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50 Sep 12 17:13:49.912385 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50 Sep 12 17:13:49.912447 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 12 17:13:49.912516 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51 Sep 12 17:13:49.912587 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51 Sep 12 17:13:49.912680 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 12 17:13:49.912751 kernel: pcieport 
0000:00:02.2: PME: Signaling with IRQ 52 Sep 12 17:13:49.912816 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52 Sep 12 17:13:49.912893 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 12 17:13:49.912964 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53 Sep 12 17:13:49.913037 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53 Sep 12 17:13:49.913100 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 12 17:13:49.913167 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54 Sep 12 17:13:49.913359 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54 Sep 12 17:13:49.913433 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 12 17:13:49.913503 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55 Sep 12 17:13:49.913575 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55 Sep 12 17:13:49.913640 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 12 17:13:49.913709 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56 Sep 12 17:13:49.913776 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56 Sep 12 17:13:49.913843 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 12 17:13:49.913966 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57 Sep 12 17:13:49.914047 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57 Sep 12 17:13:49.914112 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 12 17:13:49.914123 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38 Sep 12 17:13:49.914190 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58 Sep 12 17:13:49.914277 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58 Sep 12 17:13:49.914345 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 12 17:13:49.914373 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Sep 12 17:13:49.914381 kernel: ACPI: button: Power Button [PWRB] Sep 12 17:13:49.914389 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Sep 12 17:13:49.914466 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002) Sep 12 17:13:49.914540 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002) Sep 12 17:13:49.914551 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 12 17:13:49.914558 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Sep 12 17:13:49.914646 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001) Sep 12 17:13:49.914657 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A Sep 12 17:13:49.914667 kernel: thunder_xcv, ver 1.0 Sep 12 17:13:49.914675 kernel: thunder_bgx, ver 1.0 Sep 12 17:13:49.914682 kernel: nicpf, ver 1.0 Sep 12 17:13:49.914690 kernel: nicvf, ver 1.0 Sep 12 17:13:49.914771 kernel: rtc-efi rtc-efi.0: registered as rtc0 Sep 12 17:13:49.914832 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-12T17:13:49 UTC (1757697229) Sep 12 17:13:49.914842 kernel: hid: raw HID events driver (C) 
Jiri Kosina Sep 12 17:13:49.914850 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Sep 12 17:13:49.914860 kernel: watchdog: Delayed init of the lockup detector failed: -19 Sep 12 17:13:49.914868 kernel: watchdog: Hard watchdog permanently disabled Sep 12 17:13:49.914887 kernel: NET: Registered PF_INET6 protocol family Sep 12 17:13:49.914896 kernel: Segment Routing with IPv6 Sep 12 17:13:49.914904 kernel: In-situ OAM (IOAM) with IPv6 Sep 12 17:13:49.914911 kernel: NET: Registered PF_PACKET protocol family Sep 12 17:13:49.914919 kernel: Key type dns_resolver registered Sep 12 17:13:49.914927 kernel: registered taskstats version 1 Sep 12 17:13:49.914935 kernel: Loading compiled-in X.509 certificates Sep 12 17:13:49.914946 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.106-flatcar: d6f11852774cea54e4c26b4ad4f8effa8d89e628' Sep 12 17:13:49.914953 kernel: Key type .fscrypt registered Sep 12 17:13:49.914961 kernel: Key type fscrypt-provisioning registered Sep 12 17:13:49.914969 kernel: ima: No TPM chip found, activating TPM-bypass! Sep 12 17:13:49.914977 kernel: ima: Allocated hash algorithm: sha1 Sep 12 17:13:49.914984 kernel: ima: No architecture policies found Sep 12 17:13:49.914992 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Sep 12 17:13:49.915000 kernel: clk: Disabling unused clocks Sep 12 17:13:49.915007 kernel: Freeing unused kernel memory: 38400K Sep 12 17:13:49.915017 kernel: Run /init as init process Sep 12 17:13:49.915028 kernel: with arguments: Sep 12 17:13:49.915037 kernel: /init Sep 12 17:13:49.915045 kernel: with environment: Sep 12 17:13:49.915053 kernel: HOME=/ Sep 12 17:13:49.915061 kernel: TERM=linux Sep 12 17:13:49.915068 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 12 17:13:49.915078 systemd[1]: Successfully made /usr/ read-only. Sep 12 17:13:49.915090 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 12 17:13:49.915101 systemd[1]: Detected virtualization kvm. Sep 12 17:13:49.915109 systemd[1]: Detected architecture arm64. Sep 12 17:13:49.915117 systemd[1]: Running in initrd. Sep 12 17:13:49.915125 systemd[1]: No hostname configured, using default hostname. Sep 12 17:13:49.915133 systemd[1]: Hostname set to . Sep 12 17:13:49.915142 systemd[1]: Initializing machine ID from VM UUID. Sep 12 17:13:49.915150 systemd[1]: Queued start job for default target initrd.target. Sep 12 17:13:49.915160 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 12 17:13:49.915169 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 12 17:13:49.915177 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 12 17:13:49.915186 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 12 17:13:49.915194 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 12 17:13:49.915237 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... 
Sep 12 17:13:49.915251 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 12 17:13:49.915263 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 12 17:13:49.915271 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 12 17:13:49.915279 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 12 17:13:49.915287 systemd[1]: Reached target paths.target - Path Units. Sep 12 17:13:49.915295 systemd[1]: Reached target slices.target - Slice Units. Sep 12 17:13:49.915304 systemd[1]: Reached target swap.target - Swaps. Sep 12 17:13:49.915312 systemd[1]: Reached target timers.target - Timer Units. Sep 12 17:13:49.915321 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 12 17:13:49.915331 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 12 17:13:49.915339 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 12 17:13:49.915347 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Sep 12 17:13:49.915356 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 12 17:13:49.915364 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 12 17:13:49.915372 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 12 17:13:49.915380 systemd[1]: Reached target sockets.target - Socket Units. Sep 12 17:13:49.915389 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 12 17:13:49.915397 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 12 17:13:49.915407 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 12 17:13:49.915415 systemd[1]: Starting systemd-fsck-usr.service... Sep 12 17:13:49.915424 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 12 17:13:49.915432 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 12 17:13:49.915440 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 17:13:49.915448 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 12 17:13:49.915456 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 12 17:13:49.915498 systemd-journald[237]: Collecting audit messages is disabled. Sep 12 17:13:49.915522 systemd[1]: Finished systemd-fsck-usr.service. Sep 12 17:13:49.915531 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 12 17:13:49.915539 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:13:49.915547 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 12 17:13:49.915555 kernel: Bridge firewalling registered Sep 12 17:13:49.915563 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 12 17:13:49.915571 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 12 17:13:49.915580 systemd-journald[237]: Journal started Sep 12 17:13:49.915601 systemd-journald[237]: Runtime Journal (/run/log/journal/b55e32d20dc343b4adb9d485e5a93a5e) is 8M, max 76.6M, 68.6M free. 
Sep 12 17:13:49.887134 systemd-modules-load[238]: Inserted module 'overlay' Sep 12 17:13:49.908481 systemd-modules-load[238]: Inserted module 'br_netfilter' Sep 12 17:13:49.923025 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 12 17:13:49.923049 systemd[1]: Started systemd-journald.service - Journal Service. Sep 12 17:13:49.932856 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 12 17:13:49.938069 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 12 17:13:49.942920 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 12 17:13:49.949070 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 12 17:13:49.950633 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 17:13:49.958496 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 12 17:13:49.961632 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 12 17:13:49.965563 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 17:13:49.976453 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 12 17:13:49.978851 dracut-cmdline[272]: dracut-dracut-053 Sep 12 17:13:49.981515 dracut-cmdline[272]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=82b413d7549dba6b35b1edf421a17f61aa80704059d10fedd611b1eff5298abd Sep 12 17:13:50.017347 systemd-resolved[276]: Positive Trust Anchors: Sep 12 17:13:50.017364 systemd-resolved[276]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 12 17:13:50.017397 systemd-resolved[276]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 12 17:13:50.029410 systemd-resolved[276]: Defaulting to hostname 'linux'. Sep 12 17:13:50.030619 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 12 17:13:50.031391 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 12 17:13:50.073300 kernel: SCSI subsystem initialized Sep 12 17:13:50.078264 kernel: Loading iSCSI transport class v2.0-870. Sep 12 17:13:50.086391 kernel: iscsi: registered transport (tcp) Sep 12 17:13:50.100444 kernel: iscsi: registered transport (qla4xxx) Sep 12 17:13:50.100525 kernel: QLogic iSCSI HBA Driver Sep 12 17:13:50.152624 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 12 17:13:50.158492 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 12 17:13:50.180280 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. 
Duplicate IMA measurements will not be recorded in the IMA log. Sep 12 17:13:50.180383 kernel: device-mapper: uevent: version 1.0.3 Sep 12 17:13:50.180406 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Sep 12 17:13:50.232285 kernel: raid6: neonx8 gen() 15716 MB/s Sep 12 17:13:50.249263 kernel: raid6: neonx4 gen() 15738 MB/s Sep 12 17:13:50.266282 kernel: raid6: neonx2 gen() 13174 MB/s Sep 12 17:13:50.283283 kernel: raid6: neonx1 gen() 10463 MB/s Sep 12 17:13:50.300267 kernel: raid6: int64x8 gen() 6750 MB/s Sep 12 17:13:50.317242 kernel: raid6: int64x4 gen() 7313 MB/s Sep 12 17:13:50.334257 kernel: raid6: int64x2 gen() 6080 MB/s Sep 12 17:13:50.351263 kernel: raid6: int64x1 gen() 5031 MB/s Sep 12 17:13:50.351342 kernel: raid6: using algorithm neonx4 gen() 15738 MB/s Sep 12 17:13:50.368266 kernel: raid6: .... xor() 12374 MB/s, rmw enabled Sep 12 17:13:50.368344 kernel: raid6: using neon recovery algorithm Sep 12 17:13:50.373528 kernel: xor: measuring software checksum speed Sep 12 17:13:50.373592 kernel: 8regs : 21647 MB/sec Sep 12 17:13:50.373623 kernel: 32regs : 21710 MB/sec Sep 12 17:13:50.373641 kernel: arm64_neon : 27141 MB/sec Sep 12 17:13:50.374242 kernel: xor: using function: arm64_neon (27141 MB/sec) Sep 12 17:13:50.425291 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 12 17:13:50.439593 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 12 17:13:50.446559 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 17:13:50.463594 systemd-udevd[458]: Using default interface naming scheme 'v255'. Sep 12 17:13:50.467688 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 17:13:50.476522 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 12 17:13:50.495537 dracut-pre-trigger[467]: rd.md=0: removing MD RAID activation Sep 12 17:13:50.533481 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 12 17:13:50.542504 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 12 17:13:50.594966 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 17:13:50.602726 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 12 17:13:50.632086 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 12 17:13:50.634704 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 12 17:13:50.635692 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 12 17:13:50.637988 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 12 17:13:50.646573 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 12 17:13:50.659602 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 12 17:13:50.715016 kernel: scsi host0: Virtio SCSI HBA Sep 12 17:13:50.720355 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5 Sep 12 17:13:50.720467 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Sep 12 17:13:50.730840 kernel: ACPI: bus type USB registered Sep 12 17:13:50.730916 kernel: usbcore: registered new interface driver usbfs Sep 12 17:13:50.732625 kernel: usbcore: registered new interface driver hub Sep 12 17:13:50.735237 kernel: usbcore: registered new device driver usb Sep 12 17:13:50.746020 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Sep 12 17:13:50.746158 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 17:13:50.748739 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 12 17:13:50.749438 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 12 17:13:50.749601 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:13:50.750381 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 17:13:50.758492 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 17:13:50.766805 kernel: sr 0:0:0:0: Power-on or device reset occurred Sep 12 17:13:50.767033 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray Sep 12 17:13:50.767148 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Sep 12 17:13:50.770257 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0 Sep 12 17:13:50.786464 kernel: sd 0:0:0:1: Power-on or device reset occurred Sep 12 17:13:50.786666 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB) Sep 12 17:13:50.786751 kernel: sd 0:0:0:1: [sda] Write Protect is off Sep 12 17:13:50.786839 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08 Sep 12 17:13:50.786977 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Sep 12 17:13:50.787890 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Sep 12 17:13:50.788054 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:13:50.790229 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Sep 12 17:13:50.791844 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Sep 12 17:13:50.793690 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Sep 12 17:13:50.793863 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Sep 12 17:13:50.793995 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 12 17:13:50.794835 kernel: GPT:17805311 != 80003071 Sep 12 17:13:50.794876 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Sep 12 17:13:50.795018 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 12 17:13:50.796263 kernel: hub 1-0:1.0: USB hub found Sep 12 17:13:50.798418 kernel: hub 1-0:1.0: 4 ports detected Sep 12 17:13:50.798740 kernel: GPT:17805311 != 80003071 Sep 12 17:13:50.798765 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 12 17:13:50.798788 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 12 17:13:50.798809 kernel: sd 0:0:0:1: [sda] Attached SCSI disk Sep 12 17:13:50.803507 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 12 17:13:50.813434 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Sep 12 17:13:50.813658 kernel: hub 2-0:1.0: USB hub found Sep 12 17:13:50.814484 kernel: hub 2-0:1.0: 4 ports detected Sep 12 17:13:50.827798 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 17:13:50.893256 kernel: BTRFS: device fsid 402ea12e-53e0-48e3-8f03-9fb2de6b0089 devid 1 transid 36 /dev/sda3 scanned by (udev-worker) (515) Sep 12 17:13:50.897298 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by (udev-worker) (505) Sep 12 17:13:50.904493 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. 
Sep 12 17:13:50.912769 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Sep 12 17:13:50.931523 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Sep 12 17:13:50.932258 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Sep 12 17:13:50.941398 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Sep 12 17:13:50.954559 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 12 17:13:51.052284 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Sep 12 17:13:51.103455 disk-uuid[576]: Primary Header is updated. Sep 12 17:13:51.103455 disk-uuid[576]: Secondary Entries is updated. Sep 12 17:13:51.103455 disk-uuid[576]: Secondary Header is updated. Sep 12 17:13:51.113234 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 12 17:13:51.189245 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1 Sep 12 17:13:51.189334 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Sep 12 17:13:51.191249 kernel: usbcore: registered new interface driver usbhid Sep 12 17:13:51.192270 kernel: usbhid: USB HID core driver Sep 12 17:13:51.298228 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd Sep 12 17:13:51.426256 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2 Sep 12 17:13:51.480262 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0 Sep 12 17:13:52.170290 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 12 17:13:52.172169 disk-uuid[577]: The operation has completed successfully. Sep 12 17:13:52.244258 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 12 17:13:52.244360 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 12 17:13:52.272527 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 12 17:13:52.291002 sh[592]: Success Sep 12 17:13:52.309267 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Sep 12 17:13:52.381848 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 12 17:13:52.393722 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 12 17:13:52.395249 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 12 17:13:52.421244 kernel: BTRFS info (device dm-0): first mount of filesystem 402ea12e-53e0-48e3-8f03-9fb2de6b0089 Sep 12 17:13:52.421335 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Sep 12 17:13:52.421360 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Sep 12 17:13:52.421382 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 12 17:13:52.421804 kernel: BTRFS info (device dm-0): using free space tree Sep 12 17:13:52.430262 kernel: BTRFS info (device dm-0): enabling ssd optimizations Sep 12 17:13:52.433013 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 12 17:13:52.435929 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. 
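[Annotation] The "Found device dev-disk-by\x2dlabel-..." and "by\x2dpartlabel" units above correspond to filesystem labels (ROOT, EFI-SYSTEM, OEM) and GPT partition labels (USR-A) on sda. A minimal sketch, assuming util-linux's lsblk is available on the booted machine, of listing the same metadata systemd keys these device units on:

    # Sketch: show filesystem label, GPT partition label and PARTUUID per
    # partition, the values behind the by-label / by-partlabel device units.
    import json
    import subprocess

    def partition_labels(disk="/dev/sda"):
        out = subprocess.run(
            ["lsblk", "-J", "-o", "NAME,LABEL,PARTLABEL,PARTUUID", disk],
            capture_output=True, text=True, check=True,
        ).stdout
        tree = json.loads(out)
        parts = []
        for dev in tree["blockdevices"]:
            # lsblk nests partitions under the parent disk in "children".
            parts.extend(dev.get("children", []))
        return parts

    if __name__ == "__main__":
        for part in partition_labels():
            print(part["name"], part.get("label"), part.get("partlabel"))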
Sep 12 17:13:52.450578 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 12 17:13:52.453422 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 12 17:13:52.478881 kernel: BTRFS info (device sda6): first mount of filesystem 903d50e4-a739-43b7-a8ad-24da5524f9bc Sep 12 17:13:52.478945 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Sep 12 17:13:52.478957 kernel: BTRFS info (device sda6): using free space tree Sep 12 17:13:52.486244 kernel: BTRFS info (device sda6): enabling ssd optimizations Sep 12 17:13:52.486322 kernel: BTRFS info (device sda6): auto enabling async discard Sep 12 17:13:52.493228 kernel: BTRFS info (device sda6): last unmount of filesystem 903d50e4-a739-43b7-a8ad-24da5524f9bc Sep 12 17:13:52.560037 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 12 17:13:52.568513 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 12 17:13:52.586658 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 12 17:13:52.595509 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 12 17:13:52.602742 systemd-networkd[768]: lo: Link UP Sep 12 17:13:52.602758 systemd-networkd[768]: lo: Gained carrier Sep 12 17:13:52.604693 systemd-networkd[768]: Enumeration completed Sep 12 17:13:52.604915 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 12 17:13:52.606003 systemd-networkd[768]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 17:13:52.606007 systemd-networkd[768]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 12 17:13:52.606779 systemd-networkd[768]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 17:13:52.606783 systemd-networkd[768]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 12 17:13:52.607700 systemd-networkd[768]: eth0: Link UP Sep 12 17:13:52.607704 systemd-networkd[768]: eth0: Gained carrier Sep 12 17:13:52.607713 systemd-networkd[768]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 17:13:52.608311 systemd[1]: Reached target network.target - Network. Sep 12 17:13:52.615835 systemd-networkd[768]: eth1: Link UP Sep 12 17:13:52.615840 systemd-networkd[768]: eth1: Gained carrier Sep 12 17:13:52.615865 systemd-networkd[768]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 17:13:52.658363 systemd-networkd[768]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1 Sep 12 17:13:52.676345 systemd-networkd[768]: eth0: DHCPv4 address 168.119.179.98/32, gateway 172.31.1.1 acquired from 172.31.1.1 Sep 12 17:13:52.708433 ignition[771]: Ignition 2.20.0 Sep 12 17:13:52.708445 ignition[771]: Stage: fetch-offline Sep 12 17:13:52.708496 ignition[771]: no configs at "/usr/lib/ignition/base.d" Sep 12 17:13:52.711713 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Sep 12 17:13:52.708505 ignition[771]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Sep 12 17:13:52.708678 ignition[771]: parsed url from cmdline: "" Sep 12 17:13:52.708682 ignition[771]: no config URL provided Sep 12 17:13:52.708687 ignition[771]: reading system config file "/usr/lib/ignition/user.ign" Sep 12 17:13:52.708694 ignition[771]: no config at "/usr/lib/ignition/user.ign" Sep 12 17:13:52.708700 ignition[771]: failed to fetch config: resource requires networking Sep 12 17:13:52.708972 ignition[771]: Ignition finished successfully Sep 12 17:13:52.719489 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Sep 12 17:13:52.734802 ignition[780]: Ignition 2.20.0 Sep 12 17:13:52.734816 ignition[780]: Stage: fetch Sep 12 17:13:52.735063 ignition[780]: no configs at "/usr/lib/ignition/base.d" Sep 12 17:13:52.735077 ignition[780]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Sep 12 17:13:52.735194 ignition[780]: parsed url from cmdline: "" Sep 12 17:13:52.735198 ignition[780]: no config URL provided Sep 12 17:13:52.735240 ignition[780]: reading system config file "/usr/lib/ignition/user.ign" Sep 12 17:13:52.735262 ignition[780]: no config at "/usr/lib/ignition/user.ign" Sep 12 17:13:52.735368 ignition[780]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1 Sep 12 17:13:52.739943 ignition[780]: GET result: OK Sep 12 17:13:52.740020 ignition[780]: parsing config with SHA512: 275e1921e4a5a2d20262d93c07cc32a371169c33323ee2b38d03c1a78fb70a88afece1b8cb01b49d01074aa60a0a9d340d8962ddedd8de86f287a53f3951666f Sep 12 17:13:52.746053 unknown[780]: fetched base config from "system" Sep 12 17:13:52.746064 unknown[780]: fetched base config from "system" Sep 12 17:13:52.746488 ignition[780]: fetch: fetch complete Sep 12 17:13:52.746069 unknown[780]: fetched user config from "hetzner" Sep 12 17:13:52.746493 ignition[780]: fetch: fetch passed Sep 12 17:13:52.753306 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Sep 12 17:13:52.746540 ignition[780]: Ignition finished successfully Sep 12 17:13:52.767608 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 12 17:13:52.787452 ignition[787]: Ignition 2.20.0 Sep 12 17:13:52.788197 ignition[787]: Stage: kargs Sep 12 17:13:52.788447 ignition[787]: no configs at "/usr/lib/ignition/base.d" Sep 12 17:13:52.788460 ignition[787]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Sep 12 17:13:52.789574 ignition[787]: kargs: kargs passed Sep 12 17:13:52.792056 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 12 17:13:52.789641 ignition[787]: Ignition finished successfully Sep 12 17:13:52.798489 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Sep 12 17:13:52.811835 ignition[794]: Ignition 2.20.0 Sep 12 17:13:52.811867 ignition[794]: Stage: disks Sep 12 17:13:52.812087 ignition[794]: no configs at "/usr/lib/ignition/base.d" Sep 12 17:13:52.812098 ignition[794]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Sep 12 17:13:52.815075 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 12 17:13:52.813174 ignition[794]: disks: disks passed Sep 12 17:13:52.813265 ignition[794]: Ignition finished successfully Sep 12 17:13:52.817295 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 12 17:13:52.818296 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 12 17:13:52.819635 systemd[1]: Reached target local-fs.target - Local File Systems. 
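[Annotation] The fetch stage above shows the flow Ignition follows on this platform: with no config on the kernel command line or in /usr/lib/ignition, it pulls user data from Hetzner's link-local metadata service and logs the SHA512 of the payload before parsing it. A minimal sketch of just that HTTP-and-hash step, using the endpoint taken from the log; it does not validate or apply the config.

    # Sketch: fetch Hetzner user data and print the SHA-512 digest, mirroring
    # the "GET .../userdata" and "parsing config with SHA512: ..." log lines.
    import hashlib
    import urllib.request

    USERDATA_URL = "http://169.254.169.254/hetzner/v1/userdata"  # from the log

    def fetch_userdata(url=USERDATA_URL, timeout=5.0):
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.read()

    if __name__ == "__main__":
        data = fetch_userdata()
        print("bytes:", len(data))
        print("sha512:", hashlib.sha512(data).hexdigest())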
Sep 12 17:13:52.820703 systemd[1]: Reached target sysinit.target - System Initialization. Sep 12 17:13:52.821769 systemd[1]: Reached target basic.target - Basic System. Sep 12 17:13:52.830561 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 12 17:13:52.851298 systemd-fsck[803]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Sep 12 17:13:52.921337 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 12 17:13:52.932458 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 12 17:13:53.062877 kernel: EXT4-fs (sda9): mounted filesystem 397cbf4d-cf5b-4786-906a-df7c3e18edd9 r/w with ordered data mode. Quota mode: none. Sep 12 17:13:53.064486 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 12 17:13:53.066666 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 12 17:13:53.074395 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 12 17:13:53.078414 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 12 17:13:53.083934 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Sep 12 17:13:53.086448 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 12 17:13:53.086501 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 12 17:13:53.095884 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 12 17:13:53.103010 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 12 17:13:53.113264 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (811) Sep 12 17:13:53.115680 kernel: BTRFS info (device sda6): first mount of filesystem 903d50e4-a739-43b7-a8ad-24da5524f9bc Sep 12 17:13:53.115739 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Sep 12 17:13:53.115752 kernel: BTRFS info (device sda6): using free space tree Sep 12 17:13:53.124469 kernel: BTRFS info (device sda6): enabling ssd optimizations Sep 12 17:13:53.124567 kernel: BTRFS info (device sda6): auto enabling async discard Sep 12 17:13:53.128807 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 12 17:13:53.176006 coreos-metadata[813]: Sep 12 17:13:53.175 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1 Sep 12 17:13:53.178655 coreos-metadata[813]: Sep 12 17:13:53.178 INFO Fetch successful Sep 12 17:13:53.179939 coreos-metadata[813]: Sep 12 17:13:53.179 INFO wrote hostname ci-4230-2-3-6-9297726d8a to /sysroot/etc/hostname Sep 12 17:13:53.182398 initrd-setup-root[840]: cut: /sysroot/etc/passwd: No such file or directory Sep 12 17:13:53.186737 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Sep 12 17:13:53.193802 initrd-setup-root[848]: cut: /sysroot/etc/group: No such file or directory Sep 12 17:13:53.201907 initrd-setup-root[855]: cut: /sysroot/etc/shadow: No such file or directory Sep 12 17:13:53.207176 initrd-setup-root[862]: cut: /sysroot/etc/gshadow: No such file or directory Sep 12 17:13:53.328318 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 12 17:13:53.335389 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 12 17:13:53.341393 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... 
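[Annotation] The flatcar-metadata-hostname.service entries above fetch the hostname from the metadata service and write it into /sysroot/etc/hostname before the root filesystem is populated. A minimal sketch of the same two steps; the output path is a parameter here (an assumption for illustration), whereas the real service writes /sysroot/etc/hostname.

    # Sketch: fetch the instance hostname from the Hetzner metadata endpoint
    # (URL taken from the log) and write it to an /etc/hostname-style file.
    import urllib.request

    HOSTNAME_URL = "http://169.254.169.254/hetzner/v1/metadata/hostname"

    def write_hostname(path, url=HOSTNAME_URL):
        with urllib.request.urlopen(url, timeout=5.0) as resp:
            hostname = resp.read().decode().strip()
        with open(path, "w") as f:
            f.write(hostname + "\n")
        return hostname

    if __name__ == "__main__":
        # Hypothetical target path for illustration only.
        print(write_hostname("/tmp/hostname.example"))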
Sep 12 17:13:53.351355 kernel: BTRFS info (device sda6): last unmount of filesystem 903d50e4-a739-43b7-a8ad-24da5524f9bc Sep 12 17:13:53.378943 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 12 17:13:53.383375 ignition[930]: INFO : Ignition 2.20.0 Sep 12 17:13:53.383375 ignition[930]: INFO : Stage: mount Sep 12 17:13:53.383375 ignition[930]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 17:13:53.383375 ignition[930]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Sep 12 17:13:53.383375 ignition[930]: INFO : mount: mount passed Sep 12 17:13:53.383375 ignition[930]: INFO : Ignition finished successfully Sep 12 17:13:53.384768 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 12 17:13:53.393408 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 12 17:13:53.421158 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 12 17:13:53.433459 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 12 17:13:53.450255 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (941) Sep 12 17:13:53.452800 kernel: BTRFS info (device sda6): first mount of filesystem 903d50e4-a739-43b7-a8ad-24da5524f9bc Sep 12 17:13:53.452890 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Sep 12 17:13:53.452933 kernel: BTRFS info (device sda6): using free space tree Sep 12 17:13:53.456515 kernel: BTRFS info (device sda6): enabling ssd optimizations Sep 12 17:13:53.456609 kernel: BTRFS info (device sda6): auto enabling async discard Sep 12 17:13:53.460546 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 12 17:13:53.483544 ignition[958]: INFO : Ignition 2.20.0 Sep 12 17:13:53.483544 ignition[958]: INFO : Stage: files Sep 12 17:13:53.484932 ignition[958]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 17:13:53.484932 ignition[958]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Sep 12 17:13:53.484932 ignition[958]: DEBUG : files: compiled without relabeling support, skipping Sep 12 17:13:53.488054 ignition[958]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 12 17:13:53.488054 ignition[958]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 12 17:13:53.491269 ignition[958]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 12 17:13:53.492612 ignition[958]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 12 17:13:53.494123 unknown[958]: wrote ssh authorized keys file for user: core Sep 12 17:13:53.495135 ignition[958]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 12 17:13:53.496128 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Sep 12 17:13:53.497331 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Sep 12 17:13:53.597093 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 12 17:13:53.624401 systemd-networkd[768]: eth1: Gained IPv6LL Sep 12 17:13:53.944675 systemd-networkd[768]: eth0: Gained IPv6LL Sep 12 17:13:54.054655 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Sep 12 17:13:54.054655 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: 
op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 12 17:13:54.054655 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Sep 12 17:13:54.277475 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Sep 12 17:13:54.507144 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 12 17:13:54.507144 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Sep 12 17:13:54.507144 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Sep 12 17:13:54.507144 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 12 17:13:54.507144 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 12 17:13:54.507144 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 12 17:13:54.507144 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 12 17:13:54.507144 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 12 17:13:54.521241 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 12 17:13:54.521241 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 12 17:13:54.521241 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 12 17:13:54.521241 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Sep 12 17:13:54.521241 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Sep 12 17:13:54.521241 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Sep 12 17:13:54.521241 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1 Sep 12 17:13:54.729497 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Sep 12 17:13:55.464168 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Sep 12 17:13:55.464168 ignition[958]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Sep 12 17:13:55.467286 ignition[958]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 12 17:13:55.467286 ignition[958]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" 
at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 12 17:13:55.467286 ignition[958]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Sep 12 17:13:55.467286 ignition[958]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Sep 12 17:13:55.467286 ignition[958]: INFO : files: op(e): op(f): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Sep 12 17:13:55.467286 ignition[958]: INFO : files: op(e): op(f): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Sep 12 17:13:55.467286 ignition[958]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Sep 12 17:13:55.467286 ignition[958]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Sep 12 17:13:55.467286 ignition[958]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Sep 12 17:13:55.467286 ignition[958]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 12 17:13:55.478265 ignition[958]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 12 17:13:55.478265 ignition[958]: INFO : files: files passed Sep 12 17:13:55.478265 ignition[958]: INFO : Ignition finished successfully Sep 12 17:13:55.470612 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 12 17:13:55.478156 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 12 17:13:55.479726 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 12 17:13:55.490039 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 12 17:13:55.490163 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 12 17:13:55.498348 initrd-setup-root-after-ignition[987]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 12 17:13:55.498348 initrd-setup-root-after-ignition[987]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 12 17:13:55.502265 initrd-setup-root-after-ignition[991]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 12 17:13:55.505781 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 12 17:13:55.507275 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 12 17:13:55.510432 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 12 17:13:55.554651 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 12 17:13:55.554844 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 12 17:13:55.557923 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 12 17:13:55.560008 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 12 17:13:55.562030 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 12 17:13:55.571493 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 12 17:13:55.590236 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. 
Sep 12 17:13:55.595498 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 12 17:13:55.609626 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 12 17:13:55.610441 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 12 17:13:55.611707 systemd[1]: Stopped target timers.target - Timer Units. Sep 12 17:13:55.612888 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 12 17:13:55.613042 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 12 17:13:55.614554 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 12 17:13:55.615942 systemd[1]: Stopped target basic.target - Basic System. Sep 12 17:13:55.617160 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 12 17:13:55.618177 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 12 17:13:55.619396 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 12 17:13:55.620611 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 12 17:13:55.621711 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 12 17:13:55.622946 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 12 17:13:55.624120 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 12 17:13:55.625184 systemd[1]: Stopped target swap.target - Swaps. Sep 12 17:13:55.626141 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 12 17:13:55.626305 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 12 17:13:55.627654 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 12 17:13:55.628342 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 12 17:13:55.629485 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 12 17:13:55.629581 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 12 17:13:55.630832 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 12 17:13:55.630971 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 12 17:13:55.632502 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 12 17:13:55.632623 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 12 17:13:55.634035 systemd[1]: ignition-files.service: Deactivated successfully. Sep 12 17:13:55.634134 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 12 17:13:55.635043 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Sep 12 17:13:55.635144 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Sep 12 17:13:55.642487 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 12 17:13:55.647231 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 12 17:13:55.647771 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 12 17:13:55.647925 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 17:13:55.651572 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 12 17:13:55.651718 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 12 17:13:55.663098 systemd[1]: initrd-cleanup.service: Deactivated successfully. 
Sep 12 17:13:55.663250 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 12 17:13:55.673267 ignition[1011]: INFO : Ignition 2.20.0 Sep 12 17:13:55.673267 ignition[1011]: INFO : Stage: umount Sep 12 17:13:55.675486 ignition[1011]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 17:13:55.675486 ignition[1011]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Sep 12 17:13:55.675486 ignition[1011]: INFO : umount: umount passed Sep 12 17:13:55.675486 ignition[1011]: INFO : Ignition finished successfully Sep 12 17:13:55.674622 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 12 17:13:55.677841 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 12 17:13:55.677987 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 12 17:13:55.680636 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 12 17:13:55.680700 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 12 17:13:55.684114 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 12 17:13:55.684251 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 12 17:13:55.685871 systemd[1]: ignition-fetch.service: Deactivated successfully. Sep 12 17:13:55.685940 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Sep 12 17:13:55.686840 systemd[1]: Stopped target network.target - Network. Sep 12 17:13:55.687730 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 12 17:13:55.687783 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 12 17:13:55.688960 systemd[1]: Stopped target paths.target - Path Units. Sep 12 17:13:55.689777 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 12 17:13:55.695318 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 12 17:13:55.696701 systemd[1]: Stopped target slices.target - Slice Units. Sep 12 17:13:55.699188 systemd[1]: Stopped target sockets.target - Socket Units. Sep 12 17:13:55.700296 systemd[1]: iscsid.socket: Deactivated successfully. Sep 12 17:13:55.700354 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 12 17:13:55.701556 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 12 17:13:55.701598 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 12 17:13:55.702609 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 12 17:13:55.702674 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 12 17:13:55.703792 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 12 17:13:55.704084 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 12 17:13:55.705014 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 12 17:13:55.708633 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 12 17:13:55.710245 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 12 17:13:55.711912 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 12 17:13:55.714176 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 12 17:13:55.714406 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 12 17:13:55.718523 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Sep 12 17:13:55.720414 systemd[1]: initrd-setup-root.service: Deactivated successfully. 
Sep 12 17:13:55.720517 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 12 17:13:55.721875 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 12 17:13:55.721926 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 17:13:55.725449 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Sep 12 17:13:55.725762 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 12 17:13:55.725904 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 12 17:13:55.729091 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Sep 12 17:13:55.729901 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 12 17:13:55.729970 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 12 17:13:55.736504 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 12 17:13:55.737070 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 12 17:13:55.737162 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 12 17:13:55.739785 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 12 17:13:55.739877 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 12 17:13:55.742396 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 12 17:13:55.742464 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 12 17:13:55.743605 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 17:13:55.748337 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 12 17:13:55.759642 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 12 17:13:55.759909 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 12 17:13:55.764092 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 12 17:13:55.764297 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 17:13:55.766243 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 12 17:13:55.766305 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 12 17:13:55.768289 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 12 17:13:55.768331 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 12 17:13:55.769926 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 12 17:13:55.769980 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 12 17:13:55.771444 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 12 17:13:55.771489 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 12 17:13:55.772901 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 12 17:13:55.772948 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 17:13:55.784445 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 12 17:13:55.785406 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 12 17:13:55.785505 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 12 17:13:55.790047 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. 
Sep 12 17:13:55.790115 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 12 17:13:55.792024 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 12 17:13:55.792075 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 12 17:13:55.792880 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 12 17:13:55.792922 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:13:55.796541 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 12 17:13:55.796679 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 12 17:13:55.797980 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 12 17:13:55.803550 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 12 17:13:55.816022 systemd[1]: Switching root. Sep 12 17:13:55.852028 systemd-journald[237]: Journal stopped Sep 12 17:13:56.888672 systemd-journald[237]: Received SIGTERM from PID 1 (systemd). Sep 12 17:13:56.888757 kernel: SELinux: policy capability network_peer_controls=1 Sep 12 17:13:56.888770 kernel: SELinux: policy capability open_perms=1 Sep 12 17:13:56.888791 kernel: SELinux: policy capability extended_socket_class=1 Sep 12 17:13:56.888805 kernel: SELinux: policy capability always_check_network=0 Sep 12 17:13:56.888866 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 12 17:13:56.888878 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 12 17:13:56.888888 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 12 17:13:56.888898 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 12 17:13:56.888907 kernel: audit: type=1403 audit(1757697236.012:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 12 17:13:56.888926 systemd[1]: Successfully loaded SELinux policy in 39.595ms. Sep 12 17:13:56.888943 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 12.023ms. Sep 12 17:13:56.888955 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 12 17:13:56.888971 systemd[1]: Detected virtualization kvm. Sep 12 17:13:56.888982 systemd[1]: Detected architecture arm64. Sep 12 17:13:56.888992 systemd[1]: Detected first boot. Sep 12 17:13:56.889002 systemd[1]: Hostname set to . Sep 12 17:13:56.889011 systemd[1]: Initializing machine ID from VM UUID. Sep 12 17:13:56.889029 zram_generator::config[1055]: No configuration found. Sep 12 17:13:56.889041 kernel: NET: Registered PF_VSOCK protocol family Sep 12 17:13:56.889051 systemd[1]: Populated /etc with preset unit settings. Sep 12 17:13:56.889062 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Sep 12 17:13:56.889072 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 12 17:13:56.889083 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 12 17:13:56.889093 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 12 17:13:56.889103 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 12 17:13:56.889114 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. 
Sep 12 17:13:56.889125 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 12 17:13:56.889135 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 12 17:13:56.889145 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 12 17:13:56.889155 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 12 17:13:56.889165 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 12 17:13:56.889176 systemd[1]: Created slice user.slice - User and Session Slice. Sep 12 17:13:56.889186 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 12 17:13:56.889196 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 12 17:13:56.890172 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 12 17:13:56.890196 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 12 17:13:56.890231 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 12 17:13:56.890243 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 12 17:13:56.890255 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Sep 12 17:13:56.890265 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 12 17:13:56.890281 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 12 17:13:56.890294 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 12 17:13:56.890305 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 12 17:13:56.890316 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 12 17:13:56.890326 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 12 17:13:56.890343 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 12 17:13:56.890354 systemd[1]: Reached target slices.target - Slice Units. Sep 12 17:13:56.890364 systemd[1]: Reached target swap.target - Swaps. Sep 12 17:13:56.890375 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 12 17:13:56.890387 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 12 17:13:56.890397 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Sep 12 17:13:56.890408 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 12 17:13:56.890418 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 12 17:13:56.890428 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 12 17:13:56.890440 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 12 17:13:56.890450 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 12 17:13:56.890461 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 12 17:13:56.890476 systemd[1]: Mounting media.mount - External Media Directory... Sep 12 17:13:56.890649 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 12 17:13:56.890668 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 12 17:13:56.890679 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... 
Sep 12 17:13:56.890692 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 12 17:13:56.890702 systemd[1]: Reached target machines.target - Containers. Sep 12 17:13:56.890712 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 12 17:13:56.890726 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 17:13:56.890737 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 12 17:13:56.890748 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 12 17:13:56.890759 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 17:13:56.890769 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 12 17:13:56.890780 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 17:13:56.890790 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 12 17:13:56.890800 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 17:13:56.890826 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 12 17:13:56.890840 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 12 17:13:56.890854 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 12 17:13:56.891255 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 12 17:13:56.891276 systemd[1]: Stopped systemd-fsck-usr.service. Sep 12 17:13:56.891289 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 12 17:13:56.891300 kernel: fuse: init (API version 7.39) Sep 12 17:13:56.891311 kernel: ACPI: bus type drm_connector registered Sep 12 17:13:56.891327 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 12 17:13:56.891337 kernel: loop: module loaded Sep 12 17:13:56.891347 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 12 17:13:56.891359 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 12 17:13:56.891369 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 12 17:13:56.891380 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Sep 12 17:13:56.891393 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 12 17:13:56.891404 systemd[1]: verity-setup.service: Deactivated successfully. Sep 12 17:13:56.891414 systemd[1]: Stopped verity-setup.service. Sep 12 17:13:56.891424 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 12 17:13:56.891435 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 12 17:13:56.891445 systemd[1]: Mounted media.mount - External Media Directory. Sep 12 17:13:56.891455 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 12 17:13:56.891466 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 12 17:13:56.891478 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. 
Sep 12 17:13:56.891489 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 12 17:13:56.891499 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 12 17:13:56.891510 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 12 17:13:56.891521 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 17:13:56.891531 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 17:13:56.891587 systemd-journald[1123]: Collecting audit messages is disabled. Sep 12 17:13:56.891621 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 12 17:13:56.891632 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 12 17:13:56.891642 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 17:13:56.891654 systemd-journald[1123]: Journal started Sep 12 17:13:56.891678 systemd-journald[1123]: Runtime Journal (/run/log/journal/b55e32d20dc343b4adb9d485e5a93a5e) is 8M, max 76.6M, 68.6M free. Sep 12 17:13:56.601871 systemd[1]: Queued start job for default target multi-user.target. Sep 12 17:13:56.613867 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Sep 12 17:13:56.614447 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 12 17:13:56.897286 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 17:13:56.897370 systemd[1]: Started systemd-journald.service - Journal Service. Sep 12 17:13:56.898078 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 12 17:13:56.899064 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 12 17:13:56.905086 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 17:13:56.905433 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 17:13:56.907009 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 12 17:13:56.908329 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 12 17:13:56.910606 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 12 17:13:56.912115 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Sep 12 17:13:56.918595 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 12 17:13:56.929188 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 12 17:13:56.938383 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 12 17:13:56.944486 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 12 17:13:56.945639 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 12 17:13:56.945692 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 12 17:13:56.947679 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Sep 12 17:13:56.951503 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 12 17:13:56.956436 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 12 17:13:56.957245 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 17:13:56.960419 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... 
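[Annotation] The journald startup line above reports the runtime journal size and limits (8M used, max 76.6M); later in this log the runtime journal is flushed to persistent storage. A minimal sketch, assuming journalctl is on PATH, of querying the figure that corresponds to those limits:

    # Sketch: report how much disk space the active and archived journals use.
    import subprocess

    def journal_disk_usage():
        # `journalctl --disk-usage` sums the runtime and persistent journals.
        return subprocess.run(["journalctl", "--disk-usage"],
                              capture_output=True, text=True,
                              check=True).stdout.strip()

    if __name__ == "__main__":
        print(journal_disk_usage())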
Sep 12 17:13:56.966094 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 12 17:13:56.968360 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 12 17:13:56.973659 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 12 17:13:56.974410 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 12 17:13:56.978110 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 12 17:13:56.983346 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 12 17:13:56.988480 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 12 17:13:56.992026 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 12 17:13:56.994634 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 12 17:13:56.997242 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 12 17:13:57.025584 systemd-journald[1123]: Time spent on flushing to /var/log/journal/b55e32d20dc343b4adb9d485e5a93a5e is 104.654ms for 1137 entries. Sep 12 17:13:57.025584 systemd-journald[1123]: System Journal (/var/log/journal/b55e32d20dc343b4adb9d485e5a93a5e) is 8M, max 584.8M, 576.8M free. Sep 12 17:13:57.155751 systemd-journald[1123]: Received client request to flush runtime journal. Sep 12 17:13:57.155823 kernel: loop0: detected capacity change from 0 to 123192 Sep 12 17:13:57.155851 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 12 17:13:57.155869 kernel: loop1: detected capacity change from 0 to 113512 Sep 12 17:13:57.043345 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 12 17:13:57.047969 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 12 17:13:57.069079 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Sep 12 17:13:57.077058 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 17:13:57.092872 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Sep 12 17:13:57.096518 systemd-tmpfiles[1176]: ACLs are not supported, ignoring. Sep 12 17:13:57.096529 systemd-tmpfiles[1176]: ACLs are not supported, ignoring. Sep 12 17:13:57.096861 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 12 17:13:57.112343 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 12 17:13:57.122591 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 12 17:13:57.155367 udevadm[1187]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Sep 12 17:13:57.165610 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Sep 12 17:13:57.173011 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 12 17:13:57.200384 systemd[1]: Finished systemd-sysusers.service - Create System Users. 
Sep 12 17:13:57.206435 kernel: loop2: detected capacity change from 0 to 8 Sep 12 17:13:57.215971 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 12 17:13:57.230767 kernel: loop3: detected capacity change from 0 to 211168 Sep 12 17:13:57.251193 systemd-tmpfiles[1198]: ACLs are not supported, ignoring. Sep 12 17:13:57.251238 systemd-tmpfiles[1198]: ACLs are not supported, ignoring. Sep 12 17:13:57.268368 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 12 17:13:57.296288 kernel: loop4: detected capacity change from 0 to 123192 Sep 12 17:13:57.316390 kernel: loop5: detected capacity change from 0 to 113512 Sep 12 17:13:57.341672 kernel: loop6: detected capacity change from 0 to 8 Sep 12 17:13:57.341867 kernel: loop7: detected capacity change from 0 to 211168 Sep 12 17:13:57.380576 (sd-merge)[1203]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'. Sep 12 17:13:57.381748 (sd-merge)[1203]: Merged extensions into '/usr'. Sep 12 17:13:57.389400 systemd[1]: Reload requested from client PID 1175 ('systemd-sysext') (unit systemd-sysext.service)... Sep 12 17:13:57.389557 systemd[1]: Reloading... Sep 12 17:13:57.517382 zram_generator::config[1231]: No configuration found. Sep 12 17:13:57.715383 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 17:13:57.737257 ldconfig[1170]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 12 17:13:57.778186 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 12 17:13:57.778674 systemd[1]: Reloading finished in 388 ms. Sep 12 17:13:57.797918 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 12 17:13:57.799572 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 12 17:13:57.813573 systemd[1]: Starting ensure-sysext.service... Sep 12 17:13:57.819690 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 12 17:13:57.843291 systemd[1]: Reload requested from client PID 1268 ('systemctl') (unit ensure-sysext.service)... Sep 12 17:13:57.843320 systemd[1]: Reloading... Sep 12 17:13:57.870190 systemd-tmpfiles[1269]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 12 17:13:57.870481 systemd-tmpfiles[1269]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 12 17:13:57.871722 systemd-tmpfiles[1269]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 12 17:13:57.872261 systemd-tmpfiles[1269]: ACLs are not supported, ignoring. Sep 12 17:13:57.872317 systemd-tmpfiles[1269]: ACLs are not supported, ignoring. Sep 12 17:13:57.876533 systemd-tmpfiles[1269]: Detected autofs mount point /boot during canonicalization of boot. Sep 12 17:13:57.876549 systemd-tmpfiles[1269]: Skipping /boot Sep 12 17:13:57.894175 systemd-tmpfiles[1269]: Detected autofs mount point /boot during canonicalization of boot. Sep 12 17:13:57.894317 systemd-tmpfiles[1269]: Skipping /boot Sep 12 17:13:57.957252 zram_generator::config[1298]: No configuration found. 
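[Annotation] The sd-merge entries above show systemd-sysext overlaying the extension images ('containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner') onto /usr, after which systemd reloads. A minimal sketch, assuming systemd-sysext is available on the booted host, of inspecting that state:

    # Sketch: list available system extension images and show what is
    # currently merged into /usr, matching the "Merged extensions" log line.
    import subprocess

    def sysext_overview():
        listed = subprocess.run(["systemd-sysext", "list"],
                                capture_output=True, text=True,
                                check=True).stdout
        status = subprocess.run(["systemd-sysext", "status"],
                                capture_output=True, text=True,
                                check=True).stdout
        return listed + "\n" + status

    if __name__ == "__main__":
        print(sysext_overview())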
Sep 12 17:13:58.064959 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 17:13:58.128056 systemd[1]: Reloading finished in 284 ms. Sep 12 17:13:58.143516 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 12 17:13:58.156510 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 17:13:58.171758 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 12 17:13:58.177641 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 12 17:13:58.187618 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 12 17:13:58.193968 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 12 17:13:58.201984 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 17:13:58.213746 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 12 17:13:58.220617 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 17:13:58.230679 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 17:13:58.237590 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 17:13:58.241353 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 17:13:58.242986 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 17:13:58.243149 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 12 17:13:58.249559 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 12 17:13:58.253669 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 17:13:58.255541 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 17:13:58.265294 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 17:13:58.266352 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 17:13:58.274137 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 17:13:58.286058 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 17:13:58.295392 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 17:13:58.297964 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 17:13:58.298130 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 12 17:13:58.299444 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 12 17:13:58.300873 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. 
Sep 12 17:13:58.309113 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 17:13:58.318722 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 12 17:13:58.319437 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 17:13:58.319562 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 12 17:13:58.322739 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 12 17:13:58.325030 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 17:13:58.325705 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 17:13:58.333993 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 17:13:58.334619 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 17:13:58.336907 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 17:13:58.337845 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 17:13:58.339069 augenrules[1375]: No rules Sep 12 17:13:58.349678 systemd[1]: Finished ensure-sysext.service. Sep 12 17:13:58.351850 systemd[1]: audit-rules.service: Deactivated successfully. Sep 12 17:13:58.352583 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 12 17:13:58.352867 systemd-udevd[1347]: Using default interface naming scheme 'v255'. Sep 12 17:13:58.353759 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 12 17:13:58.354559 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 12 17:13:58.364572 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 12 17:13:58.364663 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 12 17:13:58.374537 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 12 17:13:58.377759 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 12 17:13:58.395900 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 12 17:13:58.407277 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 12 17:13:58.409169 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 17:13:58.428526 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 12 17:13:58.429155 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 12 17:13:58.559953 systemd-networkd[1395]: lo: Link UP Sep 12 17:13:58.559964 systemd-networkd[1395]: lo: Gained carrier Sep 12 17:13:58.578769 systemd-resolved[1346]: Positive Trust Anchors: Sep 12 17:13:58.578790 systemd-resolved[1346]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 12 17:13:58.578865 systemd-resolved[1346]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 12 17:13:58.585593 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 12 17:13:58.586990 systemd[1]: Reached target time-set.target - System Time Set. Sep 12 17:13:58.593356 systemd-resolved[1346]: Using system hostname 'ci-4230-2-3-6-9297726d8a'. Sep 12 17:13:58.597870 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 12 17:13:58.599419 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 12 17:13:58.610322 systemd-networkd[1395]: Enumeration completed Sep 12 17:13:58.610454 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 12 17:13:58.611256 systemd[1]: Reached target network.target - Network. Sep 12 17:13:58.620659 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Sep 12 17:13:58.626048 systemd-networkd[1395]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 17:13:58.626064 systemd-networkd[1395]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 12 17:13:58.626746 systemd-networkd[1395]: eth0: Link UP Sep 12 17:13:58.626754 systemd-networkd[1395]: eth0: Gained carrier Sep 12 17:13:58.626772 systemd-networkd[1395]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 17:13:58.632386 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 12 17:13:58.672671 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Sep 12 17:13:58.676984 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Sep 12 17:13:58.690637 systemd-networkd[1395]: eth0: DHCPv4 address 168.119.179.98/32, gateway 172.31.1.1 acquired from 172.31.1.1 Sep 12 17:13:58.691586 systemd-timesyncd[1385]: Network configuration changed, trying to establish connection. Sep 12 17:13:58.732914 systemd-networkd[1395]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 17:13:58.732929 systemd-networkd[1395]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 12 17:13:58.734449 systemd-networkd[1395]: eth1: Link UP Sep 12 17:13:58.734458 systemd-networkd[1395]: eth1: Gained carrier Sep 12 17:13:58.734478 systemd-timesyncd[1385]: Network configuration changed, trying to establish connection. Sep 12 17:13:58.734482 systemd-networkd[1395]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 17:13:58.741130 systemd-timesyncd[1385]: Network configuration changed, trying to establish connection. 
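Both eth0 and eth1 above are picked up by the catch-all /usr/lib/systemd/network/zz-default.network, and systemd-networkd warns because that unit matches on an interface name that may still change. A rough sketch of what such a catch-all DHCP network unit looks like; the directives below are illustrative assumptions, not the exact contents of the file shipped on the host:

[Match]
# matches every interface name, hence the "potentially unpredictable
# interface name" warning logged for eth0 and eth1
Name=*

[Network]
DHCP=yes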
Sep 12 17:13:58.746344 kernel: mousedev: PS/2 mouse device common for all mice Sep 12 17:13:58.771465 systemd-networkd[1395]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1 Sep 12 17:13:58.772699 systemd-timesyncd[1385]: Network configuration changed, trying to establish connection. Sep 12 17:13:58.775230 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1405) Sep 12 17:13:58.872251 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0 Sep 12 17:13:58.872414 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Sep 12 17:13:58.872430 kernel: [drm] features: -context_init Sep 12 17:13:58.878326 kernel: [drm] number of scanouts: 1 Sep 12 17:13:58.878420 kernel: [drm] number of cap sets: 0 Sep 12 17:13:58.879334 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Sep 12 17:13:58.883242 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0 Sep 12 17:13:58.903475 kernel: Console: switching to colour frame buffer device 160x50 Sep 12 17:13:58.903424 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 12 17:13:58.927249 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Sep 12 17:13:58.938261 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 12 17:13:58.945171 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. Sep 12 17:13:58.958057 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 17:13:58.963516 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 17:13:58.977425 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 17:13:58.982378 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 17:13:58.983415 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 17:13:58.983570 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 12 17:13:58.985455 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 17:13:58.986972 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 12 17:13:58.987488 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 17:13:58.987681 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 17:13:58.990030 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 17:13:58.990263 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 17:13:58.991562 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 17:13:58.991741 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 17:13:59.000951 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Sep 12 17:13:59.001076 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 12 17:13:59.076902 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Sep 12 17:13:59.078275 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:13:59.084472 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Sep 12 17:13:59.102281 lvm[1460]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 12 17:13:59.132913 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Sep 12 17:13:59.135755 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 12 17:13:59.136953 systemd[1]: Reached target sysinit.target - System Initialization. Sep 12 17:13:59.138240 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 12 17:13:59.139506 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 12 17:13:59.141191 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 12 17:13:59.142040 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 12 17:13:59.142852 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 12 17:13:59.143569 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 12 17:13:59.143609 systemd[1]: Reached target paths.target - Path Units. Sep 12 17:13:59.144121 systemd[1]: Reached target timers.target - Timer Units. Sep 12 17:13:59.148380 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 12 17:13:59.152290 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 12 17:13:59.156966 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Sep 12 17:13:59.158564 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Sep 12 17:13:59.159287 systemd[1]: Reached target ssh-access.target - SSH Access Available. Sep 12 17:13:59.169601 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 12 17:13:59.172140 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Sep 12 17:13:59.182519 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Sep 12 17:13:59.185139 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 12 17:13:59.186742 systemd[1]: Reached target sockets.target - Socket Units. Sep 12 17:13:59.187850 systemd[1]: Reached target basic.target - Basic System. Sep 12 17:13:59.189432 lvm[1464]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 12 17:13:59.188924 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 12 17:13:59.188963 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 12 17:13:59.194513 systemd[1]: Starting containerd.service - containerd container runtime... Sep 12 17:13:59.203619 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Sep 12 17:13:59.210550 systemd[1]: Starting dbus.service - D-Bus System Message Bus... 
Sep 12 17:13:59.219428 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 12 17:13:59.229530 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 12 17:13:59.230644 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 12 17:13:59.234461 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 12 17:13:59.238680 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 12 17:13:59.243192 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. Sep 12 17:13:59.250248 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 12 17:13:59.262109 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 12 17:13:59.268544 coreos-metadata[1466]: Sep 12 17:13:59.267 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Sep 12 17:13:59.268641 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 12 17:13:59.271077 coreos-metadata[1466]: Sep 12 17:13:59.269 INFO Fetch successful Sep 12 17:13:59.271077 coreos-metadata[1466]: Sep 12 17:13:59.270 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Sep 12 17:13:59.273160 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 12 17:13:59.273975 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 12 17:13:59.277650 coreos-metadata[1466]: Sep 12 17:13:59.275 INFO Fetch successful Sep 12 17:13:59.279561 jq[1470]: false Sep 12 17:13:59.280347 systemd[1]: Starting update-engine.service - Update Engine... Sep 12 17:13:59.284302 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 12 17:13:59.289292 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Sep 12 17:13:59.306141 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 12 17:13:59.307286 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 12 17:13:59.323049 jq[1480]: true Sep 12 17:13:59.331265 extend-filesystems[1471]: Found loop4 Sep 12 17:13:59.331265 extend-filesystems[1471]: Found loop5 Sep 12 17:13:59.331265 extend-filesystems[1471]: Found loop6 Sep 12 17:13:59.331265 extend-filesystems[1471]: Found loop7 Sep 12 17:13:59.331265 extend-filesystems[1471]: Found sda Sep 12 17:13:59.331265 extend-filesystems[1471]: Found sda1 Sep 12 17:13:59.331265 extend-filesystems[1471]: Found sda2 Sep 12 17:13:59.331265 extend-filesystems[1471]: Found sda3 Sep 12 17:13:59.331265 extend-filesystems[1471]: Found usr Sep 12 17:13:59.331265 extend-filesystems[1471]: Found sda4 Sep 12 17:13:59.331265 extend-filesystems[1471]: Found sda6 Sep 12 17:13:59.331265 extend-filesystems[1471]: Found sda7 Sep 12 17:13:59.331265 extend-filesystems[1471]: Found sda9 Sep 12 17:13:59.370992 dbus-daemon[1467]: [system] SELinux support is enabled Sep 12 17:13:59.399991 extend-filesystems[1471]: Checking size of /dev/sda9 Sep 12 17:13:59.399991 extend-filesystems[1471]: Resized partition /dev/sda9 Sep 12 17:13:59.364599 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. 
Sep 12 17:13:59.409581 extend-filesystems[1508]: resize2fs 1.47.1 (20-May-2024) Sep 12 17:13:59.364899 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 12 17:13:59.376022 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 12 17:13:59.420822 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Sep 12 17:13:59.398746 (ntainerd)[1496]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 12 17:13:59.401291 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 12 17:13:59.427032 tar[1492]: linux-arm64/LICENSE Sep 12 17:13:59.427032 tar[1492]: linux-arm64/helm Sep 12 17:13:59.401334 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 12 17:13:59.429276 jq[1493]: true Sep 12 17:13:59.402103 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 12 17:13:59.402119 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 12 17:13:59.436263 systemd[1]: motdgen.service: Deactivated successfully. Sep 12 17:13:59.437138 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 12 17:13:59.488119 update_engine[1478]: I20250912 17:13:59.487835 1478 main.cc:92] Flatcar Update Engine starting Sep 12 17:13:59.499978 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Sep 12 17:13:59.501034 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 12 17:13:59.506017 systemd[1]: Started update-engine.service - Update Engine. Sep 12 17:13:59.511565 update_engine[1478]: I20250912 17:13:59.510604 1478 update_check_scheduler.cc:74] Next update check in 7m27s Sep 12 17:13:59.531695 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1411) Sep 12 17:13:59.530710 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 12 17:13:59.614743 systemd-logind[1477]: New seat seat0. Sep 12 17:13:59.636344 systemd-logind[1477]: Watching system buttons on /dev/input/event0 (Power Button) Sep 12 17:13:59.636368 systemd-logind[1477]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard) Sep 12 17:13:59.636702 systemd[1]: Started systemd-logind.service - User Login Management. Sep 12 17:13:59.654395 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Sep 12 17:13:59.671821 bash[1536]: Updated "/home/core/.ssh/authorized_keys" Sep 12 17:13:59.681071 extend-filesystems[1508]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Sep 12 17:13:59.681071 extend-filesystems[1508]: old_desc_blocks = 1, new_desc_blocks = 5 Sep 12 17:13:59.681071 extend-filesystems[1508]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. Sep 12 17:13:59.675196 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 12 17:13:59.695985 extend-filesystems[1471]: Resized filesystem in /dev/sda9 Sep 12 17:13:59.695985 extend-filesystems[1471]: Found sr0 Sep 12 17:13:59.702084 systemd[1]: Starting sshkeys.service... 
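For scale, the resize2fs run recorded above grows the root filesystem on /dev/sda9 from 1617920 to 9393147 blocks of 4 KiB, that is from roughly 6.2 GiB to roughly 35.8 GiB.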
Sep 12 17:13:59.715517 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 12 17:13:59.715768 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 12 17:13:59.753306 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Sep 12 17:13:59.762693 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Sep 12 17:13:59.778762 containerd[1496]: time="2025-09-12T17:13:59.768018560Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Sep 12 17:13:59.811362 coreos-metadata[1550]: Sep 12 17:13:59.811 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Sep 12 17:13:59.813583 coreos-metadata[1550]: Sep 12 17:13:59.813 INFO Fetch successful Sep 12 17:13:59.818711 unknown[1550]: wrote ssh authorized keys file for user: core Sep 12 17:13:59.861520 update-ssh-keys[1554]: Updated "/home/core/.ssh/authorized_keys" Sep 12 17:13:59.863713 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Sep 12 17:13:59.870155 systemd[1]: Finished sshkeys.service. Sep 12 17:13:59.874656 containerd[1496]: time="2025-09-12T17:13:59.874094000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 12 17:13:59.881128 containerd[1496]: time="2025-09-12T17:13:59.881065360Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.106-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 12 17:13:59.881128 containerd[1496]: time="2025-09-12T17:13:59.881118640Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 12 17:13:59.881128 containerd[1496]: time="2025-09-12T17:13:59.881141920Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 12 17:13:59.881889 containerd[1496]: time="2025-09-12T17:13:59.881364360Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Sep 12 17:13:59.881889 containerd[1496]: time="2025-09-12T17:13:59.881394720Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Sep 12 17:13:59.881889 containerd[1496]: time="2025-09-12T17:13:59.881475560Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 17:13:59.881889 containerd[1496]: time="2025-09-12T17:13:59.881488440Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 12 17:13:59.881889 containerd[1496]: time="2025-09-12T17:13:59.881748520Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 17:13:59.881889 containerd[1496]: time="2025-09-12T17:13:59.881763520Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Sep 12 17:13:59.881889 containerd[1496]: time="2025-09-12T17:13:59.881778240Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 17:13:59.881889 containerd[1496]: time="2025-09-12T17:13:59.881806640Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 12 17:13:59.882045 containerd[1496]: time="2025-09-12T17:13:59.881902360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 12 17:13:59.882177 containerd[1496]: time="2025-09-12T17:13:59.882142040Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 12 17:13:59.884575 containerd[1496]: time="2025-09-12T17:13:59.884527280Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 17:13:59.884575 containerd[1496]: time="2025-09-12T17:13:59.884563560Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 12 17:13:59.884945 containerd[1496]: time="2025-09-12T17:13:59.884716440Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 12 17:13:59.884945 containerd[1496]: time="2025-09-12T17:13:59.884778160Z" level=info msg="metadata content store policy set" policy=shared Sep 12 17:13:59.892338 locksmithd[1526]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 12 17:13:59.898413 containerd[1496]: time="2025-09-12T17:13:59.898074320Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 12 17:13:59.898413 containerd[1496]: time="2025-09-12T17:13:59.898226480Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 12 17:13:59.898413 containerd[1496]: time="2025-09-12T17:13:59.898248800Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Sep 12 17:13:59.898413 containerd[1496]: time="2025-09-12T17:13:59.898273200Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Sep 12 17:13:59.898413 containerd[1496]: time="2025-09-12T17:13:59.898290480Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 12 17:13:59.898675 containerd[1496]: time="2025-09-12T17:13:59.898550800Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 12 17:13:59.899400 containerd[1496]: time="2025-09-12T17:13:59.898917440Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 12 17:13:59.899400 containerd[1496]: time="2025-09-12T17:13:59.899067040Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Sep 12 17:13:59.899400 containerd[1496]: time="2025-09-12T17:13:59.899087520Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." 
type=io.containerd.sandbox.store.v1 Sep 12 17:13:59.899400 containerd[1496]: time="2025-09-12T17:13:59.899104600Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Sep 12 17:13:59.899400 containerd[1496]: time="2025-09-12T17:13:59.899123280Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 12 17:13:59.899400 containerd[1496]: time="2025-09-12T17:13:59.899139120Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 12 17:13:59.899400 containerd[1496]: time="2025-09-12T17:13:59.899154400Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 12 17:13:59.899400 containerd[1496]: time="2025-09-12T17:13:59.899184560Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 12 17:13:59.899400 containerd[1496]: time="2025-09-12T17:13:59.899288600Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 12 17:13:59.899400 containerd[1496]: time="2025-09-12T17:13:59.899309480Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 12 17:13:59.899400 containerd[1496]: time="2025-09-12T17:13:59.899333720Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 12 17:13:59.899400 containerd[1496]: time="2025-09-12T17:13:59.899353400Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 12 17:13:59.899400 containerd[1496]: time="2025-09-12T17:13:59.899379640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 12 17:13:59.900669 containerd[1496]: time="2025-09-12T17:13:59.899396800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 12 17:13:59.900669 containerd[1496]: time="2025-09-12T17:13:59.900237800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 12 17:13:59.900669 containerd[1496]: time="2025-09-12T17:13:59.900259280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 12 17:13:59.900669 containerd[1496]: time="2025-09-12T17:13:59.900272600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 12 17:13:59.900669 containerd[1496]: time="2025-09-12T17:13:59.900296960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 12 17:13:59.900669 containerd[1496]: time="2025-09-12T17:13:59.900310600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 12 17:13:59.900669 containerd[1496]: time="2025-09-12T17:13:59.900326040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 12 17:13:59.900669 containerd[1496]: time="2025-09-12T17:13:59.900340880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Sep 12 17:13:59.900669 containerd[1496]: time="2025-09-12T17:13:59.900367680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." 
type=io.containerd.grpc.v1 Sep 12 17:13:59.900669 containerd[1496]: time="2025-09-12T17:13:59.900382320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 12 17:13:59.900669 containerd[1496]: time="2025-09-12T17:13:59.900395240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Sep 12 17:13:59.900669 containerd[1496]: time="2025-09-12T17:13:59.900411440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 12 17:13:59.900669 containerd[1496]: time="2025-09-12T17:13:59.900430440Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 12 17:13:59.900669 containerd[1496]: time="2025-09-12T17:13:59.900490960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 12 17:13:59.900669 containerd[1496]: time="2025-09-12T17:13:59.900518600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 12 17:13:59.901135 containerd[1496]: time="2025-09-12T17:13:59.900532680Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 12 17:13:59.901135 containerd[1496]: time="2025-09-12T17:13:59.900759560Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 12 17:13:59.901135 containerd[1496]: time="2025-09-12T17:13:59.900803440Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 12 17:13:59.901135 containerd[1496]: time="2025-09-12T17:13:59.900817440Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 12 17:13:59.901135 containerd[1496]: time="2025-09-12T17:13:59.900831000Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 12 17:13:59.901135 containerd[1496]: time="2025-09-12T17:13:59.900840200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 12 17:13:59.901135 containerd[1496]: time="2025-09-12T17:13:59.900853440Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 12 17:13:59.901286 containerd[1496]: time="2025-09-12T17:13:59.901232440Z" level=info msg="NRI interface is disabled by configuration." Sep 12 17:13:59.901286 containerd[1496]: time="2025-09-12T17:13:59.901251800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Sep 12 17:13:59.904076 containerd[1496]: time="2025-09-12T17:13:59.903491080Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 12 17:13:59.904076 containerd[1496]: time="2025-09-12T17:13:59.903576240Z" level=info msg="Connect containerd service" Sep 12 17:13:59.904076 containerd[1496]: time="2025-09-12T17:13:59.903649440Z" level=info msg="using legacy CRI server" Sep 12 17:13:59.904076 containerd[1496]: time="2025-09-12T17:13:59.903658600Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 12 17:13:59.904076 containerd[1496]: time="2025-09-12T17:13:59.903972800Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 12 17:13:59.907853 containerd[1496]: time="2025-09-12T17:13:59.906731120Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 12 17:13:59.907853 
containerd[1496]: time="2025-09-12T17:13:59.907218880Z" level=info msg="Start subscribing containerd event" Sep 12 17:13:59.907853 containerd[1496]: time="2025-09-12T17:13:59.907297680Z" level=info msg="Start recovering state" Sep 12 17:13:59.907853 containerd[1496]: time="2025-09-12T17:13:59.907395360Z" level=info msg="Start event monitor" Sep 12 17:13:59.907853 containerd[1496]: time="2025-09-12T17:13:59.907410840Z" level=info msg="Start snapshots syncer" Sep 12 17:13:59.907853 containerd[1496]: time="2025-09-12T17:13:59.907433720Z" level=info msg="Start cni network conf syncer for default" Sep 12 17:13:59.907853 containerd[1496]: time="2025-09-12T17:13:59.907443160Z" level=info msg="Start streaming server" Sep 12 17:13:59.908764 containerd[1496]: time="2025-09-12T17:13:59.908267840Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 12 17:13:59.908764 containerd[1496]: time="2025-09-12T17:13:59.908370160Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 12 17:13:59.908602 systemd[1]: Started containerd.service - containerd container runtime. Sep 12 17:13:59.909403 containerd[1496]: time="2025-09-12T17:13:59.909360840Z" level=info msg="containerd successfully booted in 0.143991s" Sep 12 17:14:00.177920 tar[1492]: linux-arm64/README.md Sep 12 17:14:00.193463 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 12 17:14:00.472479 systemd-networkd[1395]: eth0: Gained IPv6LL Sep 12 17:14:00.473345 systemd-timesyncd[1385]: Network configuration changed, trying to establish connection. Sep 12 17:14:00.481058 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 12 17:14:00.485146 systemd[1]: Reached target network-online.target - Network is Online. Sep 12 17:14:00.495263 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:14:00.503614 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 12 17:14:00.573586 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 12 17:14:00.728461 systemd-networkd[1395]: eth1: Gained IPv6LL Sep 12 17:14:00.730434 systemd-timesyncd[1385]: Network configuration changed, trying to establish connection. Sep 12 17:14:01.377650 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:14:01.380789 (kubelet)[1582]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 17:14:01.693727 sshd_keygen[1513]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 12 17:14:01.728861 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 12 17:14:01.741612 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 12 17:14:01.751530 systemd[1]: issuegen.service: Deactivated successfully. Sep 12 17:14:01.751800 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 12 17:14:01.762870 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 12 17:14:01.779988 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 12 17:14:01.791789 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 12 17:14:01.794603 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Sep 12 17:14:01.797686 systemd[1]: Reached target getty.target - Login Prompts. Sep 12 17:14:01.800398 systemd[1]: Reached target multi-user.target - Multi-User System. 
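The CRI configuration containerd prints above (overlayfs snapshotter, runc with SystemdCgroup:true, pause:3.8 sandbox image, CNI under /opt/cni/bin and /etc/cni/net.d) corresponds to a config.toml along the following lines. This is a sketch of the equivalent TOML layout for containerd 1.7, assembled from the logged values, not the file actually present on the host:

version = 2

[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.k8s.io/pause:3.8"

  [plugins."io.containerd.grpc.v1.cri".containerd]
    snapshotter = "overlayfs"
    default_runtime_name = "runc"

    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
      runtime_type = "io.containerd.runc.v2"
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
        SystemdCgroup = true

  # the "failed to load cni during init" error logged above persists until a
  # network config appears in this directory
  [plugins."io.containerd.grpc.v1.cri".cni]
    bin_dir = "/opt/cni/bin"
    conf_dir = "/etc/cni/net.d"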
Sep 12 17:14:01.802333 systemd[1]: Startup finished in 789ms (kernel) + 6.326s (initrd) + 5.829s (userspace) = 12.945s. Sep 12 17:14:01.963651 kubelet[1582]: E0912 17:14:01.963457 1582 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 17:14:01.966119 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 17:14:01.966315 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 17:14:01.967338 systemd[1]: kubelet.service: Consumed 921ms CPU time, 258.1M memory peak. Sep 12 17:14:12.163355 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 12 17:14:12.172600 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:14:12.317350 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:14:12.329995 (kubelet)[1618]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 17:14:12.388288 kubelet[1618]: E0912 17:14:12.388184 1618 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 17:14:12.394400 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 17:14:12.394918 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 17:14:12.395824 systemd[1]: kubelet.service: Consumed 191ms CPU time, 104.9M memory peak. Sep 12 17:14:21.527495 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 12 17:14:21.534854 systemd[1]: Started sshd@0-168.119.179.98:22-139.178.68.195:49450.service - OpenSSH per-connection server daemon (139.178.68.195:49450). Sep 12 17:14:22.412841 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 12 17:14:22.419678 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:14:22.547251 sshd[1626]: Accepted publickey for core from 139.178.68.195 port 49450 ssh2: RSA SHA256:4I4ir6DTNicv1nR1BCNJAvLWYZ+QnMBQBVoKHg57aHc Sep 12 17:14:22.551635 sshd-session[1626]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:14:22.580537 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:14:22.592893 (kubelet)[1635]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 17:14:22.598490 systemd-logind[1477]: New session 1 of user core. Sep 12 17:14:22.602847 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 12 17:14:22.612852 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 12 17:14:22.642258 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 12 17:14:22.655007 systemd[1]: Starting user@500.service - User Manager for UID 500... 
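The repeating kubelet failure above is the unit starting before /var/lib/kubelet/config.yaml exists; that file is normally written later when the node is bootstrapped (for example by kubeadm), so the restarts are expected at this stage. The accompanying notice about KUBELET_EXTRA_ARGS and KUBELET_KUBEADM_ARGS being referenced but unset points at an ExecStart= line that expands those variables; a hypothetical drop-in of that shape is sketched below, with the paths and binary location being illustrative assumptions:

[Service]
# hypothetical environment files; missing files are tolerated because of the "-"
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
EnvironmentFile=-/etc/kubernetes/kubelet.env
ExecStart=
# unset variables expand to empty strings, which is what the log reports
ExecStart=/usr/bin/kubelet $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS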
Sep 12 17:14:22.664126 kubelet[1635]: E0912 17:14:22.663969 1635 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 17:14:22.666688 (systemd)[1644]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 12 17:14:22.670052 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 17:14:22.671351 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 17:14:22.673428 systemd[1]: kubelet.service: Consumed 207ms CPU time, 109.3M memory peak. Sep 12 17:14:22.676025 systemd-logind[1477]: New session c1 of user core. Sep 12 17:14:22.842002 systemd[1644]: Queued start job for default target default.target. Sep 12 17:14:22.854603 systemd[1644]: Created slice app.slice - User Application Slice. Sep 12 17:14:22.855003 systemd[1644]: Reached target paths.target - Paths. Sep 12 17:14:22.855458 systemd[1644]: Reached target timers.target - Timers. Sep 12 17:14:22.858961 systemd[1644]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 12 17:14:22.877590 systemd[1644]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 12 17:14:22.878051 systemd[1644]: Reached target sockets.target - Sockets. Sep 12 17:14:22.878135 systemd[1644]: Reached target basic.target - Basic System. Sep 12 17:14:22.878178 systemd[1644]: Reached target default.target - Main User Target. Sep 12 17:14:22.878235 systemd[1644]: Startup finished in 190ms. Sep 12 17:14:22.878963 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 12 17:14:22.895679 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 12 17:14:23.607903 systemd[1]: Started sshd@1-168.119.179.98:22-139.178.68.195:49460.service - OpenSSH per-connection server daemon (139.178.68.195:49460). Sep 12 17:14:24.596837 sshd[1657]: Accepted publickey for core from 139.178.68.195 port 49460 ssh2: RSA SHA256:4I4ir6DTNicv1nR1BCNJAvLWYZ+QnMBQBVoKHg57aHc Sep 12 17:14:24.599958 sshd-session[1657]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:14:24.609264 systemd-logind[1477]: New session 2 of user core. Sep 12 17:14:24.616919 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 12 17:14:25.279513 sshd[1659]: Connection closed by 139.178.68.195 port 49460 Sep 12 17:14:25.279348 sshd-session[1657]: pam_unix(sshd:session): session closed for user core Sep 12 17:14:25.283986 systemd[1]: sshd@1-168.119.179.98:22-139.178.68.195:49460.service: Deactivated successfully. Sep 12 17:14:25.287053 systemd[1]: session-2.scope: Deactivated successfully. Sep 12 17:14:25.290604 systemd-logind[1477]: Session 2 logged out. Waiting for processes to exit. Sep 12 17:14:25.292164 systemd-logind[1477]: Removed session 2. Sep 12 17:14:25.466902 systemd[1]: Started sshd@2-168.119.179.98:22-139.178.68.195:49476.service - OpenSSH per-connection server daemon (139.178.68.195:49476). Sep 12 17:14:26.458595 sshd[1665]: Accepted publickey for core from 139.178.68.195 port 49476 ssh2: RSA SHA256:4I4ir6DTNicv1nR1BCNJAvLWYZ+QnMBQBVoKHg57aHc Sep 12 17:14:26.460880 sshd-session[1665]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:14:26.467607 systemd-logind[1477]: New session 3 of user core. 
Sep 12 17:14:26.478687 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 12 17:14:27.139161 sshd[1667]: Connection closed by 139.178.68.195 port 49476 Sep 12 17:14:27.140503 sshd-session[1665]: pam_unix(sshd:session): session closed for user core Sep 12 17:14:27.148346 systemd[1]: sshd@2-168.119.179.98:22-139.178.68.195:49476.service: Deactivated successfully. Sep 12 17:14:27.152050 systemd[1]: session-3.scope: Deactivated successfully. Sep 12 17:14:27.153531 systemd-logind[1477]: Session 3 logged out. Waiting for processes to exit. Sep 12 17:14:27.155651 systemd-logind[1477]: Removed session 3. Sep 12 17:14:27.320911 systemd[1]: Started sshd@3-168.119.179.98:22-139.178.68.195:49478.service - OpenSSH per-connection server daemon (139.178.68.195:49478). Sep 12 17:14:28.315013 sshd[1673]: Accepted publickey for core from 139.178.68.195 port 49478 ssh2: RSA SHA256:4I4ir6DTNicv1nR1BCNJAvLWYZ+QnMBQBVoKHg57aHc Sep 12 17:14:28.317819 sshd-session[1673]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:14:28.328638 systemd-logind[1477]: New session 4 of user core. Sep 12 17:14:28.336142 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 12 17:14:29.001769 sshd[1675]: Connection closed by 139.178.68.195 port 49478 Sep 12 17:14:29.003056 sshd-session[1673]: pam_unix(sshd:session): session closed for user core Sep 12 17:14:29.008498 systemd-logind[1477]: Session 4 logged out. Waiting for processes to exit. Sep 12 17:14:29.009271 systemd[1]: sshd@3-168.119.179.98:22-139.178.68.195:49478.service: Deactivated successfully. Sep 12 17:14:29.012512 systemd[1]: session-4.scope: Deactivated successfully. Sep 12 17:14:29.015718 systemd-logind[1477]: Removed session 4. Sep 12 17:14:29.183849 systemd[1]: Started sshd@4-168.119.179.98:22-139.178.68.195:49492.service - OpenSSH per-connection server daemon (139.178.68.195:49492). Sep 12 17:14:30.175141 sshd[1681]: Accepted publickey for core from 139.178.68.195 port 49492 ssh2: RSA SHA256:4I4ir6DTNicv1nR1BCNJAvLWYZ+QnMBQBVoKHg57aHc Sep 12 17:14:30.177503 sshd-session[1681]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:14:30.187623 systemd-logind[1477]: New session 5 of user core. Sep 12 17:14:30.193711 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 12 17:14:30.713829 sudo[1684]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 12 17:14:30.714315 sudo[1684]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:14:30.738071 sudo[1684]: pam_unix(sudo:session): session closed for user root Sep 12 17:14:30.899320 sshd[1683]: Connection closed by 139.178.68.195 port 49492 Sep 12 17:14:30.900797 sshd-session[1681]: pam_unix(sshd:session): session closed for user core Sep 12 17:14:30.907763 systemd[1]: sshd@4-168.119.179.98:22-139.178.68.195:49492.service: Deactivated successfully. Sep 12 17:14:30.911312 systemd[1]: session-5.scope: Deactivated successfully. Sep 12 17:14:30.915157 systemd-logind[1477]: Session 5 logged out. Waiting for processes to exit. Sep 12 17:14:30.918269 systemd-timesyncd[1385]: Contacted time server 159.69.64.189:123 (2.flatcar.pool.ntp.org). Sep 12 17:14:30.918369 systemd-timesyncd[1385]: Initial clock synchronization to Fri 2025-09-12 17:14:31.229235 UTC. Sep 12 17:14:30.919035 systemd-logind[1477]: Removed session 5. 
Sep 12 17:14:31.081939 systemd[1]: Started sshd@5-168.119.179.98:22-139.178.68.195:57142.service - OpenSSH per-connection server daemon (139.178.68.195:57142). Sep 12 17:14:32.120118 sshd[1690]: Accepted publickey for core from 139.178.68.195 port 57142 ssh2: RSA SHA256:4I4ir6DTNicv1nR1BCNJAvLWYZ+QnMBQBVoKHg57aHc Sep 12 17:14:32.123770 sshd-session[1690]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:14:32.132453 systemd-logind[1477]: New session 6 of user core. Sep 12 17:14:32.133355 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 12 17:14:32.667710 sudo[1694]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 12 17:14:32.668190 sudo[1694]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:14:32.671775 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Sep 12 17:14:32.678460 sudo[1694]: pam_unix(sudo:session): session closed for user root Sep 12 17:14:32.680940 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:14:32.691781 sudo[1693]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 12 17:14:32.692190 sudo[1693]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:14:32.737588 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 12 17:14:32.830571 augenrules[1719]: No rules Sep 12 17:14:32.833891 systemd[1]: audit-rules.service: Deactivated successfully. Sep 12 17:14:32.835550 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 12 17:14:32.838881 sudo[1693]: pam_unix(sudo:session): session closed for user root Sep 12 17:14:32.900503 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:14:32.915352 (kubelet)[1729]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 17:14:32.984138 kubelet[1729]: E0912 17:14:32.983838 1729 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 17:14:32.989198 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 17:14:32.989447 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 17:14:32.989897 systemd[1]: kubelet.service: Consumed 222ms CPU time, 107.4M memory peak. Sep 12 17:14:33.008847 sshd[1692]: Connection closed by 139.178.68.195 port 57142 Sep 12 17:14:33.009591 sshd-session[1690]: pam_unix(sshd:session): session closed for user core Sep 12 17:14:33.016870 systemd[1]: sshd@5-168.119.179.98:22-139.178.68.195:57142.service: Deactivated successfully. Sep 12 17:14:33.020458 systemd[1]: session-6.scope: Deactivated successfully. Sep 12 17:14:33.021648 systemd-logind[1477]: Session 6 logged out. Waiting for processes to exit. Sep 12 17:14:33.025297 systemd-logind[1477]: Removed session 6. Sep 12 17:14:33.197358 systemd[1]: Started sshd@6-168.119.179.98:22-139.178.68.195:57146.service - OpenSSH per-connection server daemon (139.178.68.195:57146). 
Sep 12 17:14:34.210172 sshd[1740]: Accepted publickey for core from 139.178.68.195 port 57146 ssh2: RSA SHA256:4I4ir6DTNicv1nR1BCNJAvLWYZ+QnMBQBVoKHg57aHc Sep 12 17:14:34.212873 sshd-session[1740]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:14:34.220628 systemd-logind[1477]: New session 7 of user core. Sep 12 17:14:34.229807 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 12 17:14:34.746627 sudo[1743]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 12 17:14:34.747085 sudo[1743]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:14:35.173945 (dockerd)[1760]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 12 17:14:35.174004 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 12 17:14:35.462818 dockerd[1760]: time="2025-09-12T17:14:35.462546963Z" level=info msg="Starting up" Sep 12 17:14:35.562498 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport573909204-merged.mount: Deactivated successfully. Sep 12 17:14:35.598365 dockerd[1760]: time="2025-09-12T17:14:35.597791615Z" level=info msg="Loading containers: start." Sep 12 17:14:35.827523 kernel: Initializing XFRM netlink socket Sep 12 17:14:35.972918 systemd-networkd[1395]: docker0: Link UP Sep 12 17:14:36.017263 dockerd[1760]: time="2025-09-12T17:14:36.016449357Z" level=info msg="Loading containers: done." Sep 12 17:14:36.035389 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1001538396-merged.mount: Deactivated successfully. Sep 12 17:14:36.038844 dockerd[1760]: time="2025-09-12T17:14:36.038748764Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 12 17:14:36.040373 dockerd[1760]: time="2025-09-12T17:14:36.039333817Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Sep 12 17:14:36.040373 dockerd[1760]: time="2025-09-12T17:14:36.039725852Z" level=info msg="Daemon has completed initialization" Sep 12 17:14:36.094274 dockerd[1760]: time="2025-09-12T17:14:36.094111246Z" level=info msg="API listen on /run/docker.sock" Sep 12 17:14:36.094470 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 12 17:14:37.326439 containerd[1496]: time="2025-09-12T17:14:37.325936731Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\"" Sep 12 17:14:38.002533 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2414188195.mount: Deactivated successfully. 
Sep 12 17:14:39.012368 containerd[1496]: time="2025-09-12T17:14:39.011436947Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:14:39.014972 containerd[1496]: time="2025-09-12T17:14:39.014142661Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.5: active requests=0, bytes read=27390326" Sep 12 17:14:39.018266 containerd[1496]: time="2025-09-12T17:14:39.016698112Z" level=info msg="ImageCreate event name:\"sha256:6a7fd297b49102b08dc3d8d4fd7f1538bcf21d3131eae8bf62ba26ce3283237f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:14:39.024108 containerd[1496]: time="2025-09-12T17:14:39.024026118Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:14:39.027649 containerd[1496]: time="2025-09-12T17:14:39.027573785Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.5\" with image id \"sha256:6a7fd297b49102b08dc3d8d4fd7f1538bcf21d3131eae8bf62ba26ce3283237f\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\", size \"27386827\" in 1.70156048s" Sep 12 17:14:39.027649 containerd[1496]: time="2025-09-12T17:14:39.027653072Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:6a7fd297b49102b08dc3d8d4fd7f1538bcf21d3131eae8bf62ba26ce3283237f\"" Sep 12 17:14:39.030367 containerd[1496]: time="2025-09-12T17:14:39.030308928Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\"" Sep 12 17:14:40.272268 containerd[1496]: time="2025-09-12T17:14:40.271333064Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:14:40.277255 containerd[1496]: time="2025-09-12T17:14:40.275839720Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.5: active requests=0, bytes read=23547937" Sep 12 17:14:40.280834 containerd[1496]: time="2025-09-12T17:14:40.280754336Z" level=info msg="ImageCreate event name:\"sha256:2dd4c25a937008b7b8a6cdca70d816403b5078b51550926721b7a7762139cd23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:14:40.287252 containerd[1496]: time="2025-09-12T17:14:40.287089869Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:14:40.289783 containerd[1496]: time="2025-09-12T17:14:40.289680034Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.5\" with image id \"sha256:2dd4c25a937008b7b8a6cdca70d816403b5078b51550926721b7a7762139cd23\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\", size \"25135832\" in 1.259301172s" Sep 12 17:14:40.289783 containerd[1496]: time="2025-09-12T17:14:40.289771492Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:2dd4c25a937008b7b8a6cdca70d816403b5078b51550926721b7a7762139cd23\"" Sep 12 17:14:40.291527 
containerd[1496]: time="2025-09-12T17:14:40.290946162Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\"" Sep 12 17:14:41.411037 containerd[1496]: time="2025-09-12T17:14:41.409050201Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:14:41.411037 containerd[1496]: time="2025-09-12T17:14:41.410927472Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.5: active requests=0, bytes read=18295997" Sep 12 17:14:41.412539 containerd[1496]: time="2025-09-12T17:14:41.411992589Z" level=info msg="ImageCreate event name:\"sha256:5e600beaed8620718e0650dd2721266869ce1d737488c004a869333273e6ec15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:14:41.417001 containerd[1496]: time="2025-09-12T17:14:41.416911251Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:14:41.418590 containerd[1496]: time="2025-09-12T17:14:41.418516362Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.5\" with image id \"sha256:5e600beaed8620718e0650dd2721266869ce1d737488c004a869333273e6ec15\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\", size \"19883910\" in 1.127423366s" Sep 12 17:14:41.418590 containerd[1496]: time="2025-09-12T17:14:41.418584533Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:5e600beaed8620718e0650dd2721266869ce1d737488c004a869333273e6ec15\"" Sep 12 17:14:41.419722 containerd[1496]: time="2025-09-12T17:14:41.419647509Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\"" Sep 12 17:14:42.403969 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount394518253.mount: Deactivated successfully. 
Sep 12 17:14:42.778851 containerd[1496]: time="2025-09-12T17:14:42.778764527Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:14:42.780902 containerd[1496]: time="2025-09-12T17:14:42.780810983Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.5: active requests=0, bytes read=28240132" Sep 12 17:14:42.782144 containerd[1496]: time="2025-09-12T17:14:42.782095309Z" level=info msg="ImageCreate event name:\"sha256:021a8d45ab0c346664e47d95595ff5180ce90a22a681ea27904c65ae90788e70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:14:42.786571 containerd[1496]: time="2025-09-12T17:14:42.786476096Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:14:42.789287 containerd[1496]: time="2025-09-12T17:14:42.789224788Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.5\" with image id \"sha256:021a8d45ab0c346664e47d95595ff5180ce90a22a681ea27904c65ae90788e70\", repo tag \"registry.k8s.io/kube-proxy:v1.33.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\", size \"28239125\" in 1.369483776s" Sep 12 17:14:42.789287 containerd[1496]: time="2025-09-12T17:14:42.789296788Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:021a8d45ab0c346664e47d95595ff5180ce90a22a681ea27904c65ae90788e70\"" Sep 12 17:14:42.790175 containerd[1496]: time="2025-09-12T17:14:42.790142943Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Sep 12 17:14:43.163198 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Sep 12 17:14:43.173734 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:14:43.371781 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:14:43.384011 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3192391035.mount: Deactivated successfully. Sep 12 17:14:43.385915 (kubelet)[2031]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 17:14:43.459228 kubelet[2031]: E0912 17:14:43.458899 2031 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 17:14:43.464397 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 17:14:43.464585 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 17:14:43.467669 systemd[1]: kubelet.service: Consumed 208ms CPU time, 106.2M memory peak. 
Sep 12 17:14:44.247570 containerd[1496]: time="2025-09-12T17:14:44.247470389Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:14:44.249472 containerd[1496]: time="2025-09-12T17:14:44.249256721Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152209" Sep 12 17:14:44.253282 containerd[1496]: time="2025-09-12T17:14:44.251983059Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:14:44.257908 containerd[1496]: time="2025-09-12T17:14:44.256939632Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:14:44.262023 containerd[1496]: time="2025-09-12T17:14:44.261901724Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.4717041s" Sep 12 17:14:44.262023 containerd[1496]: time="2025-09-12T17:14:44.262006155Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\"" Sep 12 17:14:44.265588 containerd[1496]: time="2025-09-12T17:14:44.265504433Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 12 17:14:44.787983 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount86464100.mount: Deactivated successfully. 
Sep 12 17:14:44.799644 containerd[1496]: time="2025-09-12T17:14:44.799508390Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:14:44.800980 containerd[1496]: time="2025-09-12T17:14:44.800879412Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268723" Sep 12 17:14:44.802257 containerd[1496]: time="2025-09-12T17:14:44.801619978Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:14:44.805352 containerd[1496]: time="2025-09-12T17:14:44.805238434Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:14:44.807020 containerd[1496]: time="2025-09-12T17:14:44.806107033Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 540.517701ms" Sep 12 17:14:44.807020 containerd[1496]: time="2025-09-12T17:14:44.806164665Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Sep 12 17:14:44.807294 containerd[1496]: time="2025-09-12T17:14:44.807109665Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Sep 12 17:14:44.858645 update_engine[1478]: I20250912 17:14:44.858461 1478 update_attempter.cc:509] Updating boot flags... Sep 12 17:14:44.928353 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (2101) Sep 12 17:14:45.048860 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (2104) Sep 12 17:14:45.396743 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2994661551.mount: Deactivated successfully. 
Sep 12 17:14:46.943941 containerd[1496]: time="2025-09-12T17:14:46.943812295Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:14:46.947575 containerd[1496]: time="2025-09-12T17:14:46.946916636Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=69465913" Sep 12 17:14:46.950811 containerd[1496]: time="2025-09-12T17:14:46.950660679Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:14:46.958262 containerd[1496]: time="2025-09-12T17:14:46.958141931Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:14:46.960933 containerd[1496]: time="2025-09-12T17:14:46.960842361Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 2.153680816s" Sep 12 17:14:46.961690 containerd[1496]: time="2025-09-12T17:14:46.961259622Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\"" Sep 12 17:14:53.577048 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Sep 12 17:14:53.589407 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:14:53.596425 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 12 17:14:53.596893 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 12 17:14:53.597509 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:14:53.607851 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:14:53.659058 systemd[1]: Reload requested from client PID 2194 ('systemctl') (unit session-7.scope)... Sep 12 17:14:53.659096 systemd[1]: Reloading... Sep 12 17:14:53.864284 zram_generator::config[2242]: No configuration found. Sep 12 17:14:53.986219 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 17:14:54.086120 systemd[1]: Reloading finished in 426 ms. Sep 12 17:14:54.170619 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:14:54.174169 systemd[1]: kubelet.service: Deactivated successfully. Sep 12 17:14:54.175295 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:14:54.175395 systemd[1]: kubelet.service: Consumed 163ms CPU time, 94.3M memory peak. Sep 12 17:14:54.184814 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:14:54.363694 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
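The PullImage/ImageCreate sequence above corresponds to the standard kubeadm control-plane image set for this release line: kube-apiserver, kube-controller-manager, kube-scheduler and kube-proxy at v1.33.5, coredns v1.12.0, pause 3.10 and etcd 3.5.21-0, all from registry.k8s.io. On a comparable host the expected list and the images already present can be compared with (illustrative; assumes kubeadm and crictl are installed and crictl points at containerd's socket):

    kubeadm config images list --kubernetes-version v1.33.5    # prints the expected image set
    crictl images | grep registry.k8s.io                       # shows what containerd has pulled so far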
Sep 12 17:14:54.368047 (kubelet)[2289]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 12 17:14:54.429478 kubelet[2289]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 17:14:54.429478 kubelet[2289]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 12 17:14:54.429478 kubelet[2289]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 17:14:54.430169 kubelet[2289]: I0912 17:14:54.429524 2289 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 12 17:14:55.052848 kubelet[2289]: I0912 17:14:55.052767 2289 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 12 17:14:55.052848 kubelet[2289]: I0912 17:14:55.052823 2289 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 12 17:14:55.053328 kubelet[2289]: I0912 17:14:55.053303 2289 server.go:956] "Client rotation is on, will bootstrap in background" Sep 12 17:14:55.078435 kubelet[2289]: E0912 17:14:55.078322 2289 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://168.119.179.98:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 168.119.179.98:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Sep 12 17:14:55.080993 kubelet[2289]: I0912 17:14:55.080405 2289 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 17:14:55.101273 kubelet[2289]: E0912 17:14:55.101154 2289 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 12 17:14:55.101589 kubelet[2289]: I0912 17:14:55.101572 2289 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 12 17:14:55.105952 kubelet[2289]: I0912 17:14:55.105892 2289 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 12 17:14:55.109446 kubelet[2289]: I0912 17:14:55.108242 2289 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 12 17:14:55.109446 kubelet[2289]: I0912 17:14:55.108365 2289 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230-2-3-6-9297726d8a","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 12 17:14:55.109446 kubelet[2289]: I0912 17:14:55.108717 2289 topology_manager.go:138] "Creating topology manager with none policy" Sep 12 17:14:55.109446 kubelet[2289]: I0912 17:14:55.108730 2289 container_manager_linux.go:303] "Creating device plugin manager" Sep 12 17:14:55.109446 kubelet[2289]: I0912 17:14:55.109053 2289 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:14:55.114528 kubelet[2289]: I0912 17:14:55.114453 2289 kubelet.go:480] "Attempting to sync node with API server" Sep 12 17:14:55.114880 kubelet[2289]: I0912 17:14:55.114866 2289 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 12 17:14:55.114999 kubelet[2289]: I0912 17:14:55.114988 2289 kubelet.go:386] "Adding apiserver pod source" Sep 12 17:14:55.116517 kubelet[2289]: I0912 17:14:55.116456 2289 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 12 17:14:55.122842 kubelet[2289]: E0912 17:14:55.122744 2289 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://168.119.179.98:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-2-3-6-9297726d8a&limit=500&resourceVersion=0\": dial tcp 168.119.179.98:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 12 17:14:55.124365 kubelet[2289]: E0912 17:14:55.124297 2289 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://168.119.179.98:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 168.119.179.98:6443: connect: connection refused" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 12 17:14:55.124577 kubelet[2289]: I0912 17:14:55.124550 2289 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Sep 12 17:14:55.125694 kubelet[2289]: I0912 17:14:55.125663 2289 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 12 17:14:55.125880 kubelet[2289]: W0912 17:14:55.125857 2289 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 12 17:14:55.132277 kubelet[2289]: I0912 17:14:55.132232 2289 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 12 17:14:55.132515 kubelet[2289]: I0912 17:14:55.132358 2289 server.go:1289] "Started kubelet" Sep 12 17:14:55.139089 kubelet[2289]: I0912 17:14:55.138596 2289 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 12 17:14:55.141764 kubelet[2289]: E0912 17:14:55.140257 2289 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://168.119.179.98:6443/api/v1/namespaces/default/events\": dial tcp 168.119.179.98:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4230-2-3-6-9297726d8a.18649860930614af default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230-2-3-6-9297726d8a,UID:ci-4230-2-3-6-9297726d8a,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230-2-3-6-9297726d8a,},FirstTimestamp:2025-09-12 17:14:55.132267695 +0000 UTC m=+0.757499064,LastTimestamp:2025-09-12 17:14:55.132267695 +0000 UTC m=+0.757499064,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230-2-3-6-9297726d8a,}" Sep 12 17:14:55.144307 kubelet[2289]: I0912 17:14:55.142992 2289 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 12 17:14:55.145700 kubelet[2289]: I0912 17:14:55.145659 2289 server.go:317] "Adding debug handlers to kubelet server" Sep 12 17:14:55.150432 kubelet[2289]: I0912 17:14:55.150366 2289 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 12 17:14:55.153912 kubelet[2289]: I0912 17:14:55.150768 2289 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 12 17:14:55.157298 kubelet[2289]: E0912 17:14:55.151664 2289 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230-2-3-6-9297726d8a\" not found" Sep 12 17:14:55.157298 kubelet[2289]: I0912 17:14:55.154550 2289 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 12 17:14:55.159241 kubelet[2289]: I0912 17:14:55.155285 2289 reconciler.go:26] "Reconciler: start to sync state" Sep 12 17:14:55.159241 kubelet[2289]: I0912 17:14:55.155374 2289 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 12 17:14:55.159241 kubelet[2289]: E0912 17:14:55.159002 2289 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://168.119.179.98:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-2-3-6-9297726d8a?timeout=10s\": dial tcp 168.119.179.98:6443: connect: connection refused" interval="200ms" Sep 12 
17:14:55.160317 kubelet[2289]: I0912 17:14:55.159939 2289 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 12 17:14:55.160974 kubelet[2289]: E0912 17:14:55.160932 2289 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://168.119.179.98:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 168.119.179.98:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 12 17:14:55.161882 kubelet[2289]: I0912 17:14:55.161445 2289 factory.go:223] Registration of the systemd container factory successfully Sep 12 17:14:55.161882 kubelet[2289]: I0912 17:14:55.161619 2289 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 12 17:14:55.164977 kubelet[2289]: E0912 17:14:55.164923 2289 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 12 17:14:55.165570 kubelet[2289]: I0912 17:14:55.165543 2289 factory.go:223] Registration of the containerd container factory successfully Sep 12 17:14:55.195849 kubelet[2289]: I0912 17:14:55.195795 2289 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 12 17:14:55.195849 kubelet[2289]: I0912 17:14:55.195829 2289 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 12 17:14:55.195849 kubelet[2289]: I0912 17:14:55.195866 2289 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:14:55.200801 kubelet[2289]: I0912 17:14:55.200501 2289 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 12 17:14:55.204908 kubelet[2289]: I0912 17:14:55.204852 2289 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 12 17:14:55.205801 kubelet[2289]: I0912 17:14:55.205173 2289 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 12 17:14:55.205801 kubelet[2289]: I0912 17:14:55.205311 2289 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 12 17:14:55.205801 kubelet[2289]: I0912 17:14:55.205325 2289 kubelet.go:2436] "Starting kubelet main sync loop" Sep 12 17:14:55.205801 kubelet[2289]: E0912 17:14:55.205396 2289 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 12 17:14:55.210131 kubelet[2289]: I0912 17:14:55.210074 2289 policy_none.go:49] "None policy: Start" Sep 12 17:14:55.210483 kubelet[2289]: I0912 17:14:55.210466 2289 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 12 17:14:55.210580 kubelet[2289]: I0912 17:14:55.210571 2289 state_mem.go:35] "Initializing new in-memory state store" Sep 12 17:14:55.212450 kubelet[2289]: E0912 17:14:55.212293 2289 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://168.119.179.98:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 168.119.179.98:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 12 17:14:55.220732 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
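All of the "connect: connection refused" errors against https://168.119.179.98:6443 above are expected at this point: this kubelet is itself about to start the kube-apiserver as a static pod from /etc/kubernetes/manifests (the "Adding static pod path" entry), so nothing is listening on 6443 yet and the watches, lease updates and node registration keep retrying. A way to watch this phase on a comparable host (illustrative, not taken from this log):

    ls /etc/kubernetes/manifests/                   # the control-plane static-pod manifests
    crictl ps -a                                    # static-pod containers appear here before the API is reachable
    curl -sk https://168.119.179.98:6443/healthz    # fails until the kube-apiserver static pod is up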
Sep 12 17:14:55.238541 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 12 17:14:55.245391 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 12 17:14:55.256438 kubelet[2289]: E0912 17:14:55.256374 2289 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 12 17:14:55.258164 kubelet[2289]: I0912 17:14:55.257504 2289 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 12 17:14:55.258164 kubelet[2289]: E0912 17:14:55.257553 2289 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230-2-3-6-9297726d8a\" not found" Sep 12 17:14:55.258164 kubelet[2289]: I0912 17:14:55.257550 2289 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 12 17:14:55.259420 kubelet[2289]: I0912 17:14:55.259117 2289 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 12 17:14:55.262128 kubelet[2289]: E0912 17:14:55.262075 2289 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 12 17:14:55.262474 kubelet[2289]: E0912 17:14:55.262153 2289 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4230-2-3-6-9297726d8a\" not found" Sep 12 17:14:55.330838 systemd[1]: Created slice kubepods-burstable-pod28c935feed4027c4ff640f80bfcbead3.slice - libcontainer container kubepods-burstable-pod28c935feed4027c4ff640f80bfcbead3.slice. Sep 12 17:14:55.348645 kubelet[2289]: E0912 17:14:55.348549 2289 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-2-3-6-9297726d8a\" not found" node="ci-4230-2-3-6-9297726d8a" Sep 12 17:14:55.355181 systemd[1]: Created slice kubepods-burstable-podf18fb4cf9a2e4903396bd7315bc07717.slice - libcontainer container kubepods-burstable-podf18fb4cf9a2e4903396bd7315bc07717.slice. 
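Because the kubelet runs with the systemd cgroup driver (see the "CgroupDriver":"systemd" nodeConfig above, cgroup v2), it asks systemd for one slice per QoS class (kubepods-burstable.slice, kubepods-besteffort.slice) and one per pod, with the pod UID embedded in the slice name; the kubepods-burstable-pod28c935feed4027c4ff640f80bfcbead3.slice created above belongs to the kube-apiserver static pod whose UID also appears in the volume entries that follow. On a comparable host (illustrative):

    systemctl status kubepods.slice --no-pager    # parent slice for all pods
    ls /sys/fs/cgroup/kubepods.slice/             # per-QoS and per-pod child slices on cgroup v2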
Sep 12 17:14:55.359517 kubelet[2289]: I0912 17:14:55.359443 2289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/28c935feed4027c4ff640f80bfcbead3-ca-certs\") pod \"kube-apiserver-ci-4230-2-3-6-9297726d8a\" (UID: \"28c935feed4027c4ff640f80bfcbead3\") " pod="kube-system/kube-apiserver-ci-4230-2-3-6-9297726d8a" Sep 12 17:14:55.359517 kubelet[2289]: I0912 17:14:55.359495 2289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/28c935feed4027c4ff640f80bfcbead3-k8s-certs\") pod \"kube-apiserver-ci-4230-2-3-6-9297726d8a\" (UID: \"28c935feed4027c4ff640f80bfcbead3\") " pod="kube-system/kube-apiserver-ci-4230-2-3-6-9297726d8a" Sep 12 17:14:55.359765 kubelet[2289]: I0912 17:14:55.359541 2289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/28c935feed4027c4ff640f80bfcbead3-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230-2-3-6-9297726d8a\" (UID: \"28c935feed4027c4ff640f80bfcbead3\") " pod="kube-system/kube-apiserver-ci-4230-2-3-6-9297726d8a" Sep 12 17:14:55.359765 kubelet[2289]: I0912 17:14:55.359568 2289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f18fb4cf9a2e4903396bd7315bc07717-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230-2-3-6-9297726d8a\" (UID: \"f18fb4cf9a2e4903396bd7315bc07717\") " pod="kube-system/kube-controller-manager-ci-4230-2-3-6-9297726d8a" Sep 12 17:14:55.359765 kubelet[2289]: I0912 17:14:55.359625 2289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f18fb4cf9a2e4903396bd7315bc07717-ca-certs\") pod \"kube-controller-manager-ci-4230-2-3-6-9297726d8a\" (UID: \"f18fb4cf9a2e4903396bd7315bc07717\") " pod="kube-system/kube-controller-manager-ci-4230-2-3-6-9297726d8a" Sep 12 17:14:55.359765 kubelet[2289]: I0912 17:14:55.359653 2289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f18fb4cf9a2e4903396bd7315bc07717-flexvolume-dir\") pod \"kube-controller-manager-ci-4230-2-3-6-9297726d8a\" (UID: \"f18fb4cf9a2e4903396bd7315bc07717\") " pod="kube-system/kube-controller-manager-ci-4230-2-3-6-9297726d8a" Sep 12 17:14:55.359765 kubelet[2289]: I0912 17:14:55.359676 2289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f18fb4cf9a2e4903396bd7315bc07717-k8s-certs\") pod \"kube-controller-manager-ci-4230-2-3-6-9297726d8a\" (UID: \"f18fb4cf9a2e4903396bd7315bc07717\") " pod="kube-system/kube-controller-manager-ci-4230-2-3-6-9297726d8a" Sep 12 17:14:55.359991 kubelet[2289]: I0912 17:14:55.359693 2289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f18fb4cf9a2e4903396bd7315bc07717-kubeconfig\") pod \"kube-controller-manager-ci-4230-2-3-6-9297726d8a\" (UID: \"f18fb4cf9a2e4903396bd7315bc07717\") " pod="kube-system/kube-controller-manager-ci-4230-2-3-6-9297726d8a" Sep 12 17:14:55.359991 kubelet[2289]: I0912 17:14:55.359713 2289 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3a913359766ec6f17dcf1421499ff892-kubeconfig\") pod \"kube-scheduler-ci-4230-2-3-6-9297726d8a\" (UID: \"3a913359766ec6f17dcf1421499ff892\") " pod="kube-system/kube-scheduler-ci-4230-2-3-6-9297726d8a" Sep 12 17:14:55.361453 kubelet[2289]: E0912 17:14:55.360981 2289 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-2-3-6-9297726d8a\" not found" node="ci-4230-2-3-6-9297726d8a" Sep 12 17:14:55.361453 kubelet[2289]: E0912 17:14:55.361355 2289 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://168.119.179.98:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-2-3-6-9297726d8a?timeout=10s\": dial tcp 168.119.179.98:6443: connect: connection refused" interval="400ms" Sep 12 17:14:55.361751 kubelet[2289]: I0912 17:14:55.361711 2289 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230-2-3-6-9297726d8a" Sep 12 17:14:55.363842 kubelet[2289]: E0912 17:14:55.363751 2289 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://168.119.179.98:6443/api/v1/nodes\": dial tcp 168.119.179.98:6443: connect: connection refused" node="ci-4230-2-3-6-9297726d8a" Sep 12 17:14:55.366409 systemd[1]: Created slice kubepods-burstable-pod3a913359766ec6f17dcf1421499ff892.slice - libcontainer container kubepods-burstable-pod3a913359766ec6f17dcf1421499ff892.slice. Sep 12 17:14:55.369461 kubelet[2289]: E0912 17:14:55.369397 2289 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-2-3-6-9297726d8a\" not found" node="ci-4230-2-3-6-9297726d8a" Sep 12 17:14:55.568400 kubelet[2289]: I0912 17:14:55.568321 2289 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230-2-3-6-9297726d8a" Sep 12 17:14:55.569412 kubelet[2289]: E0912 17:14:55.569115 2289 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://168.119.179.98:6443/api/v1/nodes\": dial tcp 168.119.179.98:6443: connect: connection refused" node="ci-4230-2-3-6-9297726d8a" Sep 12 17:14:55.652909 containerd[1496]: time="2025-09-12T17:14:55.652644696Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230-2-3-6-9297726d8a,Uid:28c935feed4027c4ff640f80bfcbead3,Namespace:kube-system,Attempt:0,}" Sep 12 17:14:55.663508 containerd[1496]: time="2025-09-12T17:14:55.663404021Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230-2-3-6-9297726d8a,Uid:f18fb4cf9a2e4903396bd7315bc07717,Namespace:kube-system,Attempt:0,}" Sep 12 17:14:55.672258 containerd[1496]: time="2025-09-12T17:14:55.671782312Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230-2-3-6-9297726d8a,Uid:3a913359766ec6f17dcf1421499ff892,Namespace:kube-system,Attempt:0,}" Sep 12 17:14:55.763390 kubelet[2289]: E0912 17:14:55.763302 2289 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://168.119.179.98:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-2-3-6-9297726d8a?timeout=10s\": dial tcp 168.119.179.98:6443: connect: connection refused" interval="800ms" Sep 12 17:14:55.972938 kubelet[2289]: I0912 17:14:55.972857 2289 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230-2-3-6-9297726d8a" Sep 12 17:14:55.973587 kubelet[2289]: E0912 17:14:55.973514 2289 
kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://168.119.179.98:6443/api/v1/nodes\": dial tcp 168.119.179.98:6443: connect: connection refused" node="ci-4230-2-3-6-9297726d8a" Sep 12 17:14:56.200456 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1630143585.mount: Deactivated successfully. Sep 12 17:14:56.212594 containerd[1496]: time="2025-09-12T17:14:56.211352104Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:14:56.214261 containerd[1496]: time="2025-09-12T17:14:56.213686405Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:14:56.216009 containerd[1496]: time="2025-09-12T17:14:56.215928579Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269193" Sep 12 17:14:56.217081 containerd[1496]: time="2025-09-12T17:14:56.216789327Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 12 17:14:56.219538 containerd[1496]: time="2025-09-12T17:14:56.219464379Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:14:56.221739 containerd[1496]: time="2025-09-12T17:14:56.221462015Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 12 17:14:56.221739 containerd[1496]: time="2025-09-12T17:14:56.221648593Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:14:56.225449 containerd[1496]: time="2025-09-12T17:14:56.225186715Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:14:56.229240 containerd[1496]: time="2025-09-12T17:14:56.227869537Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 564.328824ms" Sep 12 17:14:56.230120 containerd[1496]: time="2025-09-12T17:14:56.230054913Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 577.206699ms" Sep 12 17:14:56.234923 containerd[1496]: time="2025-09-12T17:14:56.234841839Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 
562.896595ms" Sep 12 17:14:56.423185 containerd[1496]: time="2025-09-12T17:14:56.422862419Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:14:56.423185 containerd[1496]: time="2025-09-12T17:14:56.423032894Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:14:56.423185 containerd[1496]: time="2025-09-12T17:14:56.423049717Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:14:56.423572 containerd[1496]: time="2025-09-12T17:14:56.423282078Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:14:56.429240 containerd[1496]: time="2025-09-12T17:14:56.428329563Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:14:56.429240 containerd[1496]: time="2025-09-12T17:14:56.428443280Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:14:56.429240 containerd[1496]: time="2025-09-12T17:14:56.428457540Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:14:56.433255 containerd[1496]: time="2025-09-12T17:14:56.430914771Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:14:56.433255 containerd[1496]: time="2025-09-12T17:14:56.431023241Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:14:56.433255 containerd[1496]: time="2025-09-12T17:14:56.431044910Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:14:56.433255 containerd[1496]: time="2025-09-12T17:14:56.431192033Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:14:56.433583 containerd[1496]: time="2025-09-12T17:14:56.430040845Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:14:56.471949 systemd[1]: Started cri-containerd-a6e605c4761181ccf6ec2224216a54e4210ec28e2f38f3bd2dc308cfae9a8915.scope - libcontainer container a6e605c4761181ccf6ec2224216a54e4210ec28e2f38f3bd2dc308cfae9a8915. Sep 12 17:14:56.484047 systemd[1]: Started cri-containerd-403161e05deaca108245b736e87aa5fc845b6052d7c2b85b9413fdd9f9700505.scope - libcontainer container 403161e05deaca108245b736e87aa5fc845b6052d7c2b85b9413fdd9f9700505. Sep 12 17:14:56.488667 systemd[1]: Started cri-containerd-c8d5c08535ea12743b9d96b65a53a3a1f354354d7b46d1ac22df83654acd84a8.scope - libcontainer container c8d5c08535ea12743b9d96b65a53a3a1f354354d7b46d1ac22df83654acd84a8. 
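The three pause:3.8 pulls above (one per sandbox being created) come from containerd's CRI plugin, which uses its own configured sandbox image for RunPodSandbox; this is distinct from the pause:3.10 image pre-pulled earlier with the kubeadm image set. On a comparable host the configured sandbox image can be read with (illustrative, not taken from this log):

    containerd config dump | grep sandbox_image    # e.g. sandbox_image = "registry.k8s.io/pause:3.8"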
Sep 12 17:14:56.507022 kubelet[2289]: E0912 17:14:56.506948 2289 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://168.119.179.98:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-2-3-6-9297726d8a&limit=500&resourceVersion=0\": dial tcp 168.119.179.98:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 12 17:14:56.564414 kubelet[2289]: E0912 17:14:56.564277 2289 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://168.119.179.98:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-2-3-6-9297726d8a?timeout=10s\": dial tcp 168.119.179.98:6443: connect: connection refused" interval="1.6s" Sep 12 17:14:56.572286 containerd[1496]: time="2025-09-12T17:14:56.572021292Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230-2-3-6-9297726d8a,Uid:28c935feed4027c4ff640f80bfcbead3,Namespace:kube-system,Attempt:0,} returns sandbox id \"a6e605c4761181ccf6ec2224216a54e4210ec28e2f38f3bd2dc308cfae9a8915\"" Sep 12 17:14:56.588362 containerd[1496]: time="2025-09-12T17:14:56.587905972Z" level=info msg="CreateContainer within sandbox \"a6e605c4761181ccf6ec2224216a54e4210ec28e2f38f3bd2dc308cfae9a8915\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 12 17:14:56.600941 containerd[1496]: time="2025-09-12T17:14:56.600297752Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230-2-3-6-9297726d8a,Uid:f18fb4cf9a2e4903396bd7315bc07717,Namespace:kube-system,Attempt:0,} returns sandbox id \"403161e05deaca108245b736e87aa5fc845b6052d7c2b85b9413fdd9f9700505\"" Sep 12 17:14:56.603104 containerd[1496]: time="2025-09-12T17:14:56.603009734Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230-2-3-6-9297726d8a,Uid:3a913359766ec6f17dcf1421499ff892,Namespace:kube-system,Attempt:0,} returns sandbox id \"c8d5c08535ea12743b9d96b65a53a3a1f354354d7b46d1ac22df83654acd84a8\"" Sep 12 17:14:56.612311 kubelet[2289]: E0912 17:14:56.612223 2289 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://168.119.179.98:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 168.119.179.98:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 12 17:14:56.614503 containerd[1496]: time="2025-09-12T17:14:56.614078248Z" level=info msg="CreateContainer within sandbox \"c8d5c08535ea12743b9d96b65a53a3a1f354354d7b46d1ac22df83654acd84a8\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 12 17:14:56.616571 containerd[1496]: time="2025-09-12T17:14:56.616501993Z" level=info msg="CreateContainer within sandbox \"403161e05deaca108245b736e87aa5fc845b6052d7c2b85b9413fdd9f9700505\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 12 17:14:56.621865 containerd[1496]: time="2025-09-12T17:14:56.621752879Z" level=info msg="CreateContainer within sandbox \"a6e605c4761181ccf6ec2224216a54e4210ec28e2f38f3bd2dc308cfae9a8915\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"86526ceaacd044a988acbabbfa77cf55ec41cf061010bb1ec53aa15f7d572f83\"" Sep 12 17:14:56.623631 containerd[1496]: time="2025-09-12T17:14:56.623540987Z" level=info msg="StartContainer for \"86526ceaacd044a988acbabbfa77cf55ec41cf061010bb1ec53aa15f7d572f83\"" Sep 12 17:14:56.651318 containerd[1496]: 
time="2025-09-12T17:14:56.651187177Z" level=info msg="CreateContainer within sandbox \"403161e05deaca108245b736e87aa5fc845b6052d7c2b85b9413fdd9f9700505\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"f1fd4f5f47f69db12876c08e671c90ecb143cae9d680f49f21f8358df5ad3686\"" Sep 12 17:14:56.653168 containerd[1496]: time="2025-09-12T17:14:56.653092607Z" level=info msg="StartContainer for \"f1fd4f5f47f69db12876c08e671c90ecb143cae9d680f49f21f8358df5ad3686\"" Sep 12 17:14:56.666142 containerd[1496]: time="2025-09-12T17:14:56.666045121Z" level=info msg="CreateContainer within sandbox \"c8d5c08535ea12743b9d96b65a53a3a1f354354d7b46d1ac22df83654acd84a8\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"6fed9c448a2ca1f91d613c6532f5470f8a6bf9a6171edca97d56df0006b374d1\"" Sep 12 17:14:56.668130 containerd[1496]: time="2025-09-12T17:14:56.668064267Z" level=info msg="StartContainer for \"6fed9c448a2ca1f91d613c6532f5470f8a6bf9a6171edca97d56df0006b374d1\"" Sep 12 17:14:56.669971 systemd[1]: Started cri-containerd-86526ceaacd044a988acbabbfa77cf55ec41cf061010bb1ec53aa15f7d572f83.scope - libcontainer container 86526ceaacd044a988acbabbfa77cf55ec41cf061010bb1ec53aa15f7d572f83. Sep 12 17:14:56.698827 kubelet[2289]: E0912 17:14:56.698702 2289 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://168.119.179.98:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 168.119.179.98:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 12 17:14:56.720830 systemd[1]: Started cri-containerd-6fed9c448a2ca1f91d613c6532f5470f8a6bf9a6171edca97d56df0006b374d1.scope - libcontainer container 6fed9c448a2ca1f91d613c6532f5470f8a6bf9a6171edca97d56df0006b374d1. Sep 12 17:14:56.740969 systemd[1]: Started cri-containerd-f1fd4f5f47f69db12876c08e671c90ecb143cae9d680f49f21f8358df5ad3686.scope - libcontainer container f1fd4f5f47f69db12876c08e671c90ecb143cae9d680f49f21f8358df5ad3686. 
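The CreateContainer/StartContainer pairs above place one container in each of the three sandboxes created earlier: 86526cea… for kube-apiserver, f1fd4f5f… for kube-controller-manager and 6fed9c44… for kube-scheduler. Once the "StartContainer … returns successfully" entries that follow appear, the containers can be inspected directly through the CRI on a comparable host (illustrative):

    crictl ps --name kube-apiserver       # running control-plane container and its ID
    crictl logs --tail 20 86526ceaacd044a988acbabbfa77cf55ec41cf061010bb1ec53aa15f7d572f83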
Sep 12 17:14:56.780034 kubelet[2289]: I0912 17:14:56.779099 2289 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230-2-3-6-9297726d8a" Sep 12 17:14:56.782488 kubelet[2289]: E0912 17:14:56.782412 2289 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://168.119.179.98:6443/api/v1/nodes\": dial tcp 168.119.179.98:6443: connect: connection refused" node="ci-4230-2-3-6-9297726d8a" Sep 12 17:14:56.783238 containerd[1496]: time="2025-09-12T17:14:56.783161015Z" level=info msg="StartContainer for \"86526ceaacd044a988acbabbfa77cf55ec41cf061010bb1ec53aa15f7d572f83\" returns successfully" Sep 12 17:14:56.803578 kubelet[2289]: E0912 17:14:56.803391 2289 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://168.119.179.98:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 168.119.179.98:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 12 17:14:56.845175 containerd[1496]: time="2025-09-12T17:14:56.844885152Z" level=info msg="StartContainer for \"f1fd4f5f47f69db12876c08e671c90ecb143cae9d680f49f21f8358df5ad3686\" returns successfully" Sep 12 17:14:56.854248 containerd[1496]: time="2025-09-12T17:14:56.852786055Z" level=info msg="StartContainer for \"6fed9c448a2ca1f91d613c6532f5470f8a6bf9a6171edca97d56df0006b374d1\" returns successfully" Sep 12 17:14:57.229471 kubelet[2289]: E0912 17:14:57.229403 2289 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-2-3-6-9297726d8a\" not found" node="ci-4230-2-3-6-9297726d8a" Sep 12 17:14:57.236368 kubelet[2289]: E0912 17:14:57.236307 2289 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-2-3-6-9297726d8a\" not found" node="ci-4230-2-3-6-9297726d8a" Sep 12 17:14:57.243384 kubelet[2289]: E0912 17:14:57.243322 2289 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-2-3-6-9297726d8a\" not found" node="ci-4230-2-3-6-9297726d8a" Sep 12 17:14:58.245689 kubelet[2289]: E0912 17:14:58.245631 2289 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-2-3-6-9297726d8a\" not found" node="ci-4230-2-3-6-9297726d8a" Sep 12 17:14:58.247091 kubelet[2289]: E0912 17:14:58.247045 2289 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-2-3-6-9297726d8a\" not found" node="ci-4230-2-3-6-9297726d8a" Sep 12 17:14:58.386247 kubelet[2289]: I0912 17:14:58.386180 2289 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230-2-3-6-9297726d8a" Sep 12 17:14:59.507262 kubelet[2289]: E0912 17:14:59.507117 2289 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4230-2-3-6-9297726d8a\" not found" node="ci-4230-2-3-6-9297726d8a" Sep 12 17:14:59.622019 kubelet[2289]: I0912 17:14:59.621888 2289 kubelet_node_status.go:78] "Successfully registered node" node="ci-4230-2-3-6-9297726d8a" Sep 12 17:14:59.653534 kubelet[2289]: I0912 17:14:59.653429 2289 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4230-2-3-6-9297726d8a" Sep 12 17:14:59.687172 kubelet[2289]: E0912 17:14:59.687078 2289 kubelet.go:3311] "Failed creating a mirror pod" err="pods 
\"kube-controller-manager-ci-4230-2-3-6-9297726d8a\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4230-2-3-6-9297726d8a" Sep 12 17:14:59.687172 kubelet[2289]: I0912 17:14:59.687154 2289 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230-2-3-6-9297726d8a" Sep 12 17:14:59.693411 kubelet[2289]: E0912 17:14:59.693326 2289 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4230-2-3-6-9297726d8a\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4230-2-3-6-9297726d8a" Sep 12 17:14:59.693411 kubelet[2289]: I0912 17:14:59.693403 2289 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230-2-3-6-9297726d8a" Sep 12 17:14:59.700224 kubelet[2289]: E0912 17:14:59.700058 2289 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4230-2-3-6-9297726d8a\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4230-2-3-6-9297726d8a" Sep 12 17:15:00.126681 kubelet[2289]: I0912 17:15:00.126589 2289 apiserver.go:52] "Watching apiserver" Sep 12 17:15:00.157987 kubelet[2289]: I0912 17:15:00.157914 2289 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 12 17:15:00.311923 kubelet[2289]: I0912 17:15:00.311866 2289 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4230-2-3-6-9297726d8a" Sep 12 17:15:01.532077 kubelet[2289]: I0912 17:15:01.531942 2289 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230-2-3-6-9297726d8a" Sep 12 17:15:02.358623 systemd[1]: Reload requested from client PID 2578 ('systemctl') (unit session-7.scope)... Sep 12 17:15:02.359220 systemd[1]: Reloading... Sep 12 17:15:02.541274 zram_generator::config[2626]: No configuration found. Sep 12 17:15:02.674478 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 17:15:02.801602 systemd[1]: Reloading finished in 441 ms. Sep 12 17:15:02.839725 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:15:02.856724 systemd[1]: kubelet.service: Deactivated successfully. Sep 12 17:15:02.857723 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:15:02.857833 systemd[1]: kubelet.service: Consumed 1.413s CPU time, 129.1M memory peak. Sep 12 17:15:02.870702 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:15:03.087723 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:15:03.101061 (kubelet)[2668]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 12 17:15:03.161832 kubelet[2668]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 17:15:03.161832 kubelet[2668]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
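The "Failed creating a mirror pod … no PriorityClass with name system-node-critical was found" errors above are transient: the built-in priority classes are created automatically by the API server shortly after it comes up, so the kubelet's retries succeed, and the node itself registers at 17:14:59 ("Successfully registered node"). On a kubeadm-style install this can be confirmed with the admin kubeconfig (illustrative; /etc/kubernetes/admin.conf is the usual kubeadm location and is not shown in this log):

    kubectl --kubeconfig /etc/kubernetes/admin.conf get priorityclass system-node-critical
    kubectl --kubeconfig /etc/kubernetes/admin.conf get node ci-4230-2-3-6-9297726d8a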
Sep 12 17:15:03.161832 kubelet[2668]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 17:15:03.161832 kubelet[2668]: I0912 17:15:03.161381 2668 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 12 17:15:03.177318 kubelet[2668]: I0912 17:15:03.176004 2668 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 12 17:15:03.177318 kubelet[2668]: I0912 17:15:03.176064 2668 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 12 17:15:03.177318 kubelet[2668]: I0912 17:15:03.176678 2668 server.go:956] "Client rotation is on, will bootstrap in background" Sep 12 17:15:03.179589 kubelet[2668]: I0912 17:15:03.179536 2668 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Sep 12 17:15:03.185663 kubelet[2668]: I0912 17:15:03.185283 2668 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 17:15:03.192167 kubelet[2668]: E0912 17:15:03.192094 2668 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 12 17:15:03.192167 kubelet[2668]: I0912 17:15:03.192153 2668 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 12 17:15:03.197028 kubelet[2668]: I0912 17:15:03.196944 2668 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 12 17:15:03.197378 kubelet[2668]: I0912 17:15:03.197324 2668 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 12 17:15:03.197789 kubelet[2668]: I0912 17:15:03.197375 2668 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230-2-3-6-9297726d8a","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 12 17:15:03.197789 kubelet[2668]: I0912 17:15:03.197617 2668 topology_manager.go:138] "Creating topology manager with none policy" Sep 12 17:15:03.197789 kubelet[2668]: I0912 17:15:03.197628 2668 container_manager_linux.go:303] "Creating device plugin manager" Sep 12 17:15:03.197789 kubelet[2668]: I0912 17:15:03.197691 2668 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:15:03.203440 kubelet[2668]: I0912 17:15:03.203360 2668 kubelet.go:480] "Attempting to sync node with API server" Sep 12 17:15:03.203440 kubelet[2668]: I0912 17:15:03.203459 2668 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 12 17:15:03.203716 kubelet[2668]: I0912 17:15:03.203538 2668 kubelet.go:386] "Adding apiserver pod source" Sep 12 17:15:03.203716 kubelet[2668]: I0912 17:15:03.203558 2668 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 12 17:15:03.218266 kubelet[2668]: I0912 17:15:03.215913 2668 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Sep 12 17:15:03.218740 kubelet[2668]: I0912 17:15:03.218706 2668 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 12 17:15:03.230695 kubelet[2668]: I0912 17:15:03.230649 2668 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 12 17:15:03.233261 kubelet[2668]: I0912 17:15:03.232549 2668 server.go:1289] "Started kubelet" Sep 12 17:15:03.243944 kubelet[2668]: I0912 17:15:03.243823 2668 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 12 17:15:03.255232 kubelet[2668]: 
I0912 17:15:03.254539 2668 server.go:317] "Adding debug handlers to kubelet server" Sep 12 17:15:03.257327 kubelet[2668]: I0912 17:15:03.249452 2668 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 12 17:15:03.271492 kubelet[2668]: I0912 17:15:03.247827 2668 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 12 17:15:03.281480 kubelet[2668]: I0912 17:15:03.281417 2668 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 12 17:15:03.289436 kubelet[2668]: I0912 17:15:03.233252 2668 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 12 17:15:03.295996 kubelet[2668]: I0912 17:15:03.295926 2668 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 12 17:15:03.309150 kubelet[2668]: I0912 17:15:03.298594 2668 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 12 17:15:03.309497 kubelet[2668]: I0912 17:15:03.298622 2668 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 12 17:15:03.316143 kubelet[2668]: I0912 17:15:03.313100 2668 reconciler.go:26] "Reconciler: start to sync state" Sep 12 17:15:03.316143 kubelet[2668]: I0912 17:15:03.313097 2668 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 12 17:15:03.316143 kubelet[2668]: E0912 17:15:03.313408 2668 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 12 17:15:03.326721 kubelet[2668]: I0912 17:15:03.326563 2668 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 12 17:15:03.328296 kubelet[2668]: I0912 17:15:03.328265 2668 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 12 17:15:03.328447 kubelet[2668]: I0912 17:15:03.328434 2668 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Sep 12 17:15:03.328505 kubelet[2668]: I0912 17:15:03.328496 2668 kubelet.go:2436] "Starting kubelet main sync loop" Sep 12 17:15:03.328671 kubelet[2668]: E0912 17:15:03.328643 2668 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 12 17:15:03.331122 kubelet[2668]: I0912 17:15:03.331078 2668 factory.go:223] Registration of the containerd container factory successfully Sep 12 17:15:03.331373 kubelet[2668]: I0912 17:15:03.331359 2668 factory.go:223] Registration of the systemd container factory successfully Sep 12 17:15:03.368716 sudo[2705]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 12 17:15:03.370571 sudo[2705]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Sep 12 17:15:03.434533 kubelet[2668]: E0912 17:15:03.430083 2668 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 12 17:15:03.454195 kubelet[2668]: I0912 17:15:03.454016 2668 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 12 17:15:03.454195 kubelet[2668]: I0912 17:15:03.454054 2668 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 12 17:15:03.454195 kubelet[2668]: I0912 17:15:03.454094 2668 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:15:03.455114 kubelet[2668]: I0912 17:15:03.454537 2668 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 12 17:15:03.455114 kubelet[2668]: I0912 17:15:03.454562 2668 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 12 17:15:03.455114 kubelet[2668]: I0912 17:15:03.454603 2668 policy_none.go:49] "None policy: Start" Sep 12 17:15:03.455114 kubelet[2668]: I0912 17:15:03.454617 2668 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 12 17:15:03.455114 kubelet[2668]: I0912 17:15:03.454631 2668 state_mem.go:35] "Initializing new in-memory state store" Sep 12 17:15:03.455114 kubelet[2668]: I0912 17:15:03.454783 2668 state_mem.go:75] "Updated machine memory state" Sep 12 17:15:03.466957 kubelet[2668]: E0912 17:15:03.466889 2668 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 12 17:15:03.467285 kubelet[2668]: I0912 17:15:03.467259 2668 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 12 17:15:03.467363 kubelet[2668]: I0912 17:15:03.467294 2668 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 12 17:15:03.472022 kubelet[2668]: I0912 17:15:03.471984 2668 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 12 17:15:03.479464 kubelet[2668]: E0912 17:15:03.479192 2668 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 12 17:15:03.513967 systemd[1]: Started sshd@7-168.119.179.98:22-199.45.155.100:60162.service - OpenSSH per-connection server daemon (199.45.155.100:60162). 
Sep 12 17:15:03.602781 kubelet[2668]: I0912 17:15:03.598390 2668 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230-2-3-6-9297726d8a" Sep 12 17:15:03.620282 kubelet[2668]: I0912 17:15:03.619956 2668 kubelet_node_status.go:124] "Node was previously registered" node="ci-4230-2-3-6-9297726d8a" Sep 12 17:15:03.621879 kubelet[2668]: I0912 17:15:03.621286 2668 kubelet_node_status.go:78] "Successfully registered node" node="ci-4230-2-3-6-9297726d8a" Sep 12 17:15:03.632933 kubelet[2668]: I0912 17:15:03.632867 2668 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230-2-3-6-9297726d8a" Sep 12 17:15:03.638280 kubelet[2668]: I0912 17:15:03.637501 2668 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230-2-3-6-9297726d8a" Sep 12 17:15:03.638280 kubelet[2668]: I0912 17:15:03.637726 2668 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4230-2-3-6-9297726d8a" Sep 12 17:15:03.659928 kubelet[2668]: E0912 17:15:03.659781 2668 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4230-2-3-6-9297726d8a\" already exists" pod="kube-system/kube-apiserver-ci-4230-2-3-6-9297726d8a" Sep 12 17:15:03.662039 kubelet[2668]: E0912 17:15:03.661986 2668 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4230-2-3-6-9297726d8a\" already exists" pod="kube-system/kube-controller-manager-ci-4230-2-3-6-9297726d8a" Sep 12 17:15:03.718364 kubelet[2668]: I0912 17:15:03.718303 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f18fb4cf9a2e4903396bd7315bc07717-ca-certs\") pod \"kube-controller-manager-ci-4230-2-3-6-9297726d8a\" (UID: \"f18fb4cf9a2e4903396bd7315bc07717\") " pod="kube-system/kube-controller-manager-ci-4230-2-3-6-9297726d8a" Sep 12 17:15:03.718364 kubelet[2668]: I0912 17:15:03.718371 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/28c935feed4027c4ff640f80bfcbead3-ca-certs\") pod \"kube-apiserver-ci-4230-2-3-6-9297726d8a\" (UID: \"28c935feed4027c4ff640f80bfcbead3\") " pod="kube-system/kube-apiserver-ci-4230-2-3-6-9297726d8a" Sep 12 17:15:03.718678 kubelet[2668]: I0912 17:15:03.718397 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/28c935feed4027c4ff640f80bfcbead3-k8s-certs\") pod \"kube-apiserver-ci-4230-2-3-6-9297726d8a\" (UID: \"28c935feed4027c4ff640f80bfcbead3\") " pod="kube-system/kube-apiserver-ci-4230-2-3-6-9297726d8a" Sep 12 17:15:03.718678 kubelet[2668]: I0912 17:15:03.718421 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/28c935feed4027c4ff640f80bfcbead3-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230-2-3-6-9297726d8a\" (UID: \"28c935feed4027c4ff640f80bfcbead3\") " pod="kube-system/kube-apiserver-ci-4230-2-3-6-9297726d8a" Sep 12 17:15:03.718678 kubelet[2668]: I0912 17:15:03.718447 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f18fb4cf9a2e4903396bd7315bc07717-flexvolume-dir\") pod \"kube-controller-manager-ci-4230-2-3-6-9297726d8a\" (UID: 
\"f18fb4cf9a2e4903396bd7315bc07717\") " pod="kube-system/kube-controller-manager-ci-4230-2-3-6-9297726d8a" Sep 12 17:15:03.718678 kubelet[2668]: I0912 17:15:03.718467 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f18fb4cf9a2e4903396bd7315bc07717-k8s-certs\") pod \"kube-controller-manager-ci-4230-2-3-6-9297726d8a\" (UID: \"f18fb4cf9a2e4903396bd7315bc07717\") " pod="kube-system/kube-controller-manager-ci-4230-2-3-6-9297726d8a" Sep 12 17:15:03.718678 kubelet[2668]: I0912 17:15:03.718486 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f18fb4cf9a2e4903396bd7315bc07717-kubeconfig\") pod \"kube-controller-manager-ci-4230-2-3-6-9297726d8a\" (UID: \"f18fb4cf9a2e4903396bd7315bc07717\") " pod="kube-system/kube-controller-manager-ci-4230-2-3-6-9297726d8a" Sep 12 17:15:03.718796 kubelet[2668]: I0912 17:15:03.718504 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f18fb4cf9a2e4903396bd7315bc07717-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230-2-3-6-9297726d8a\" (UID: \"f18fb4cf9a2e4903396bd7315bc07717\") " pod="kube-system/kube-controller-manager-ci-4230-2-3-6-9297726d8a" Sep 12 17:15:03.718796 kubelet[2668]: I0912 17:15:03.718525 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3a913359766ec6f17dcf1421499ff892-kubeconfig\") pod \"kube-scheduler-ci-4230-2-3-6-9297726d8a\" (UID: \"3a913359766ec6f17dcf1421499ff892\") " pod="kube-system/kube-scheduler-ci-4230-2-3-6-9297726d8a" Sep 12 17:15:03.985739 sudo[2705]: pam_unix(sudo:session): session closed for user root Sep 12 17:15:04.205377 kubelet[2668]: I0912 17:15:04.204369 2668 apiserver.go:52] "Watching apiserver" Sep 12 17:15:04.310236 kubelet[2668]: I0912 17:15:04.310007 2668 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 12 17:15:04.381739 kubelet[2668]: I0912 17:15:04.381622 2668 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4230-2-3-6-9297726d8a" podStartSLOduration=3.3815854659999998 podStartE2EDuration="3.381585466s" podCreationTimestamp="2025-09-12 17:15:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:15:04.364887601 +0000 UTC m=+1.255926241" watchObservedRunningTime="2025-09-12 17:15:04.381585466 +0000 UTC m=+1.272624106" Sep 12 17:15:04.408247 kubelet[2668]: I0912 17:15:04.407623 2668 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230-2-3-6-9297726d8a" Sep 12 17:15:04.430222 kubelet[2668]: I0912 17:15:04.426056 2668 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4230-2-3-6-9297726d8a" podStartSLOduration=1.426030186 podStartE2EDuration="1.426030186s" podCreationTimestamp="2025-09-12 17:15:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:15:04.385298599 +0000 UTC m=+1.276337279" watchObservedRunningTime="2025-09-12 17:15:04.426030186 +0000 UTC m=+1.317068826" Sep 12 17:15:04.430222 
kubelet[2668]: E0912 17:15:04.429135 2668 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4230-2-3-6-9297726d8a\" already exists" pod="kube-system/kube-apiserver-ci-4230-2-3-6-9297726d8a" Sep 12 17:15:04.447759 kubelet[2668]: I0912 17:15:04.447651 2668 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4230-2-3-6-9297726d8a" podStartSLOduration=4.447617021 podStartE2EDuration="4.447617021s" podCreationTimestamp="2025-09-12 17:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:15:04.427509921 +0000 UTC m=+1.318548642" watchObservedRunningTime="2025-09-12 17:15:04.447617021 +0000 UTC m=+1.338655661" Sep 12 17:15:05.929742 sudo[1743]: pam_unix(sudo:session): session closed for user root Sep 12 17:15:06.091255 sshd[1742]: Connection closed by 139.178.68.195 port 57146 Sep 12 17:15:06.091642 sshd-session[1740]: pam_unix(sshd:session): session closed for user core Sep 12 17:15:06.098871 systemd[1]: sshd@6-168.119.179.98:22-139.178.68.195:57146.service: Deactivated successfully. Sep 12 17:15:06.104145 systemd[1]: session-7.scope: Deactivated successfully. Sep 12 17:15:06.105382 systemd[1]: session-7.scope: Consumed 8.744s CPU time, 263.6M memory peak. Sep 12 17:15:06.107693 systemd-logind[1477]: Session 7 logged out. Waiting for processes to exit. Sep 12 17:15:06.110578 systemd-logind[1477]: Removed session 7. Sep 12 17:15:06.444872 kubelet[2668]: I0912 17:15:06.444611 2668 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 12 17:15:06.447454 kubelet[2668]: I0912 17:15:06.447156 2668 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 12 17:15:06.448324 containerd[1496]: time="2025-09-12T17:15:06.446612812Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 12 17:15:07.243510 systemd[1]: Created slice kubepods-besteffort-pod3ac6d5cf_b522_4b16_8347_e41c52d386ae.slice - libcontainer container kubepods-besteffort-pod3ac6d5cf_b522_4b16_8347_e41c52d386ae.slice. Sep 12 17:15:07.293670 systemd[1]: Created slice kubepods-burstable-pod84c2cf6f_71dc_49a6_8f00_978ddfb08898.slice - libcontainer container kubepods-burstable-pod84c2cf6f_71dc_49a6_8f00_978ddfb08898.slice. 
Sep 12 17:15:07.351259 kubelet[2668]: I0912 17:15:07.349390 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/84c2cf6f-71dc-49a6-8f00-978ddfb08898-hostproc\") pod \"cilium-btwgg\" (UID: \"84c2cf6f-71dc-49a6-8f00-978ddfb08898\") " pod="kube-system/cilium-btwgg" Sep 12 17:15:07.351259 kubelet[2668]: I0912 17:15:07.349466 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/84c2cf6f-71dc-49a6-8f00-978ddfb08898-cni-path\") pod \"cilium-btwgg\" (UID: \"84c2cf6f-71dc-49a6-8f00-978ddfb08898\") " pod="kube-system/cilium-btwgg" Sep 12 17:15:07.351259 kubelet[2668]: I0912 17:15:07.349483 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/84c2cf6f-71dc-49a6-8f00-978ddfb08898-etc-cni-netd\") pod \"cilium-btwgg\" (UID: \"84c2cf6f-71dc-49a6-8f00-978ddfb08898\") " pod="kube-system/cilium-btwgg" Sep 12 17:15:07.351259 kubelet[2668]: I0912 17:15:07.349509 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/84c2cf6f-71dc-49a6-8f00-978ddfb08898-xtables-lock\") pod \"cilium-btwgg\" (UID: \"84c2cf6f-71dc-49a6-8f00-978ddfb08898\") " pod="kube-system/cilium-btwgg" Sep 12 17:15:07.351259 kubelet[2668]: I0912 17:15:07.349531 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/84c2cf6f-71dc-49a6-8f00-978ddfb08898-clustermesh-secrets\") pod \"cilium-btwgg\" (UID: \"84c2cf6f-71dc-49a6-8f00-978ddfb08898\") " pod="kube-system/cilium-btwgg" Sep 12 17:15:07.351259 kubelet[2668]: I0912 17:15:07.349549 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/84c2cf6f-71dc-49a6-8f00-978ddfb08898-cilium-config-path\") pod \"cilium-btwgg\" (UID: \"84c2cf6f-71dc-49a6-8f00-978ddfb08898\") " pod="kube-system/cilium-btwgg" Sep 12 17:15:07.351632 kubelet[2668]: I0912 17:15:07.349597 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/84c2cf6f-71dc-49a6-8f00-978ddfb08898-lib-modules\") pod \"cilium-btwgg\" (UID: \"84c2cf6f-71dc-49a6-8f00-978ddfb08898\") " pod="kube-system/cilium-btwgg" Sep 12 17:15:07.351632 kubelet[2668]: I0912 17:15:07.349614 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/84c2cf6f-71dc-49a6-8f00-978ddfb08898-host-proc-sys-kernel\") pod \"cilium-btwgg\" (UID: \"84c2cf6f-71dc-49a6-8f00-978ddfb08898\") " pod="kube-system/cilium-btwgg" Sep 12 17:15:07.351632 kubelet[2668]: I0912 17:15:07.349634 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3ac6d5cf-b522-4b16-8347-e41c52d386ae-kube-proxy\") pod \"kube-proxy-n9qs5\" (UID: \"3ac6d5cf-b522-4b16-8347-e41c52d386ae\") " pod="kube-system/kube-proxy-n9qs5" Sep 12 17:15:07.351632 kubelet[2668]: I0912 17:15:07.349650 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/84c2cf6f-71dc-49a6-8f00-978ddfb08898-cilium-cgroup\") pod \"cilium-btwgg\" (UID: \"84c2cf6f-71dc-49a6-8f00-978ddfb08898\") " pod="kube-system/cilium-btwgg" Sep 12 17:15:07.351632 kubelet[2668]: I0912 17:15:07.349705 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/84c2cf6f-71dc-49a6-8f00-978ddfb08898-host-proc-sys-net\") pod \"cilium-btwgg\" (UID: \"84c2cf6f-71dc-49a6-8f00-978ddfb08898\") " pod="kube-system/cilium-btwgg" Sep 12 17:15:07.351632 kubelet[2668]: I0912 17:15:07.349731 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/84c2cf6f-71dc-49a6-8f00-978ddfb08898-hubble-tls\") pod \"cilium-btwgg\" (UID: \"84c2cf6f-71dc-49a6-8f00-978ddfb08898\") " pod="kube-system/cilium-btwgg" Sep 12 17:15:07.351989 kubelet[2668]: I0912 17:15:07.349755 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8bt2\" (UniqueName: \"kubernetes.io/projected/84c2cf6f-71dc-49a6-8f00-978ddfb08898-kube-api-access-r8bt2\") pod \"cilium-btwgg\" (UID: \"84c2cf6f-71dc-49a6-8f00-978ddfb08898\") " pod="kube-system/cilium-btwgg" Sep 12 17:15:07.351989 kubelet[2668]: I0912 17:15:07.349774 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3ac6d5cf-b522-4b16-8347-e41c52d386ae-lib-modules\") pod \"kube-proxy-n9qs5\" (UID: \"3ac6d5cf-b522-4b16-8347-e41c52d386ae\") " pod="kube-system/kube-proxy-n9qs5" Sep 12 17:15:07.351989 kubelet[2668]: I0912 17:15:07.349790 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6jwv\" (UniqueName: \"kubernetes.io/projected/3ac6d5cf-b522-4b16-8347-e41c52d386ae-kube-api-access-l6jwv\") pod \"kube-proxy-n9qs5\" (UID: \"3ac6d5cf-b522-4b16-8347-e41c52d386ae\") " pod="kube-system/kube-proxy-n9qs5" Sep 12 17:15:07.351989 kubelet[2668]: I0912 17:15:07.349808 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/84c2cf6f-71dc-49a6-8f00-978ddfb08898-cilium-run\") pod \"cilium-btwgg\" (UID: \"84c2cf6f-71dc-49a6-8f00-978ddfb08898\") " pod="kube-system/cilium-btwgg" Sep 12 17:15:07.351989 kubelet[2668]: I0912 17:15:07.349840 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3ac6d5cf-b522-4b16-8347-e41c52d386ae-xtables-lock\") pod \"kube-proxy-n9qs5\" (UID: \"3ac6d5cf-b522-4b16-8347-e41c52d386ae\") " pod="kube-system/kube-proxy-n9qs5" Sep 12 17:15:07.352096 kubelet[2668]: I0912 17:15:07.349865 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/84c2cf6f-71dc-49a6-8f00-978ddfb08898-bpf-maps\") pod \"cilium-btwgg\" (UID: \"84c2cf6f-71dc-49a6-8f00-978ddfb08898\") " pod="kube-system/cilium-btwgg" Sep 12 17:15:07.559345 containerd[1496]: time="2025-09-12T17:15:07.558502949Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-n9qs5,Uid:3ac6d5cf-b522-4b16-8347-e41c52d386ae,Namespace:kube-system,Attempt:0,}" Sep 12 17:15:07.598813 containerd[1496]: time="2025-09-12T17:15:07.597590467Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:15:07.598813 containerd[1496]: time="2025-09-12T17:15:07.597715892Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:15:07.598813 containerd[1496]: time="2025-09-12T17:15:07.597748959Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:15:07.598813 containerd[1496]: time="2025-09-12T17:15:07.597930309Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:15:07.607870 containerd[1496]: time="2025-09-12T17:15:07.607196760Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-btwgg,Uid:84c2cf6f-71dc-49a6-8f00-978ddfb08898,Namespace:kube-system,Attempt:0,}" Sep 12 17:15:07.626554 systemd[1]: Started cri-containerd-430c0f8941c9d4348a8af873baede38c34954e0ed16deaecd8bffd0f6de0c910.scope - libcontainer container 430c0f8941c9d4348a8af873baede38c34954e0ed16deaecd8bffd0f6de0c910. Sep 12 17:15:07.663052 containerd[1496]: time="2025-09-12T17:15:07.662892341Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:15:07.663052 containerd[1496]: time="2025-09-12T17:15:07.662988661Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:15:07.663348 containerd[1496]: time="2025-09-12T17:15:07.663003514Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:15:07.663348 containerd[1496]: time="2025-09-12T17:15:07.663147433Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:15:07.702608 systemd[1]: Started cri-containerd-faaef948cc03b13ea66467decdcbc8989ffe1433124cc12acdb62a6cdfb7312e.scope - libcontainer container faaef948cc03b13ea66467decdcbc8989ffe1433124cc12acdb62a6cdfb7312e. Sep 12 17:15:07.715806 systemd[1]: Created slice kubepods-besteffort-pod4491bb11_dbb1_464d_955a_f7ca7d7c4aab.slice - libcontainer container kubepods-besteffort-pod4491bb11_dbb1_464d_955a_f7ca7d7c4aab.slice. 
Sep 12 17:15:07.758252 kubelet[2668]: I0912 17:15:07.754155 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4vfs\" (UniqueName: \"kubernetes.io/projected/4491bb11-dbb1-464d-955a-f7ca7d7c4aab-kube-api-access-q4vfs\") pod \"cilium-operator-6c4d7847fc-m7tx2\" (UID: \"4491bb11-dbb1-464d-955a-f7ca7d7c4aab\") " pod="kube-system/cilium-operator-6c4d7847fc-m7tx2" Sep 12 17:15:07.758252 kubelet[2668]: I0912 17:15:07.756254 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4491bb11-dbb1-464d-955a-f7ca7d7c4aab-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-m7tx2\" (UID: \"4491bb11-dbb1-464d-955a-f7ca7d7c4aab\") " pod="kube-system/cilium-operator-6c4d7847fc-m7tx2" Sep 12 17:15:07.788452 containerd[1496]: time="2025-09-12T17:15:07.788377362Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-n9qs5,Uid:3ac6d5cf-b522-4b16-8347-e41c52d386ae,Namespace:kube-system,Attempt:0,} returns sandbox id \"430c0f8941c9d4348a8af873baede38c34954e0ed16deaecd8bffd0f6de0c910\"" Sep 12 17:15:07.814876 containerd[1496]: time="2025-09-12T17:15:07.814454523Z" level=info msg="CreateContainer within sandbox \"430c0f8941c9d4348a8af873baede38c34954e0ed16deaecd8bffd0f6de0c910\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 12 17:15:07.823751 containerd[1496]: time="2025-09-12T17:15:07.823407753Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-btwgg,Uid:84c2cf6f-71dc-49a6-8f00-978ddfb08898,Namespace:kube-system,Attempt:0,} returns sandbox id \"faaef948cc03b13ea66467decdcbc8989ffe1433124cc12acdb62a6cdfb7312e\"" Sep 12 17:15:07.828674 containerd[1496]: time="2025-09-12T17:15:07.828052568Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 12 17:15:07.865518 containerd[1496]: time="2025-09-12T17:15:07.864788855Z" level=info msg="CreateContainer within sandbox \"430c0f8941c9d4348a8af873baede38c34954e0ed16deaecd8bffd0f6de0c910\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"47153a1c2e3dcabfbb843bf34cc548b3de46216d2ed4cad0f77cfc6cb155ecf0\"" Sep 12 17:15:07.869246 containerd[1496]: time="2025-09-12T17:15:07.868580162Z" level=info msg="StartContainer for \"47153a1c2e3dcabfbb843bf34cc548b3de46216d2ed4cad0f77cfc6cb155ecf0\"" Sep 12 17:15:07.922558 systemd[1]: Started cri-containerd-47153a1c2e3dcabfbb843bf34cc548b3de46216d2ed4cad0f77cfc6cb155ecf0.scope - libcontainer container 47153a1c2e3dcabfbb843bf34cc548b3de46216d2ed4cad0f77cfc6cb155ecf0. Sep 12 17:15:07.969804 containerd[1496]: time="2025-09-12T17:15:07.968974519Z" level=info msg="StartContainer for \"47153a1c2e3dcabfbb843bf34cc548b3de46216d2ed4cad0f77cfc6cb155ecf0\" returns successfully" Sep 12 17:15:08.030077 containerd[1496]: time="2025-09-12T17:15:08.029258882Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-m7tx2,Uid:4491bb11-dbb1-464d-955a-f7ca7d7c4aab,Namespace:kube-system,Attempt:0,}" Sep 12 17:15:08.072727 containerd[1496]: time="2025-09-12T17:15:08.070063950Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:15:08.072727 containerd[1496]: time="2025-09-12T17:15:08.070784275Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:15:08.072727 containerd[1496]: time="2025-09-12T17:15:08.070825988Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:15:08.072727 containerd[1496]: time="2025-09-12T17:15:08.071055808Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:15:08.095569 systemd[1]: Started cri-containerd-d7431605835e3ddbe30319e38c959ddb6783e277c6916cdbda2902d46321cd2b.scope - libcontainer container d7431605835e3ddbe30319e38c959ddb6783e277c6916cdbda2902d46321cd2b. Sep 12 17:15:08.157638 containerd[1496]: time="2025-09-12T17:15:08.157266315Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-m7tx2,Uid:4491bb11-dbb1-464d-955a-f7ca7d7c4aab,Namespace:kube-system,Attempt:0,} returns sandbox id \"d7431605835e3ddbe30319e38c959ddb6783e277c6916cdbda2902d46321cd2b\"" Sep 12 17:15:08.443869 kubelet[2668]: I0912 17:15:08.443226 2668 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-n9qs5" podStartSLOduration=1.443175565 podStartE2EDuration="1.443175565s" podCreationTimestamp="2025-09-12 17:15:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:15:08.442882935 +0000 UTC m=+5.333921535" watchObservedRunningTime="2025-09-12 17:15:08.443175565 +0000 UTC m=+5.334214205" Sep 12 17:15:11.913235 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3408286389.mount: Deactivated successfully. Sep 12 17:15:13.421741 containerd[1496]: time="2025-09-12T17:15:13.421651897Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:15:13.424195 containerd[1496]: time="2025-09-12T17:15:13.424025878Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Sep 12 17:15:13.426186 containerd[1496]: time="2025-09-12T17:15:13.426088072Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:15:13.431023 containerd[1496]: time="2025-09-12T17:15:13.430942699Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 5.602632642s" Sep 12 17:15:13.431023 containerd[1496]: time="2025-09-12T17:15:13.431018184Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Sep 12 17:15:13.437459 containerd[1496]: time="2025-09-12T17:15:13.436628063Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 12 17:15:13.444776 containerd[1496]: 
time="2025-09-12T17:15:13.444443462Z" level=info msg="CreateContainer within sandbox \"faaef948cc03b13ea66467decdcbc8989ffe1433124cc12acdb62a6cdfb7312e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 12 17:15:13.470030 containerd[1496]: time="2025-09-12T17:15:13.469920834Z" level=info msg="CreateContainer within sandbox \"faaef948cc03b13ea66467decdcbc8989ffe1433124cc12acdb62a6cdfb7312e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"ba4cae6079b9167c3ad185f817f6e57aa03551086ae1c465c75116103cf7127f\"" Sep 12 17:15:13.473064 containerd[1496]: time="2025-09-12T17:15:13.472453271Z" level=info msg="StartContainer for \"ba4cae6079b9167c3ad185f817f6e57aa03551086ae1c465c75116103cf7127f\"" Sep 12 17:15:13.516619 systemd[1]: Started cri-containerd-ba4cae6079b9167c3ad185f817f6e57aa03551086ae1c465c75116103cf7127f.scope - libcontainer container ba4cae6079b9167c3ad185f817f6e57aa03551086ae1c465c75116103cf7127f. Sep 12 17:15:13.557041 containerd[1496]: time="2025-09-12T17:15:13.556912195Z" level=info msg="StartContainer for \"ba4cae6079b9167c3ad185f817f6e57aa03551086ae1c465c75116103cf7127f\" returns successfully" Sep 12 17:15:13.589408 systemd[1]: cri-containerd-ba4cae6079b9167c3ad185f817f6e57aa03551086ae1c465c75116103cf7127f.scope: Deactivated successfully. Sep 12 17:15:13.790619 containerd[1496]: time="2025-09-12T17:15:13.788821115Z" level=info msg="shim disconnected" id=ba4cae6079b9167c3ad185f817f6e57aa03551086ae1c465c75116103cf7127f namespace=k8s.io Sep 12 17:15:13.790619 containerd[1496]: time="2025-09-12T17:15:13.788924417Z" level=warning msg="cleaning up after shim disconnected" id=ba4cae6079b9167c3ad185f817f6e57aa03551086ae1c465c75116103cf7127f namespace=k8s.io Sep 12 17:15:13.790619 containerd[1496]: time="2025-09-12T17:15:13.788933742Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:15:14.465871 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ba4cae6079b9167c3ad185f817f6e57aa03551086ae1c465c75116103cf7127f-rootfs.mount: Deactivated successfully. Sep 12 17:15:14.468997 containerd[1496]: time="2025-09-12T17:15:14.467480565Z" level=info msg="CreateContainer within sandbox \"faaef948cc03b13ea66467decdcbc8989ffe1433124cc12acdb62a6cdfb7312e\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 12 17:15:14.501732 containerd[1496]: time="2025-09-12T17:15:14.501279126Z" level=info msg="CreateContainer within sandbox \"faaef948cc03b13ea66467decdcbc8989ffe1433124cc12acdb62a6cdfb7312e\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"c5bc51e8b1f2b94d707ddcd35b9f43d2b7eed8b605c4deb5528908dff126293c\"" Sep 12 17:15:14.505516 containerd[1496]: time="2025-09-12T17:15:14.505453738Z" level=info msg="StartContainer for \"c5bc51e8b1f2b94d707ddcd35b9f43d2b7eed8b605c4deb5528908dff126293c\"" Sep 12 17:15:14.550499 systemd[1]: Started cri-containerd-c5bc51e8b1f2b94d707ddcd35b9f43d2b7eed8b605c4deb5528908dff126293c.scope - libcontainer container c5bc51e8b1f2b94d707ddcd35b9f43d2b7eed8b605c4deb5528908dff126293c. Sep 12 17:15:14.591547 containerd[1496]: time="2025-09-12T17:15:14.591467444Z" level=info msg="StartContainer for \"c5bc51e8b1f2b94d707ddcd35b9f43d2b7eed8b605c4deb5528908dff126293c\" returns successfully" Sep 12 17:15:14.611033 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 12 17:15:14.612105 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. 
Sep 12 17:15:14.612539 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Sep 12 17:15:14.618886 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 12 17:15:14.619170 systemd[1]: cri-containerd-c5bc51e8b1f2b94d707ddcd35b9f43d2b7eed8b605c4deb5528908dff126293c.scope: Deactivated successfully. Sep 12 17:15:14.657083 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 12 17:15:14.661044 containerd[1496]: time="2025-09-12T17:15:14.660941313Z" level=info msg="shim disconnected" id=c5bc51e8b1f2b94d707ddcd35b9f43d2b7eed8b605c4deb5528908dff126293c namespace=k8s.io Sep 12 17:15:14.661044 containerd[1496]: time="2025-09-12T17:15:14.661022999Z" level=warning msg="cleaning up after shim disconnected" id=c5bc51e8b1f2b94d707ddcd35b9f43d2b7eed8b605c4deb5528908dff126293c namespace=k8s.io Sep 12 17:15:14.661044 containerd[1496]: time="2025-09-12T17:15:14.661032925Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:15:15.462717 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c5bc51e8b1f2b94d707ddcd35b9f43d2b7eed8b605c4deb5528908dff126293c-rootfs.mount: Deactivated successfully. Sep 12 17:15:15.490426 containerd[1496]: time="2025-09-12T17:15:15.490337209Z" level=info msg="CreateContainer within sandbox \"faaef948cc03b13ea66467decdcbc8989ffe1433124cc12acdb62a6cdfb7312e\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 12 17:15:15.542316 containerd[1496]: time="2025-09-12T17:15:15.541982590Z" level=info msg="CreateContainer within sandbox \"faaef948cc03b13ea66467decdcbc8989ffe1433124cc12acdb62a6cdfb7312e\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"66000d8fecd9877d4ec6e25d3acd7d2df35d0297424a9d7237dd79255e6cc4f5\"" Sep 12 17:15:15.543997 containerd[1496]: time="2025-09-12T17:15:15.543802492Z" level=info msg="StartContainer for \"66000d8fecd9877d4ec6e25d3acd7d2df35d0297424a9d7237dd79255e6cc4f5\"" Sep 12 17:15:15.573424 containerd[1496]: time="2025-09-12T17:15:15.573343788Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:15:15.576425 containerd[1496]: time="2025-09-12T17:15:15.576316031Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Sep 12 17:15:15.577912 containerd[1496]: time="2025-09-12T17:15:15.577835131Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:15:15.581661 containerd[1496]: time="2025-09-12T17:15:15.581477856Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 2.144600615s" Sep 12 17:15:15.583019 containerd[1496]: time="2025-09-12T17:15:15.581656993Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference 
\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Sep 12 17:15:15.603172 containerd[1496]: time="2025-09-12T17:15:15.602645355Z" level=info msg="CreateContainer within sandbox \"d7431605835e3ddbe30319e38c959ddb6783e277c6916cdbda2902d46321cd2b\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 12 17:15:15.609883 systemd[1]: Started cri-containerd-66000d8fecd9877d4ec6e25d3acd7d2df35d0297424a9d7237dd79255e6cc4f5.scope - libcontainer container 66000d8fecd9877d4ec6e25d3acd7d2df35d0297424a9d7237dd79255e6cc4f5. Sep 12 17:15:15.632838 containerd[1496]: time="2025-09-12T17:15:15.632307476Z" level=info msg="CreateContainer within sandbox \"d7431605835e3ddbe30319e38c959ddb6783e277c6916cdbda2902d46321cd2b\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"a444c7f034b0f1a15ccf1dae6ec5702c388917d49ab07a8d7b5d80156d235c37\"" Sep 12 17:15:15.634117 containerd[1496]: time="2025-09-12T17:15:15.633945800Z" level=info msg="StartContainer for \"a444c7f034b0f1a15ccf1dae6ec5702c388917d49ab07a8d7b5d80156d235c37\"" Sep 12 17:15:15.684114 systemd[1]: cri-containerd-66000d8fecd9877d4ec6e25d3acd7d2df35d0297424a9d7237dd79255e6cc4f5.scope: Deactivated successfully. Sep 12 17:15:15.687080 containerd[1496]: time="2025-09-12T17:15:15.686896205Z" level=info msg="StartContainer for \"66000d8fecd9877d4ec6e25d3acd7d2df35d0297424a9d7237dd79255e6cc4f5\" returns successfully" Sep 12 17:15:15.694837 systemd[1]: Started cri-containerd-a444c7f034b0f1a15ccf1dae6ec5702c388917d49ab07a8d7b5d80156d235c37.scope - libcontainer container a444c7f034b0f1a15ccf1dae6ec5702c388917d49ab07a8d7b5d80156d235c37. Sep 12 17:15:15.760271 containerd[1496]: time="2025-09-12T17:15:15.759365939Z" level=info msg="shim disconnected" id=66000d8fecd9877d4ec6e25d3acd7d2df35d0297424a9d7237dd79255e6cc4f5 namespace=k8s.io Sep 12 17:15:15.760271 containerd[1496]: time="2025-09-12T17:15:15.759491007Z" level=warning msg="cleaning up after shim disconnected" id=66000d8fecd9877d4ec6e25d3acd7d2df35d0297424a9d7237dd79255e6cc4f5 namespace=k8s.io Sep 12 17:15:15.760271 containerd[1496]: time="2025-09-12T17:15:15.759501372Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:15:15.771839 containerd[1496]: time="2025-09-12T17:15:15.770805511Z" level=info msg="StartContainer for \"a444c7f034b0f1a15ccf1dae6ec5702c388917d49ab07a8d7b5d80156d235c37\" returns successfully" Sep 12 17:15:16.464311 systemd[1]: run-containerd-runc-k8s.io-66000d8fecd9877d4ec6e25d3acd7d2df35d0297424a9d7237dd79255e6cc4f5-runc.c42t3v.mount: Deactivated successfully. Sep 12 17:15:16.464743 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-66000d8fecd9877d4ec6e25d3acd7d2df35d0297424a9d7237dd79255e6cc4f5-rootfs.mount: Deactivated successfully. Sep 12 17:15:16.488635 containerd[1496]: time="2025-09-12T17:15:16.488565618Z" level=info msg="CreateContainer within sandbox \"faaef948cc03b13ea66467decdcbc8989ffe1433124cc12acdb62a6cdfb7312e\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 12 17:15:16.535294 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount177040240.mount: Deactivated successfully. 
Sep 12 17:15:16.542013 containerd[1496]: time="2025-09-12T17:15:16.541365883Z" level=info msg="CreateContainer within sandbox \"faaef948cc03b13ea66467decdcbc8989ffe1433124cc12acdb62a6cdfb7312e\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"7a31710100fc667ef58a2366276c95ed81740c38afdd0a10d5e72b70d87fd924\"" Sep 12 17:15:16.544600 containerd[1496]: time="2025-09-12T17:15:16.544530065Z" level=info msg="StartContainer for \"7a31710100fc667ef58a2366276c95ed81740c38afdd0a10d5e72b70d87fd924\"" Sep 12 17:15:16.624596 systemd[1]: Started cri-containerd-7a31710100fc667ef58a2366276c95ed81740c38afdd0a10d5e72b70d87fd924.scope - libcontainer container 7a31710100fc667ef58a2366276c95ed81740c38afdd0a10d5e72b70d87fd924. Sep 12 17:15:16.709133 containerd[1496]: time="2025-09-12T17:15:16.708930415Z" level=info msg="StartContainer for \"7a31710100fc667ef58a2366276c95ed81740c38afdd0a10d5e72b70d87fd924\" returns successfully" Sep 12 17:15:16.714485 systemd[1]: cri-containerd-7a31710100fc667ef58a2366276c95ed81740c38afdd0a10d5e72b70d87fd924.scope: Deactivated successfully. Sep 12 17:15:16.776123 containerd[1496]: time="2025-09-12T17:15:16.775739581Z" level=info msg="shim disconnected" id=7a31710100fc667ef58a2366276c95ed81740c38afdd0a10d5e72b70d87fd924 namespace=k8s.io Sep 12 17:15:16.776123 containerd[1496]: time="2025-09-12T17:15:16.775827306Z" level=warning msg="cleaning up after shim disconnected" id=7a31710100fc667ef58a2366276c95ed81740c38afdd0a10d5e72b70d87fd924 namespace=k8s.io Sep 12 17:15:16.776123 containerd[1496]: time="2025-09-12T17:15:16.775836911Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:15:17.462996 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7a31710100fc667ef58a2366276c95ed81740c38afdd0a10d5e72b70d87fd924-rootfs.mount: Deactivated successfully. Sep 12 17:15:17.508867 containerd[1496]: time="2025-09-12T17:15:17.508739446Z" level=info msg="CreateContainer within sandbox \"faaef948cc03b13ea66467decdcbc8989ffe1433124cc12acdb62a6cdfb7312e\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 12 17:15:17.546410 kubelet[2668]: I0912 17:15:17.544311 2668 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-m7tx2" podStartSLOduration=3.12118246 podStartE2EDuration="10.544275606s" podCreationTimestamp="2025-09-12 17:15:07 +0000 UTC" firstStartedPulling="2025-09-12 17:15:08.162285214 +0000 UTC m=+5.053323814" lastFinishedPulling="2025-09-12 17:15:15.58537832 +0000 UTC m=+12.476416960" observedRunningTime="2025-09-12 17:15:16.675003425 +0000 UTC m=+13.566042065" watchObservedRunningTime="2025-09-12 17:15:17.544275606 +0000 UTC m=+14.435314286" Sep 12 17:15:17.557962 containerd[1496]: time="2025-09-12T17:15:17.557498131Z" level=info msg="CreateContainer within sandbox \"faaef948cc03b13ea66467decdcbc8989ffe1433124cc12acdb62a6cdfb7312e\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"8eac21fa95e339904c2665a02e1cc46f0640e80bf8304f1af0caa38e98962a9c\"" Sep 12 17:15:17.562414 containerd[1496]: time="2025-09-12T17:15:17.561466865Z" level=info msg="StartContainer for \"8eac21fa95e339904c2665a02e1cc46f0640e80bf8304f1af0caa38e98962a9c\"" Sep 12 17:15:17.613745 systemd[1]: Started cri-containerd-8eac21fa95e339904c2665a02e1cc46f0640e80bf8304f1af0caa38e98962a9c.scope - libcontainer container 8eac21fa95e339904c2665a02e1cc46f0640e80bf8304f1af0caa38e98962a9c. 
Sep 12 17:15:17.662403 containerd[1496]: time="2025-09-12T17:15:17.662007989Z" level=info msg="StartContainer for \"8eac21fa95e339904c2665a02e1cc46f0640e80bf8304f1af0caa38e98962a9c\" returns successfully" Sep 12 17:15:17.843014 kubelet[2668]: I0912 17:15:17.841613 2668 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 12 17:15:17.905828 systemd[1]: Created slice kubepods-burstable-pod452b5b14_8820_4668_9ed9_1183d6ef620c.slice - libcontainer container kubepods-burstable-pod452b5b14_8820_4668_9ed9_1183d6ef620c.slice. Sep 12 17:15:17.922055 systemd[1]: Created slice kubepods-burstable-pod7820e066_f293_439c_bccf_d565152625b8.slice - libcontainer container kubepods-burstable-pod7820e066_f293_439c_bccf_d565152625b8.slice. Sep 12 17:15:17.946537 kubelet[2668]: I0912 17:15:17.946476 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7820e066-f293-439c-bccf-d565152625b8-config-volume\") pod \"coredns-674b8bbfcf-thtsj\" (UID: \"7820e066-f293-439c-bccf-d565152625b8\") " pod="kube-system/coredns-674b8bbfcf-thtsj" Sep 12 17:15:17.946537 kubelet[2668]: I0912 17:15:17.946538 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2m2t\" (UniqueName: \"kubernetes.io/projected/7820e066-f293-439c-bccf-d565152625b8-kube-api-access-s2m2t\") pod \"coredns-674b8bbfcf-thtsj\" (UID: \"7820e066-f293-439c-bccf-d565152625b8\") " pod="kube-system/coredns-674b8bbfcf-thtsj" Sep 12 17:15:17.946768 kubelet[2668]: I0912 17:15:17.946565 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/452b5b14-8820-4668-9ed9-1183d6ef620c-config-volume\") pod \"coredns-674b8bbfcf-7wsgr\" (UID: \"452b5b14-8820-4668-9ed9-1183d6ef620c\") " pod="kube-system/coredns-674b8bbfcf-7wsgr" Sep 12 17:15:17.946768 kubelet[2668]: I0912 17:15:17.946582 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q645f\" (UniqueName: \"kubernetes.io/projected/452b5b14-8820-4668-9ed9-1183d6ef620c-kube-api-access-q645f\") pod \"coredns-674b8bbfcf-7wsgr\" (UID: \"452b5b14-8820-4668-9ed9-1183d6ef620c\") " pod="kube-system/coredns-674b8bbfcf-7wsgr" Sep 12 17:15:18.216743 containerd[1496]: time="2025-09-12T17:15:18.216673885Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-7wsgr,Uid:452b5b14-8820-4668-9ed9-1183d6ef620c,Namespace:kube-system,Attempt:0,}" Sep 12 17:15:18.230261 containerd[1496]: time="2025-09-12T17:15:18.228851412Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-thtsj,Uid:7820e066-f293-439c-bccf-d565152625b8,Namespace:kube-system,Attempt:0,}" Sep 12 17:15:18.767114 sshd[2709]: Connection closed by 199.45.155.100 port 60162 [preauth] Sep 12 17:15:18.770453 systemd[1]: sshd@7-168.119.179.98:22-199.45.155.100:60162.service: Deactivated successfully. 
Sep 12 17:15:19.755128 systemd-networkd[1395]: cilium_host: Link UP Sep 12 17:15:19.758149 systemd-networkd[1395]: cilium_net: Link UP Sep 12 17:15:19.759280 systemd-networkd[1395]: cilium_net: Gained carrier Sep 12 17:15:19.759523 systemd-networkd[1395]: cilium_host: Gained carrier Sep 12 17:15:19.793516 systemd-networkd[1395]: cilium_net: Gained IPv6LL Sep 12 17:15:19.905580 systemd-networkd[1395]: cilium_vxlan: Link UP Sep 12 17:15:19.905591 systemd-networkd[1395]: cilium_vxlan: Gained carrier Sep 12 17:15:20.131390 systemd-networkd[1395]: cilium_host: Gained IPv6LL Sep 12 17:15:20.251263 kernel: NET: Registered PF_ALG protocol family Sep 12 17:15:21.139044 systemd-networkd[1395]: lxc_health: Link UP Sep 12 17:15:21.140091 systemd-networkd[1395]: lxc_health: Gained carrier Sep 12 17:15:21.343621 systemd-networkd[1395]: lxcf1c157becb42: Link UP Sep 12 17:15:21.353435 kernel: eth0: renamed from tmp53a23 Sep 12 17:15:21.359869 systemd-networkd[1395]: lxcf1c157becb42: Gained carrier Sep 12 17:15:21.649254 kubelet[2668]: I0912 17:15:21.648573 2668 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-btwgg" podStartSLOduration=9.041651297 podStartE2EDuration="14.648547812s" podCreationTimestamp="2025-09-12 17:15:07 +0000 UTC" firstStartedPulling="2025-09-12 17:15:07.827252864 +0000 UTC m=+4.718291504" lastFinishedPulling="2025-09-12 17:15:13.434149379 +0000 UTC m=+10.325188019" observedRunningTime="2025-09-12 17:15:18.536421178 +0000 UTC m=+15.427459818" watchObservedRunningTime="2025-09-12 17:15:21.648547812 +0000 UTC m=+18.539586452" Sep 12 17:15:21.800078 systemd-networkd[1395]: lxcb18add60de2b: Link UP Sep 12 17:15:21.804347 kernel: eth0: renamed from tmpdc6c8 Sep 12 17:15:21.816107 systemd-networkd[1395]: lxcb18add60de2b: Gained carrier Sep 12 17:15:21.820696 systemd-networkd[1395]: cilium_vxlan: Gained IPv6LL Sep 12 17:15:22.584763 systemd-networkd[1395]: lxc_health: Gained IPv6LL Sep 12 17:15:23.163561 systemd-networkd[1395]: lxcb18add60de2b: Gained IPv6LL Sep 12 17:15:23.224389 systemd-networkd[1395]: lxcf1c157becb42: Gained IPv6LL Sep 12 17:15:26.181230 containerd[1496]: time="2025-09-12T17:15:26.179872554Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:15:26.181230 containerd[1496]: time="2025-09-12T17:15:26.179964023Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:15:26.181230 containerd[1496]: time="2025-09-12T17:15:26.179981229Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:15:26.181230 containerd[1496]: time="2025-09-12T17:15:26.180105028Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:15:26.214258 containerd[1496]: time="2025-09-12T17:15:26.214031995Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:15:26.214258 containerd[1496]: time="2025-09-12T17:15:26.214118823Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:15:26.214258 containerd[1496]: time="2025-09-12T17:15:26.214131507Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:15:26.214258 containerd[1496]: time="2025-09-12T17:15:26.214252506Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:15:26.240505 systemd[1]: Started cri-containerd-dc6c8fc175d9be7552a9e390fb93fcbca9e09be5df027719e64b8769f26048be.scope - libcontainer container dc6c8fc175d9be7552a9e390fb93fcbca9e09be5df027719e64b8769f26048be. Sep 12 17:15:26.278598 systemd[1]: Started cri-containerd-53a23f1247b408b7ab88eaf60a3a86ce9c5c9ad79e1b140d1fe90a5b91544f61.scope - libcontainer container 53a23f1247b408b7ab88eaf60a3a86ce9c5c9ad79e1b140d1fe90a5b91544f61. Sep 12 17:15:26.332579 containerd[1496]: time="2025-09-12T17:15:26.332518937Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-7wsgr,Uid:452b5b14-8820-4668-9ed9-1183d6ef620c,Namespace:kube-system,Attempt:0,} returns sandbox id \"dc6c8fc175d9be7552a9e390fb93fcbca9e09be5df027719e64b8769f26048be\"" Sep 12 17:15:26.365473 containerd[1496]: time="2025-09-12T17:15:26.365402730Z" level=info msg="CreateContainer within sandbox \"dc6c8fc175d9be7552a9e390fb93fcbca9e09be5df027719e64b8769f26048be\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 17:15:26.377783 containerd[1496]: time="2025-09-12T17:15:26.377505573Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-thtsj,Uid:7820e066-f293-439c-bccf-d565152625b8,Namespace:kube-system,Attempt:0,} returns sandbox id \"53a23f1247b408b7ab88eaf60a3a86ce9c5c9ad79e1b140d1fe90a5b91544f61\"" Sep 12 17:15:26.392794 containerd[1496]: time="2025-09-12T17:15:26.392309884Z" level=info msg="CreateContainer within sandbox \"53a23f1247b408b7ab88eaf60a3a86ce9c5c9ad79e1b140d1fe90a5b91544f61\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 17:15:26.414478 containerd[1496]: time="2025-09-12T17:15:26.413435823Z" level=info msg="CreateContainer within sandbox \"dc6c8fc175d9be7552a9e390fb93fcbca9e09be5df027719e64b8769f26048be\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8ec52d68ff8daac4a3041bf3dff8949747f37b095a0c019786a5bc555fbbf884\"" Sep 12 17:15:26.415483 containerd[1496]: time="2025-09-12T17:15:26.415332152Z" level=info msg="StartContainer for \"8ec52d68ff8daac4a3041bf3dff8949747f37b095a0c019786a5bc555fbbf884\"" Sep 12 17:15:26.439455 containerd[1496]: time="2025-09-12T17:15:26.438535357Z" level=info msg="CreateContainer within sandbox \"53a23f1247b408b7ab88eaf60a3a86ce9c5c9ad79e1b140d1fe90a5b91544f61\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3751dc0fc1e6826f7738aa6d9b8201568bae159c382797cb5ffdd5ccb905361b\"" Sep 12 17:15:26.444389 containerd[1496]: time="2025-09-12T17:15:26.444325415Z" level=info msg="StartContainer for \"3751dc0fc1e6826f7738aa6d9b8201568bae159c382797cb5ffdd5ccb905361b\"" Sep 12 17:15:26.483599 systemd[1]: Started cri-containerd-8ec52d68ff8daac4a3041bf3dff8949747f37b095a0c019786a5bc555fbbf884.scope - libcontainer container 8ec52d68ff8daac4a3041bf3dff8949747f37b095a0c019786a5bc555fbbf884. Sep 12 17:15:26.494731 systemd[1]: Started cri-containerd-3751dc0fc1e6826f7738aa6d9b8201568bae159c382797cb5ffdd5ccb905361b.scope - libcontainer container 3751dc0fc1e6826f7738aa6d9b8201568bae159c382797cb5ffdd5ccb905361b. 
Sep 12 17:15:26.548814 containerd[1496]: time="2025-09-12T17:15:26.547716633Z" level=info msg="StartContainer for \"8ec52d68ff8daac4a3041bf3dff8949747f37b095a0c019786a5bc555fbbf884\" returns successfully" Sep 12 17:15:26.559917 containerd[1496]: time="2025-09-12T17:15:26.559763699Z" level=info msg="StartContainer for \"3751dc0fc1e6826f7738aa6d9b8201568bae159c382797cb5ffdd5ccb905361b\" returns successfully" Sep 12 17:15:27.616833 kubelet[2668]: I0912 17:15:27.616627 2668 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-thtsj" podStartSLOduration=20.616563501999998 podStartE2EDuration="20.616563502s" podCreationTimestamp="2025-09-12 17:15:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:15:27.575920557 +0000 UTC m=+24.466959157" watchObservedRunningTime="2025-09-12 17:15:27.616563502 +0000 UTC m=+24.507602142" Sep 12 17:15:27.645169 kubelet[2668]: I0912 17:15:27.645027 2668 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-7wsgr" podStartSLOduration=20.644991328 podStartE2EDuration="20.644991328s" podCreationTimestamp="2025-09-12 17:15:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:15:27.640039125 +0000 UTC m=+24.531077765" watchObservedRunningTime="2025-09-12 17:15:27.644991328 +0000 UTC m=+24.536029968" Sep 12 17:16:46.587646 systemd[1]: Started sshd@8-168.119.179.98:22-139.178.68.195:54144.service - OpenSSH per-connection server daemon (139.178.68.195:54144). Sep 12 17:16:47.577865 sshd[4081]: Accepted publickey for core from 139.178.68.195 port 54144 ssh2: RSA SHA256:4I4ir6DTNicv1nR1BCNJAvLWYZ+QnMBQBVoKHg57aHc Sep 12 17:16:47.580789 sshd-session[4081]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:16:47.587662 systemd-logind[1477]: New session 8 of user core. Sep 12 17:16:47.595432 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 12 17:16:48.362510 sshd[4083]: Connection closed by 139.178.68.195 port 54144 Sep 12 17:16:48.363408 sshd-session[4081]: pam_unix(sshd:session): session closed for user core Sep 12 17:16:48.368515 systemd[1]: sshd@8-168.119.179.98:22-139.178.68.195:54144.service: Deactivated successfully. Sep 12 17:16:48.371710 systemd[1]: session-8.scope: Deactivated successfully. Sep 12 17:16:48.374258 systemd-logind[1477]: Session 8 logged out. Waiting for processes to exit. Sep 12 17:16:48.375843 systemd-logind[1477]: Removed session 8. Sep 12 17:16:53.548465 systemd[1]: Started sshd@9-168.119.179.98:22-139.178.68.195:35618.service - OpenSSH per-connection server daemon (139.178.68.195:35618). Sep 12 17:16:54.550981 sshd[4096]: Accepted publickey for core from 139.178.68.195 port 35618 ssh2: RSA SHA256:4I4ir6DTNicv1nR1BCNJAvLWYZ+QnMBQBVoKHg57aHc Sep 12 17:16:54.553386 sshd-session[4096]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:16:54.558888 systemd-logind[1477]: New session 9 of user core. Sep 12 17:16:54.565244 systemd[1]: Started session-9.scope - Session 9 of User core. 
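[Editor's note] The `pod_startup_latency_tracker` entries report two durations: `podStartE2EDuration` is `watchObservedRunningTime` minus `podCreationTimestamp`, and `podStartSLOduration` additionally subtracts the image-pull window (`lastFinishedPulling` minus `firstStartedPulling`). For the two CoreDNS pods just above the pull timestamps are zero values, so both durations are identical; for cilium-btwgg earlier the numbers work out as below (timestamps copied from the log, not re-measured):

```python
# Reproduce the cilium-btwgg numbers from the earlier pod_startup_latency_tracker entry.
# Values are seconds within 17:15, copied straight from the log.
created       = 7.000000000    # podCreationTimestamp      17:15:07
pull_started  = 7.827252864    # firstStartedPulling       17:15:07.827252864
pull_finished = 13.434149379   # lastFinishedPulling       17:15:13.434149379
observed      = 21.648547812   # watchObservedRunningTime  17:15:21.648547812

e2e = observed - created                      # podStartE2EDuration = 14.648547812s
slo = e2e - (pull_finished - pull_started)    # podStartSLOduration = 9.041651297s
print(f"E2E={e2e:.9f}s  SLO={slo:.9f}s  (SLO excludes the image-pull window)")
```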
Sep 12 17:16:55.327160 sshd[4098]: Connection closed by 139.178.68.195 port 35618 Sep 12 17:16:55.328057 sshd-session[4096]: pam_unix(sshd:session): session closed for user core Sep 12 17:16:55.334651 systemd[1]: sshd@9-168.119.179.98:22-139.178.68.195:35618.service: Deactivated successfully. Sep 12 17:16:55.337945 systemd[1]: session-9.scope: Deactivated successfully. Sep 12 17:16:55.340356 systemd-logind[1477]: Session 9 logged out. Waiting for processes to exit. Sep 12 17:16:55.341676 systemd-logind[1477]: Removed session 9. Sep 12 17:17:00.516718 systemd[1]: Started sshd@10-168.119.179.98:22-139.178.68.195:49458.service - OpenSSH per-connection server daemon (139.178.68.195:49458). Sep 12 17:17:01.570055 sshd[4110]: Accepted publickey for core from 139.178.68.195 port 49458 ssh2: RSA SHA256:4I4ir6DTNicv1nR1BCNJAvLWYZ+QnMBQBVoKHg57aHc Sep 12 17:17:01.572472 sshd-session[4110]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:17:01.579722 systemd-logind[1477]: New session 10 of user core. Sep 12 17:17:01.586515 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 12 17:17:02.372013 sshd[4112]: Connection closed by 139.178.68.195 port 49458 Sep 12 17:17:02.372805 sshd-session[4110]: pam_unix(sshd:session): session closed for user core Sep 12 17:17:02.378026 systemd[1]: sshd@10-168.119.179.98:22-139.178.68.195:49458.service: Deactivated successfully. Sep 12 17:17:02.381119 systemd[1]: session-10.scope: Deactivated successfully. Sep 12 17:17:02.382412 systemd-logind[1477]: Session 10 logged out. Waiting for processes to exit. Sep 12 17:17:02.384033 systemd-logind[1477]: Removed session 10. Sep 12 17:17:02.555335 systemd[1]: Started sshd@11-168.119.179.98:22-139.178.68.195:49468.service - OpenSSH per-connection server daemon (139.178.68.195:49468). Sep 12 17:17:03.546279 sshd[4124]: Accepted publickey for core from 139.178.68.195 port 49468 ssh2: RSA SHA256:4I4ir6DTNicv1nR1BCNJAvLWYZ+QnMBQBVoKHg57aHc Sep 12 17:17:03.548726 sshd-session[4124]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:17:03.553796 systemd-logind[1477]: New session 11 of user core. Sep 12 17:17:03.563556 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 12 17:17:04.347387 sshd[4128]: Connection closed by 139.178.68.195 port 49468 Sep 12 17:17:04.348006 sshd-session[4124]: pam_unix(sshd:session): session closed for user core Sep 12 17:17:04.353136 systemd[1]: sshd@11-168.119.179.98:22-139.178.68.195:49468.service: Deactivated successfully. Sep 12 17:17:04.355516 systemd[1]: session-11.scope: Deactivated successfully. Sep 12 17:17:04.357016 systemd-logind[1477]: Session 11 logged out. Waiting for processes to exit. Sep 12 17:17:04.358144 systemd-logind[1477]: Removed session 11. Sep 12 17:17:04.523730 systemd[1]: Started sshd@12-168.119.179.98:22-139.178.68.195:49470.service - OpenSSH per-connection server daemon (139.178.68.195:49470). Sep 12 17:17:05.509670 sshd[4138]: Accepted publickey for core from 139.178.68.195 port 49470 ssh2: RSA SHA256:4I4ir6DTNicv1nR1BCNJAvLWYZ+QnMBQBVoKHg57aHc Sep 12 17:17:05.511650 sshd-session[4138]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:17:05.519574 systemd-logind[1477]: New session 12 of user core. Sep 12 17:17:05.527433 systemd[1]: Started session-12.scope - Session 12 of User core. 
Sep 12 17:17:06.270237 sshd[4140]: Connection closed by 139.178.68.195 port 49470 Sep 12 17:17:06.271060 sshd-session[4138]: pam_unix(sshd:session): session closed for user core Sep 12 17:17:06.276074 systemd[1]: sshd@12-168.119.179.98:22-139.178.68.195:49470.service: Deactivated successfully. Sep 12 17:17:06.276186 systemd-logind[1477]: Session 12 logged out. Waiting for processes to exit. Sep 12 17:17:06.278400 systemd[1]: session-12.scope: Deactivated successfully. Sep 12 17:17:06.280242 systemd-logind[1477]: Removed session 12. Sep 12 17:17:11.451756 systemd[1]: Started sshd@13-168.119.179.98:22-139.178.68.195:35838.service - OpenSSH per-connection server daemon (139.178.68.195:35838). Sep 12 17:17:12.451851 sshd[4154]: Accepted publickey for core from 139.178.68.195 port 35838 ssh2: RSA SHA256:4I4ir6DTNicv1nR1BCNJAvLWYZ+QnMBQBVoKHg57aHc Sep 12 17:17:12.453915 sshd-session[4154]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:17:12.460318 systemd-logind[1477]: New session 13 of user core. Sep 12 17:17:12.468560 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 12 17:17:13.220268 sshd[4156]: Connection closed by 139.178.68.195 port 35838 Sep 12 17:17:13.221254 sshd-session[4154]: pam_unix(sshd:session): session closed for user core Sep 12 17:17:13.226810 systemd-logind[1477]: Session 13 logged out. Waiting for processes to exit. Sep 12 17:17:13.227485 systemd[1]: sshd@13-168.119.179.98:22-139.178.68.195:35838.service: Deactivated successfully. Sep 12 17:17:13.231401 systemd[1]: session-13.scope: Deactivated successfully. Sep 12 17:17:13.234961 systemd-logind[1477]: Removed session 13. Sep 12 17:17:13.392638 systemd[1]: Started sshd@14-168.119.179.98:22-139.178.68.195:35854.service - OpenSSH per-connection server daemon (139.178.68.195:35854). Sep 12 17:17:14.376252 sshd[4168]: Accepted publickey for core from 139.178.68.195 port 35854 ssh2: RSA SHA256:4I4ir6DTNicv1nR1BCNJAvLWYZ+QnMBQBVoKHg57aHc Sep 12 17:17:14.378226 sshd-session[4168]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:17:14.384141 systemd-logind[1477]: New session 14 of user core. Sep 12 17:17:14.389393 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 12 17:17:15.172695 sshd[4170]: Connection closed by 139.178.68.195 port 35854 Sep 12 17:17:15.173868 sshd-session[4168]: pam_unix(sshd:session): session closed for user core Sep 12 17:17:15.179910 systemd[1]: sshd@14-168.119.179.98:22-139.178.68.195:35854.service: Deactivated successfully. Sep 12 17:17:15.182297 systemd[1]: session-14.scope: Deactivated successfully. Sep 12 17:17:15.183308 systemd-logind[1477]: Session 14 logged out. Waiting for processes to exit. Sep 12 17:17:15.184825 systemd-logind[1477]: Removed session 14. Sep 12 17:17:15.351030 systemd[1]: Started sshd@15-168.119.179.98:22-139.178.68.195:35856.service - OpenSSH per-connection server daemon (139.178.68.195:35856). Sep 12 17:17:16.334523 sshd[4180]: Accepted publickey for core from 139.178.68.195 port 35856 ssh2: RSA SHA256:4I4ir6DTNicv1nR1BCNJAvLWYZ+QnMBQBVoKHg57aHc Sep 12 17:17:16.336143 sshd-session[4180]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:17:16.342435 systemd-logind[1477]: New session 15 of user core. Sep 12 17:17:16.348458 systemd[1]: Started session-15.scope - Session 15 of User core. 
Sep 12 17:17:17.689471 sshd[4182]: Connection closed by 139.178.68.195 port 35856 Sep 12 17:17:17.690086 sshd-session[4180]: pam_unix(sshd:session): session closed for user core Sep 12 17:17:17.696401 systemd[1]: sshd@15-168.119.179.98:22-139.178.68.195:35856.service: Deactivated successfully. Sep 12 17:17:17.699747 systemd[1]: session-15.scope: Deactivated successfully. Sep 12 17:17:17.700748 systemd-logind[1477]: Session 15 logged out. Waiting for processes to exit. Sep 12 17:17:17.702658 systemd-logind[1477]: Removed session 15. Sep 12 17:17:17.869927 systemd[1]: Started sshd@16-168.119.179.98:22-139.178.68.195:35872.service - OpenSSH per-connection server daemon (139.178.68.195:35872). Sep 12 17:17:18.853527 sshd[4199]: Accepted publickey for core from 139.178.68.195 port 35872 ssh2: RSA SHA256:4I4ir6DTNicv1nR1BCNJAvLWYZ+QnMBQBVoKHg57aHc Sep 12 17:17:18.855620 sshd-session[4199]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:17:18.862491 systemd-logind[1477]: New session 16 of user core. Sep 12 17:17:18.865523 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 12 17:17:19.726832 sshd[4201]: Connection closed by 139.178.68.195 port 35872 Sep 12 17:17:19.728474 sshd-session[4199]: pam_unix(sshd:session): session closed for user core Sep 12 17:17:19.737185 systemd[1]: sshd@16-168.119.179.98:22-139.178.68.195:35872.service: Deactivated successfully. Sep 12 17:17:19.741842 systemd[1]: session-16.scope: Deactivated successfully. Sep 12 17:17:19.744592 systemd-logind[1477]: Session 16 logged out. Waiting for processes to exit. Sep 12 17:17:19.747400 systemd-logind[1477]: Removed session 16. Sep 12 17:17:19.907580 systemd[1]: Started sshd@17-168.119.179.98:22-139.178.68.195:35882.service - OpenSSH per-connection server daemon (139.178.68.195:35882). Sep 12 17:17:20.898396 sshd[4211]: Accepted publickey for core from 139.178.68.195 port 35882 ssh2: RSA SHA256:4I4ir6DTNicv1nR1BCNJAvLWYZ+QnMBQBVoKHg57aHc Sep 12 17:17:20.900835 sshd-session[4211]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:17:20.905466 systemd-logind[1477]: New session 17 of user core. Sep 12 17:17:20.910410 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 12 17:17:21.655081 sshd[4213]: Connection closed by 139.178.68.195 port 35882 Sep 12 17:17:21.656043 sshd-session[4211]: pam_unix(sshd:session): session closed for user core Sep 12 17:17:21.661341 systemd-logind[1477]: Session 17 logged out. Waiting for processes to exit. Sep 12 17:17:21.662662 systemd[1]: sshd@17-168.119.179.98:22-139.178.68.195:35882.service: Deactivated successfully. Sep 12 17:17:21.665237 systemd[1]: session-17.scope: Deactivated successfully. Sep 12 17:17:21.666891 systemd-logind[1477]: Removed session 17. Sep 12 17:17:26.836503 systemd[1]: Started sshd@18-168.119.179.98:22-139.178.68.195:53256.service - OpenSSH per-connection server daemon (139.178.68.195:53256). Sep 12 17:17:27.837860 sshd[4227]: Accepted publickey for core from 139.178.68.195 port 53256 ssh2: RSA SHA256:4I4ir6DTNicv1nR1BCNJAvLWYZ+QnMBQBVoKHg57aHc Sep 12 17:17:27.840128 sshd-session[4227]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:17:27.846414 systemd-logind[1477]: New session 18 of user core. Sep 12 17:17:27.853680 systemd[1]: Started session-18.scope - Session 18 of User core. 
Sep 12 17:17:28.598542 sshd[4229]: Connection closed by 139.178.68.195 port 53256 Sep 12 17:17:28.598414 sshd-session[4227]: pam_unix(sshd:session): session closed for user core Sep 12 17:17:28.604787 systemd-logind[1477]: Session 18 logged out. Waiting for processes to exit. Sep 12 17:17:28.606016 systemd[1]: sshd@18-168.119.179.98:22-139.178.68.195:53256.service: Deactivated successfully. Sep 12 17:17:28.609787 systemd[1]: session-18.scope: Deactivated successfully. Sep 12 17:17:28.611731 systemd-logind[1477]: Removed session 18. Sep 12 17:17:33.780558 systemd[1]: Started sshd@19-168.119.179.98:22-139.178.68.195:46482.service - OpenSSH per-connection server daemon (139.178.68.195:46482). Sep 12 17:17:34.783494 sshd[4241]: Accepted publickey for core from 139.178.68.195 port 46482 ssh2: RSA SHA256:4I4ir6DTNicv1nR1BCNJAvLWYZ+QnMBQBVoKHg57aHc Sep 12 17:17:34.785896 sshd-session[4241]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:17:34.791480 systemd-logind[1477]: New session 19 of user core. Sep 12 17:17:34.794376 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 12 17:17:35.546618 sshd[4243]: Connection closed by 139.178.68.195 port 46482 Sep 12 17:17:35.547582 sshd-session[4241]: pam_unix(sshd:session): session closed for user core Sep 12 17:17:35.553239 systemd[1]: sshd@19-168.119.179.98:22-139.178.68.195:46482.service: Deactivated successfully. Sep 12 17:17:35.556183 systemd[1]: session-19.scope: Deactivated successfully. Sep 12 17:17:35.557579 systemd-logind[1477]: Session 19 logged out. Waiting for processes to exit. Sep 12 17:17:35.559836 systemd-logind[1477]: Removed session 19. Sep 12 17:17:35.724564 systemd[1]: Started sshd@20-168.119.179.98:22-139.178.68.195:46490.service - OpenSSH per-connection server daemon (139.178.68.195:46490). Sep 12 17:17:36.715159 sshd[4255]: Accepted publickey for core from 139.178.68.195 port 46490 ssh2: RSA SHA256:4I4ir6DTNicv1nR1BCNJAvLWYZ+QnMBQBVoKHg57aHc Sep 12 17:17:36.717274 sshd-session[4255]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:17:36.723012 systemd-logind[1477]: New session 20 of user core. Sep 12 17:17:36.729596 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 12 17:17:39.350729 containerd[1496]: time="2025-09-12T17:17:39.350682023Z" level=info msg="StopContainer for \"a444c7f034b0f1a15ccf1dae6ec5702c388917d49ab07a8d7b5d80156d235c37\" with timeout 30 (s)" Sep 12 17:17:39.354753 containerd[1496]: time="2025-09-12T17:17:39.352670482Z" level=info msg="Stop container \"a444c7f034b0f1a15ccf1dae6ec5702c388917d49ab07a8d7b5d80156d235c37\" with signal terminated" Sep 12 17:17:39.375405 systemd[1]: cri-containerd-a444c7f034b0f1a15ccf1dae6ec5702c388917d49ab07a8d7b5d80156d235c37.scope: Deactivated successfully. 
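[Editor's note] Further down, `StopContainer ... with timeout 30` / `with timeout 2` followed by `Stop container ... with signal terminated` is the CRI stop sequence: containerd delivers SIGTERM and only escalates to SIGKILL if the process is still running when the per-call timeout expires. A minimal sketch (not Cilium's or the operator's actual handler, just the general pattern) of what a containerized workload needs to do to exit inside that grace window:

```python
# Minimal SIGTERM-handling sketch: exit cleanly before the runtime's stop timeout
# (30s and 2s for the two containers stopped below) runs out and SIGKILL follows.
import signal
import sys
import time

def handle_term(signum, frame):
    # flush state / close connections here, then exit promptly
    print("SIGTERM received, shutting down", file=sys.stderr)
    sys.exit(0)

signal.signal(signal.SIGTERM, handle_term)

while True:
    time.sleep(1)   # stand-in for the real workload loop
```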
Sep 12 17:17:39.376627 containerd[1496]: time="2025-09-12T17:17:39.376540550Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 12 17:17:39.386467 containerd[1496]: time="2025-09-12T17:17:39.386426400Z" level=info msg="StopContainer for \"8eac21fa95e339904c2665a02e1cc46f0640e80bf8304f1af0caa38e98962a9c\" with timeout 2 (s)" Sep 12 17:17:39.386962 containerd[1496]: time="2025-09-12T17:17:39.386936676Z" level=info msg="Stop container \"8eac21fa95e339904c2665a02e1cc46f0640e80bf8304f1af0caa38e98962a9c\" with signal terminated" Sep 12 17:17:39.393889 systemd-networkd[1395]: lxc_health: Link DOWN Sep 12 17:17:39.393896 systemd-networkd[1395]: lxc_health: Lost carrier Sep 12 17:17:39.410980 systemd[1]: cri-containerd-8eac21fa95e339904c2665a02e1cc46f0640e80bf8304f1af0caa38e98962a9c.scope: Deactivated successfully. Sep 12 17:17:39.411700 systemd[1]: cri-containerd-8eac21fa95e339904c2665a02e1cc46f0640e80bf8304f1af0caa38e98962a9c.scope: Consumed 8.657s CPU time, 123.9M memory peak, 128K read from disk, 12.9M written to disk. Sep 12 17:17:39.421491 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a444c7f034b0f1a15ccf1dae6ec5702c388917d49ab07a8d7b5d80156d235c37-rootfs.mount: Deactivated successfully. Sep 12 17:17:39.437848 containerd[1496]: time="2025-09-12T17:17:39.437666701Z" level=info msg="shim disconnected" id=a444c7f034b0f1a15ccf1dae6ec5702c388917d49ab07a8d7b5d80156d235c37 namespace=k8s.io Sep 12 17:17:39.437848 containerd[1496]: time="2025-09-12T17:17:39.437734505Z" level=warning msg="cleaning up after shim disconnected" id=a444c7f034b0f1a15ccf1dae6ec5702c388917d49ab07a8d7b5d80156d235c37 namespace=k8s.io Sep 12 17:17:39.437848 containerd[1496]: time="2025-09-12T17:17:39.437742146Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:17:39.441773 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8eac21fa95e339904c2665a02e1cc46f0640e80bf8304f1af0caa38e98962a9c-rootfs.mount: Deactivated successfully. 
Sep 12 17:17:39.449380 containerd[1496]: time="2025-09-12T17:17:39.449302354Z" level=info msg="shim disconnected" id=8eac21fa95e339904c2665a02e1cc46f0640e80bf8304f1af0caa38e98962a9c namespace=k8s.io Sep 12 17:17:39.449380 containerd[1496]: time="2025-09-12T17:17:39.449368158Z" level=warning msg="cleaning up after shim disconnected" id=8eac21fa95e339904c2665a02e1cc46f0640e80bf8304f1af0caa38e98962a9c namespace=k8s.io Sep 12 17:17:39.449380 containerd[1496]: time="2025-09-12T17:17:39.449376039Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:17:39.455251 containerd[1496]: time="2025-09-12T17:17:39.454431872Z" level=warning msg="cleanup warnings time=\"2025-09-12T17:17:39Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Sep 12 17:17:39.461243 containerd[1496]: time="2025-09-12T17:17:39.461162102Z" level=info msg="StopContainer for \"a444c7f034b0f1a15ccf1dae6ec5702c388917d49ab07a8d7b5d80156d235c37\" returns successfully" Sep 12 17:17:39.462251 containerd[1496]: time="2025-09-12T17:17:39.462198495Z" level=info msg="StopPodSandbox for \"d7431605835e3ddbe30319e38c959ddb6783e277c6916cdbda2902d46321cd2b\"" Sep 12 17:17:39.462343 containerd[1496]: time="2025-09-12T17:17:39.462269340Z" level=info msg="Container to stop \"a444c7f034b0f1a15ccf1dae6ec5702c388917d49ab07a8d7b5d80156d235c37\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 17:17:39.465448 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d7431605835e3ddbe30319e38c959ddb6783e277c6916cdbda2902d46321cd2b-shm.mount: Deactivated successfully. Sep 12 17:17:39.474729 containerd[1496]: time="2025-09-12T17:17:39.474690208Z" level=info msg="StopContainer for \"8eac21fa95e339904c2665a02e1cc46f0640e80bf8304f1af0caa38e98962a9c\" returns successfully" Sep 12 17:17:39.475679 containerd[1496]: time="2025-09-12T17:17:39.475517866Z" level=info msg="StopPodSandbox for \"faaef948cc03b13ea66467decdcbc8989ffe1433124cc12acdb62a6cdfb7312e\"" Sep 12 17:17:39.475679 containerd[1496]: time="2025-09-12T17:17:39.475561749Z" level=info msg="Container to stop \"c5bc51e8b1f2b94d707ddcd35b9f43d2b7eed8b605c4deb5528908dff126293c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 17:17:39.475679 containerd[1496]: time="2025-09-12T17:17:39.475648395Z" level=info msg="Container to stop \"66000d8fecd9877d4ec6e25d3acd7d2df35d0297424a9d7237dd79255e6cc4f5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 17:17:39.475679 containerd[1496]: time="2025-09-12T17:17:39.475659715Z" level=info msg="Container to stop \"8eac21fa95e339904c2665a02e1cc46f0640e80bf8304f1af0caa38e98962a9c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 17:17:39.476001 containerd[1496]: time="2025-09-12T17:17:39.475844808Z" level=info msg="Container to stop \"ba4cae6079b9167c3ad185f817f6e57aa03551086ae1c465c75116103cf7127f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 17:17:39.476001 containerd[1496]: time="2025-09-12T17:17:39.475869010Z" level=info msg="Container to stop \"7a31710100fc667ef58a2366276c95ed81740c38afdd0a10d5e72b70d87fd924\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 17:17:39.480719 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-faaef948cc03b13ea66467decdcbc8989ffe1433124cc12acdb62a6cdfb7312e-shm.mount: Deactivated successfully. 
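[Editor's note] The error near the start of this teardown, `failed to reload cni configuration after receiving fs change event(REMOVE "/etc/cni/net.d/05-cilium.conf")`, is containerd noticing that Cilium's CNI config was deleted and finding nothing left in `/etc/cni/net.d`; it is also why kubelet reports `Container runtime network not ready ... cni plugin not initialized` a bit further down. A quick node-side check that mirrors what containerd is looking at (a sketch; assumes the default config directory):

```python
# Mirror containerd's view of the CNI config directory (default path assumed).
from pathlib import Path

confs = sorted(Path("/etc/cni/net.d").glob("*.conf*"))   # .conf, .conflist, ...
if confs:
    for c in confs:
        print(c)
else:
    print("no network config in /etc/cni/net.d -> 'cni plugin not initialized'")
```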
Sep 12 17:17:39.484715 systemd[1]: cri-containerd-d7431605835e3ddbe30319e38c959ddb6783e277c6916cdbda2902d46321cd2b.scope: Deactivated successfully. Sep 12 17:17:39.498079 systemd[1]: cri-containerd-faaef948cc03b13ea66467decdcbc8989ffe1433124cc12acdb62a6cdfb7312e.scope: Deactivated successfully. Sep 12 17:17:39.527913 containerd[1496]: time="2025-09-12T17:17:39.527840881Z" level=info msg="shim disconnected" id=faaef948cc03b13ea66467decdcbc8989ffe1433124cc12acdb62a6cdfb7312e namespace=k8s.io Sep 12 17:17:39.527913 containerd[1496]: time="2025-09-12T17:17:39.527904966Z" level=warning msg="cleaning up after shim disconnected" id=faaef948cc03b13ea66467decdcbc8989ffe1433124cc12acdb62a6cdfb7312e namespace=k8s.io Sep 12 17:17:39.527913 containerd[1496]: time="2025-09-12T17:17:39.527913407Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:17:39.529812 containerd[1496]: time="2025-09-12T17:17:39.529763656Z" level=info msg="shim disconnected" id=d7431605835e3ddbe30319e38c959ddb6783e277c6916cdbda2902d46321cd2b namespace=k8s.io Sep 12 17:17:39.529924 containerd[1496]: time="2025-09-12T17:17:39.529906346Z" level=warning msg="cleaning up after shim disconnected" id=d7431605835e3ddbe30319e38c959ddb6783e277c6916cdbda2902d46321cd2b namespace=k8s.io Sep 12 17:17:39.530076 containerd[1496]: time="2025-09-12T17:17:39.530059116Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:17:39.547326 containerd[1496]: time="2025-09-12T17:17:39.547273399Z" level=warning msg="cleanup warnings time=\"2025-09-12T17:17:39Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Sep 12 17:17:39.547539 containerd[1496]: time="2025-09-12T17:17:39.547499495Z" level=info msg="TearDown network for sandbox \"faaef948cc03b13ea66467decdcbc8989ffe1433124cc12acdb62a6cdfb7312e\" successfully" Sep 12 17:17:39.547539 containerd[1496]: time="2025-09-12T17:17:39.547526177Z" level=info msg="StopPodSandbox for \"faaef948cc03b13ea66467decdcbc8989ffe1433124cc12acdb62a6cdfb7312e\" returns successfully" Sep 12 17:17:39.548632 containerd[1496]: time="2025-09-12T17:17:39.548587851Z" level=info msg="TearDown network for sandbox \"d7431605835e3ddbe30319e38c959ddb6783e277c6916cdbda2902d46321cd2b\" successfully" Sep 12 17:17:39.548921 containerd[1496]: time="2025-09-12T17:17:39.548900473Z" level=info msg="StopPodSandbox for \"d7431605835e3ddbe30319e38c959ddb6783e277c6916cdbda2902d46321cd2b\" returns successfully" Sep 12 17:17:39.672268 kubelet[2668]: I0912 17:17:39.671676 2668 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/84c2cf6f-71dc-49a6-8f00-978ddfb08898-cni-path\") pod \"84c2cf6f-71dc-49a6-8f00-978ddfb08898\" (UID: \"84c2cf6f-71dc-49a6-8f00-978ddfb08898\") " Sep 12 17:17:39.672268 kubelet[2668]: I0912 17:17:39.671799 2668 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/84c2cf6f-71dc-49a6-8f00-978ddfb08898-lib-modules\") pod \"84c2cf6f-71dc-49a6-8f00-978ddfb08898\" (UID: \"84c2cf6f-71dc-49a6-8f00-978ddfb08898\") " Sep 12 17:17:39.672268 kubelet[2668]: I0912 17:17:39.671803 2668 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/84c2cf6f-71dc-49a6-8f00-978ddfb08898-cni-path" (OuterVolumeSpecName: "cni-path") pod "84c2cf6f-71dc-49a6-8f00-978ddfb08898" (UID: 
"84c2cf6f-71dc-49a6-8f00-978ddfb08898"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 17:17:39.672268 kubelet[2668]: I0912 17:17:39.671847 2668 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/84c2cf6f-71dc-49a6-8f00-978ddfb08898-host-proc-sys-net\") pod \"84c2cf6f-71dc-49a6-8f00-978ddfb08898\" (UID: \"84c2cf6f-71dc-49a6-8f00-978ddfb08898\") " Sep 12 17:17:39.672268 kubelet[2668]: I0912 17:17:39.671876 2668 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/84c2cf6f-71dc-49a6-8f00-978ddfb08898-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "84c2cf6f-71dc-49a6-8f00-978ddfb08898" (UID: "84c2cf6f-71dc-49a6-8f00-978ddfb08898"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 17:17:39.673289 kubelet[2668]: I0912 17:17:39.671900 2668 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/84c2cf6f-71dc-49a6-8f00-978ddfb08898-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "84c2cf6f-71dc-49a6-8f00-978ddfb08898" (UID: "84c2cf6f-71dc-49a6-8f00-978ddfb08898"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 17:17:39.673289 kubelet[2668]: I0912 17:17:39.671899 2668 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4491bb11-dbb1-464d-955a-f7ca7d7c4aab-cilium-config-path\") pod \"4491bb11-dbb1-464d-955a-f7ca7d7c4aab\" (UID: \"4491bb11-dbb1-464d-955a-f7ca7d7c4aab\") " Sep 12 17:17:39.673289 kubelet[2668]: I0912 17:17:39.671946 2668 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/84c2cf6f-71dc-49a6-8f00-978ddfb08898-cilium-run\") pod \"84c2cf6f-71dc-49a6-8f00-978ddfb08898\" (UID: \"84c2cf6f-71dc-49a6-8f00-978ddfb08898\") " Sep 12 17:17:39.673289 kubelet[2668]: I0912 17:17:39.671981 2668 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4vfs\" (UniqueName: \"kubernetes.io/projected/4491bb11-dbb1-464d-955a-f7ca7d7c4aab-kube-api-access-q4vfs\") pod \"4491bb11-dbb1-464d-955a-f7ca7d7c4aab\" (UID: \"4491bb11-dbb1-464d-955a-f7ca7d7c4aab\") " Sep 12 17:17:39.673289 kubelet[2668]: I0912 17:17:39.672004 2668 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/84c2cf6f-71dc-49a6-8f00-978ddfb08898-hostproc\") pod \"84c2cf6f-71dc-49a6-8f00-978ddfb08898\" (UID: \"84c2cf6f-71dc-49a6-8f00-978ddfb08898\") " Sep 12 17:17:39.673289 kubelet[2668]: I0912 17:17:39.672024 2668 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/84c2cf6f-71dc-49a6-8f00-978ddfb08898-xtables-lock\") pod \"84c2cf6f-71dc-49a6-8f00-978ddfb08898\" (UID: \"84c2cf6f-71dc-49a6-8f00-978ddfb08898\") " Sep 12 17:17:39.673651 kubelet[2668]: I0912 17:17:39.672082 2668 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/84c2cf6f-71dc-49a6-8f00-978ddfb08898-hubble-tls\") pod \"84c2cf6f-71dc-49a6-8f00-978ddfb08898\" (UID: \"84c2cf6f-71dc-49a6-8f00-978ddfb08898\") " Sep 12 17:17:39.673651 kubelet[2668]: I0912 17:17:39.672109 2668 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/84c2cf6f-71dc-49a6-8f00-978ddfb08898-host-proc-sys-kernel\") pod \"84c2cf6f-71dc-49a6-8f00-978ddfb08898\" (UID: \"84c2cf6f-71dc-49a6-8f00-978ddfb08898\") " Sep 12 17:17:39.673651 kubelet[2668]: I0912 17:17:39.672134 2668 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r8bt2\" (UniqueName: \"kubernetes.io/projected/84c2cf6f-71dc-49a6-8f00-978ddfb08898-kube-api-access-r8bt2\") pod \"84c2cf6f-71dc-49a6-8f00-978ddfb08898\" (UID: \"84c2cf6f-71dc-49a6-8f00-978ddfb08898\") " Sep 12 17:17:39.673651 kubelet[2668]: I0912 17:17:39.672158 2668 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/84c2cf6f-71dc-49a6-8f00-978ddfb08898-etc-cni-netd\") pod \"84c2cf6f-71dc-49a6-8f00-978ddfb08898\" (UID: \"84c2cf6f-71dc-49a6-8f00-978ddfb08898\") " Sep 12 17:17:39.673651 kubelet[2668]: I0912 17:17:39.672182 2668 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/84c2cf6f-71dc-49a6-8f00-978ddfb08898-clustermesh-secrets\") pod \"84c2cf6f-71dc-49a6-8f00-978ddfb08898\" (UID: \"84c2cf6f-71dc-49a6-8f00-978ddfb08898\") " Sep 12 17:17:39.673651 kubelet[2668]: I0912 17:17:39.672253 2668 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/84c2cf6f-71dc-49a6-8f00-978ddfb08898-cilium-config-path\") pod \"84c2cf6f-71dc-49a6-8f00-978ddfb08898\" (UID: \"84c2cf6f-71dc-49a6-8f00-978ddfb08898\") " Sep 12 17:17:39.673986 kubelet[2668]: I0912 17:17:39.672283 2668 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/84c2cf6f-71dc-49a6-8f00-978ddfb08898-bpf-maps\") pod \"84c2cf6f-71dc-49a6-8f00-978ddfb08898\" (UID: \"84c2cf6f-71dc-49a6-8f00-978ddfb08898\") " Sep 12 17:17:39.673986 kubelet[2668]: I0912 17:17:39.672306 2668 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/84c2cf6f-71dc-49a6-8f00-978ddfb08898-cilium-cgroup\") pod \"84c2cf6f-71dc-49a6-8f00-978ddfb08898\" (UID: \"84c2cf6f-71dc-49a6-8f00-978ddfb08898\") " Sep 12 17:17:39.673986 kubelet[2668]: I0912 17:17:39.672359 2668 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/84c2cf6f-71dc-49a6-8f00-978ddfb08898-cni-path\") on node \"ci-4230-2-3-6-9297726d8a\" DevicePath \"\"" Sep 12 17:17:39.673986 kubelet[2668]: I0912 17:17:39.672374 2668 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/84c2cf6f-71dc-49a6-8f00-978ddfb08898-lib-modules\") on node \"ci-4230-2-3-6-9297726d8a\" DevicePath \"\"" Sep 12 17:17:39.673986 kubelet[2668]: I0912 17:17:39.672389 2668 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/84c2cf6f-71dc-49a6-8f00-978ddfb08898-host-proc-sys-net\") on node \"ci-4230-2-3-6-9297726d8a\" DevicePath \"\"" Sep 12 17:17:39.673986 kubelet[2668]: I0912 17:17:39.672424 2668 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/84c2cf6f-71dc-49a6-8f00-978ddfb08898-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "84c2cf6f-71dc-49a6-8f00-978ddfb08898" (UID: 
"84c2cf6f-71dc-49a6-8f00-978ddfb08898"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 17:17:39.676701 kubelet[2668]: I0912 17:17:39.672450 2668 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/84c2cf6f-71dc-49a6-8f00-978ddfb08898-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "84c2cf6f-71dc-49a6-8f00-978ddfb08898" (UID: "84c2cf6f-71dc-49a6-8f00-978ddfb08898"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 17:17:39.676701 kubelet[2668]: I0912 17:17:39.674524 2668 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/84c2cf6f-71dc-49a6-8f00-978ddfb08898-hostproc" (OuterVolumeSpecName: "hostproc") pod "84c2cf6f-71dc-49a6-8f00-978ddfb08898" (UID: "84c2cf6f-71dc-49a6-8f00-978ddfb08898"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 17:17:39.676701 kubelet[2668]: I0912 17:17:39.674590 2668 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/84c2cf6f-71dc-49a6-8f00-978ddfb08898-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "84c2cf6f-71dc-49a6-8f00-978ddfb08898" (UID: "84c2cf6f-71dc-49a6-8f00-978ddfb08898"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 17:17:39.676701 kubelet[2668]: I0912 17:17:39.675311 2668 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/84c2cf6f-71dc-49a6-8f00-978ddfb08898-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "84c2cf6f-71dc-49a6-8f00-978ddfb08898" (UID: "84c2cf6f-71dc-49a6-8f00-978ddfb08898"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 17:17:39.677362 kubelet[2668]: I0912 17:17:39.677320 2668 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/84c2cf6f-71dc-49a6-8f00-978ddfb08898-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "84c2cf6f-71dc-49a6-8f00-978ddfb08898" (UID: "84c2cf6f-71dc-49a6-8f00-978ddfb08898"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 17:17:39.678432 kubelet[2668]: I0912 17:17:39.678395 2668 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/84c2cf6f-71dc-49a6-8f00-978ddfb08898-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "84c2cf6f-71dc-49a6-8f00-978ddfb08898" (UID: "84c2cf6f-71dc-49a6-8f00-978ddfb08898"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 17:17:39.682797 kubelet[2668]: I0912 17:17:39.682762 2668 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4491bb11-dbb1-464d-955a-f7ca7d7c4aab-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "4491bb11-dbb1-464d-955a-f7ca7d7c4aab" (UID: "4491bb11-dbb1-464d-955a-f7ca7d7c4aab"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 12 17:17:39.683005 kubelet[2668]: I0912 17:17:39.682984 2668 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4491bb11-dbb1-464d-955a-f7ca7d7c4aab-kube-api-access-q4vfs" (OuterVolumeSpecName: "kube-api-access-q4vfs") pod "4491bb11-dbb1-464d-955a-f7ca7d7c4aab" (UID: "4491bb11-dbb1-464d-955a-f7ca7d7c4aab"). 
InnerVolumeSpecName "kube-api-access-q4vfs". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 12 17:17:39.683615 kubelet[2668]: I0912 17:17:39.683580 2668 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84c2cf6f-71dc-49a6-8f00-978ddfb08898-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "84c2cf6f-71dc-49a6-8f00-978ddfb08898" (UID: "84c2cf6f-71dc-49a6-8f00-978ddfb08898"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 12 17:17:39.683931 kubelet[2668]: I0912 17:17:39.683899 2668 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84c2cf6f-71dc-49a6-8f00-978ddfb08898-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "84c2cf6f-71dc-49a6-8f00-978ddfb08898" (UID: "84c2cf6f-71dc-49a6-8f00-978ddfb08898"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 12 17:17:39.684119 kubelet[2668]: I0912 17:17:39.684095 2668 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84c2cf6f-71dc-49a6-8f00-978ddfb08898-kube-api-access-r8bt2" (OuterVolumeSpecName: "kube-api-access-r8bt2") pod "84c2cf6f-71dc-49a6-8f00-978ddfb08898" (UID: "84c2cf6f-71dc-49a6-8f00-978ddfb08898"). InnerVolumeSpecName "kube-api-access-r8bt2". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 12 17:17:39.684631 kubelet[2668]: I0912 17:17:39.684600 2668 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84c2cf6f-71dc-49a6-8f00-978ddfb08898-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "84c2cf6f-71dc-49a6-8f00-978ddfb08898" (UID: "84c2cf6f-71dc-49a6-8f00-978ddfb08898"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 12 17:17:39.772761 kubelet[2668]: I0912 17:17:39.772693 2668 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/84c2cf6f-71dc-49a6-8f00-978ddfb08898-bpf-maps\") on node \"ci-4230-2-3-6-9297726d8a\" DevicePath \"\"" Sep 12 17:17:39.773164 kubelet[2668]: I0912 17:17:39.772776 2668 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/84c2cf6f-71dc-49a6-8f00-978ddfb08898-cilium-cgroup\") on node \"ci-4230-2-3-6-9297726d8a\" DevicePath \"\"" Sep 12 17:17:39.773164 kubelet[2668]: I0912 17:17:39.772808 2668 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4491bb11-dbb1-464d-955a-f7ca7d7c4aab-cilium-config-path\") on node \"ci-4230-2-3-6-9297726d8a\" DevicePath \"\"" Sep 12 17:17:39.773164 kubelet[2668]: I0912 17:17:39.772907 2668 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/84c2cf6f-71dc-49a6-8f00-978ddfb08898-cilium-run\") on node \"ci-4230-2-3-6-9297726d8a\" DevicePath \"\"" Sep 12 17:17:39.773164 kubelet[2668]: I0912 17:17:39.772966 2668 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-q4vfs\" (UniqueName: \"kubernetes.io/projected/4491bb11-dbb1-464d-955a-f7ca7d7c4aab-kube-api-access-q4vfs\") on node \"ci-4230-2-3-6-9297726d8a\" DevicePath \"\"" Sep 12 17:17:39.773164 kubelet[2668]: I0912 17:17:39.772989 2668 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/84c2cf6f-71dc-49a6-8f00-978ddfb08898-hostproc\") on node \"ci-4230-2-3-6-9297726d8a\" DevicePath \"\"" Sep 12 17:17:39.773164 kubelet[2668]: I0912 17:17:39.773011 2668 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/84c2cf6f-71dc-49a6-8f00-978ddfb08898-xtables-lock\") on node \"ci-4230-2-3-6-9297726d8a\" DevicePath \"\"" Sep 12 17:17:39.773164 kubelet[2668]: I0912 17:17:39.773031 2668 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/84c2cf6f-71dc-49a6-8f00-978ddfb08898-hubble-tls\") on node \"ci-4230-2-3-6-9297726d8a\" DevicePath \"\"" Sep 12 17:17:39.773164 kubelet[2668]: I0912 17:17:39.773107 2668 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/84c2cf6f-71dc-49a6-8f00-978ddfb08898-host-proc-sys-kernel\") on node \"ci-4230-2-3-6-9297726d8a\" DevicePath \"\"" Sep 12 17:17:39.773756 kubelet[2668]: I0912 17:17:39.773134 2668 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-r8bt2\" (UniqueName: \"kubernetes.io/projected/84c2cf6f-71dc-49a6-8f00-978ddfb08898-kube-api-access-r8bt2\") on node \"ci-4230-2-3-6-9297726d8a\" DevicePath \"\"" Sep 12 17:17:39.773756 kubelet[2668]: I0912 17:17:39.773155 2668 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/84c2cf6f-71dc-49a6-8f00-978ddfb08898-etc-cni-netd\") on node \"ci-4230-2-3-6-9297726d8a\" DevicePath \"\"" Sep 12 17:17:39.773756 kubelet[2668]: I0912 17:17:39.773176 2668 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/84c2cf6f-71dc-49a6-8f00-978ddfb08898-clustermesh-secrets\") on node \"ci-4230-2-3-6-9297726d8a\" DevicePath \"\"" Sep 12 17:17:39.773756 kubelet[2668]: I0912 17:17:39.773239 
2668 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/84c2cf6f-71dc-49a6-8f00-978ddfb08898-cilium-config-path\") on node \"ci-4230-2-3-6-9297726d8a\" DevicePath \"\"" Sep 12 17:17:39.902656 kubelet[2668]: I0912 17:17:39.902625 2668 scope.go:117] "RemoveContainer" containerID="a444c7f034b0f1a15ccf1dae6ec5702c388917d49ab07a8d7b5d80156d235c37" Sep 12 17:17:39.906171 containerd[1496]: time="2025-09-12T17:17:39.905563314Z" level=info msg="RemoveContainer for \"a444c7f034b0f1a15ccf1dae6ec5702c388917d49ab07a8d7b5d80156d235c37\"" Sep 12 17:17:39.915135 systemd[1]: Removed slice kubepods-besteffort-pod4491bb11_dbb1_464d_955a_f7ca7d7c4aab.slice - libcontainer container kubepods-besteffort-pod4491bb11_dbb1_464d_955a_f7ca7d7c4aab.slice. Sep 12 17:17:39.919164 containerd[1496]: time="2025-09-12T17:17:39.918555302Z" level=info msg="RemoveContainer for \"a444c7f034b0f1a15ccf1dae6ec5702c388917d49ab07a8d7b5d80156d235c37\" returns successfully" Sep 12 17:17:39.919739 kubelet[2668]: I0912 17:17:39.919599 2668 scope.go:117] "RemoveContainer" containerID="a444c7f034b0f1a15ccf1dae6ec5702c388917d49ab07a8d7b5d80156d235c37" Sep 12 17:17:39.920254 containerd[1496]: time="2025-09-12T17:17:39.920161054Z" level=error msg="ContainerStatus for \"a444c7f034b0f1a15ccf1dae6ec5702c388917d49ab07a8d7b5d80156d235c37\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a444c7f034b0f1a15ccf1dae6ec5702c388917d49ab07a8d7b5d80156d235c37\": not found" Sep 12 17:17:39.920539 kubelet[2668]: E0912 17:17:39.920461 2668 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a444c7f034b0f1a15ccf1dae6ec5702c388917d49ab07a8d7b5d80156d235c37\": not found" containerID="a444c7f034b0f1a15ccf1dae6ec5702c388917d49ab07a8d7b5d80156d235c37" Sep 12 17:17:39.920539 kubelet[2668]: I0912 17:17:39.920495 2668 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a444c7f034b0f1a15ccf1dae6ec5702c388917d49ab07a8d7b5d80156d235c37"} err="failed to get container status \"a444c7f034b0f1a15ccf1dae6ec5702c388917d49ab07a8d7b5d80156d235c37\": rpc error: code = NotFound desc = an error occurred when try to find container \"a444c7f034b0f1a15ccf1dae6ec5702c388917d49ab07a8d7b5d80156d235c37\": not found" Sep 12 17:17:39.921841 kubelet[2668]: I0912 17:17:39.921773 2668 scope.go:117] "RemoveContainer" containerID="8eac21fa95e339904c2665a02e1cc46f0640e80bf8304f1af0caa38e98962a9c" Sep 12 17:17:39.924591 containerd[1496]: time="2025-09-12T17:17:39.924418391Z" level=info msg="RemoveContainer for \"8eac21fa95e339904c2665a02e1cc46f0640e80bf8304f1af0caa38e98962a9c\"" Sep 12 17:17:39.927751 systemd[1]: Removed slice kubepods-burstable-pod84c2cf6f_71dc_49a6_8f00_978ddfb08898.slice - libcontainer container kubepods-burstable-pod84c2cf6f_71dc_49a6_8f00_978ddfb08898.slice. Sep 12 17:17:39.927842 systemd[1]: kubepods-burstable-pod84c2cf6f_71dc_49a6_8f00_978ddfb08898.slice: Consumed 8.782s CPU time, 124.3M memory peak, 128K read from disk, 12.9M written to disk. 
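[Editor's note] The slice names in these entries encode the pod UID: with the systemd cgroup driver, kubelet places each pod in `kubepods-<qos>-pod<uid>.slice` with the dashes in the UID replaced by underscores, which is why `84c2cf6f-71dc-49a6-8f00-978ddfb08898` from the volume entries shows up as `kubepods-burstable-pod84c2cf6f_71dc_49a6_8f00_978ddfb08898.slice`. A small sketch of the mapping, as seen here for burstable and besteffort pods (convention inferred from the log itself):

```python
# Map a pod UID and QoS class to the systemd slice name seen in the log, and back.
def pod_slice(uid: str, qos: str = "burstable") -> str:
    return f"kubepods-{qos}-pod{uid.replace('-', '_')}.slice"

def slice_uid(slice_name: str) -> str:
    return slice_name.split("-pod", 1)[1].removesuffix(".slice").replace("_", "-")

print(pod_slice("84c2cf6f-71dc-49a6-8f00-978ddfb08898"))
# -> kubepods-burstable-pod84c2cf6f_71dc_49a6_8f00_978ddfb08898.slice
print(slice_uid("kubepods-besteffort-pod4491bb11_dbb1_464d_955a_f7ca7d7c4aab.slice"))
# -> 4491bb11-dbb1-464d-955a-f7ca7d7c4aab
```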
Sep 12 17:17:39.933014 containerd[1496]: time="2025-09-12T17:17:39.932480715Z" level=info msg="RemoveContainer for \"8eac21fa95e339904c2665a02e1cc46f0640e80bf8304f1af0caa38e98962a9c\" returns successfully" Sep 12 17:17:39.933165 kubelet[2668]: I0912 17:17:39.933133 2668 scope.go:117] "RemoveContainer" containerID="7a31710100fc667ef58a2366276c95ed81740c38afdd0a10d5e72b70d87fd924" Sep 12 17:17:39.935872 containerd[1496]: time="2025-09-12T17:17:39.935829669Z" level=info msg="RemoveContainer for \"7a31710100fc667ef58a2366276c95ed81740c38afdd0a10d5e72b70d87fd924\"" Sep 12 17:17:39.942473 containerd[1496]: time="2025-09-12T17:17:39.942402888Z" level=info msg="RemoveContainer for \"7a31710100fc667ef58a2366276c95ed81740c38afdd0a10d5e72b70d87fd924\" returns successfully" Sep 12 17:17:39.943152 kubelet[2668]: I0912 17:17:39.942812 2668 scope.go:117] "RemoveContainer" containerID="66000d8fecd9877d4ec6e25d3acd7d2df35d0297424a9d7237dd79255e6cc4f5" Sep 12 17:17:39.944593 containerd[1496]: time="2025-09-12T17:17:39.944563439Z" level=info msg="RemoveContainer for \"66000d8fecd9877d4ec6e25d3acd7d2df35d0297424a9d7237dd79255e6cc4f5\"" Sep 12 17:17:39.948546 containerd[1496]: time="2025-09-12T17:17:39.948513995Z" level=info msg="RemoveContainer for \"66000d8fecd9877d4ec6e25d3acd7d2df35d0297424a9d7237dd79255e6cc4f5\" returns successfully" Sep 12 17:17:39.948863 kubelet[2668]: I0912 17:17:39.948760 2668 scope.go:117] "RemoveContainer" containerID="c5bc51e8b1f2b94d707ddcd35b9f43d2b7eed8b605c4deb5528908dff126293c" Sep 12 17:17:39.953319 containerd[1496]: time="2025-09-12T17:17:39.952582479Z" level=info msg="RemoveContainer for \"c5bc51e8b1f2b94d707ddcd35b9f43d2b7eed8b605c4deb5528908dff126293c\"" Sep 12 17:17:39.958175 containerd[1496]: time="2025-09-12T17:17:39.958077663Z" level=info msg="RemoveContainer for \"c5bc51e8b1f2b94d707ddcd35b9f43d2b7eed8b605c4deb5528908dff126293c\" returns successfully" Sep 12 17:17:39.958458 kubelet[2668]: I0912 17:17:39.958436 2668 scope.go:117] "RemoveContainer" containerID="ba4cae6079b9167c3ad185f817f6e57aa03551086ae1c465c75116103cf7127f" Sep 12 17:17:39.959626 containerd[1496]: time="2025-09-12T17:17:39.959605450Z" level=info msg="RemoveContainer for \"ba4cae6079b9167c3ad185f817f6e57aa03551086ae1c465c75116103cf7127f\"" Sep 12 17:17:39.962977 containerd[1496]: time="2025-09-12T17:17:39.962949724Z" level=info msg="RemoveContainer for \"ba4cae6079b9167c3ad185f817f6e57aa03551086ae1c465c75116103cf7127f\" returns successfully" Sep 12 17:17:39.963429 kubelet[2668]: I0912 17:17:39.963409 2668 scope.go:117] "RemoveContainer" containerID="8eac21fa95e339904c2665a02e1cc46f0640e80bf8304f1af0caa38e98962a9c" Sep 12 17:17:39.963823 containerd[1496]: time="2025-09-12T17:17:39.963747099Z" level=error msg="ContainerStatus for \"8eac21fa95e339904c2665a02e1cc46f0640e80bf8304f1af0caa38e98962a9c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8eac21fa95e339904c2665a02e1cc46f0640e80bf8304f1af0caa38e98962a9c\": not found" Sep 12 17:17:39.964009 kubelet[2668]: E0912 17:17:39.963946 2668 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8eac21fa95e339904c2665a02e1cc46f0640e80bf8304f1af0caa38e98962a9c\": not found" containerID="8eac21fa95e339904c2665a02e1cc46f0640e80bf8304f1af0caa38e98962a9c" Sep 12 17:17:39.964090 kubelet[2668]: I0912 17:17:39.964028 2668 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"8eac21fa95e339904c2665a02e1cc46f0640e80bf8304f1af0caa38e98962a9c"} err="failed to get container status \"8eac21fa95e339904c2665a02e1cc46f0640e80bf8304f1af0caa38e98962a9c\": rpc error: code = NotFound desc = an error occurred when try to find container \"8eac21fa95e339904c2665a02e1cc46f0640e80bf8304f1af0caa38e98962a9c\": not found" Sep 12 17:17:39.964136 kubelet[2668]: I0912 17:17:39.964098 2668 scope.go:117] "RemoveContainer" containerID="7a31710100fc667ef58a2366276c95ed81740c38afdd0a10d5e72b70d87fd924" Sep 12 17:17:39.964481 containerd[1496]: time="2025-09-12T17:17:39.964438268Z" level=error msg="ContainerStatus for \"7a31710100fc667ef58a2366276c95ed81740c38afdd0a10d5e72b70d87fd924\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7a31710100fc667ef58a2366276c95ed81740c38afdd0a10d5e72b70d87fd924\": not found" Sep 12 17:17:39.964670 kubelet[2668]: E0912 17:17:39.964642 2668 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7a31710100fc667ef58a2366276c95ed81740c38afdd0a10d5e72b70d87fd924\": not found" containerID="7a31710100fc667ef58a2366276c95ed81740c38afdd0a10d5e72b70d87fd924" Sep 12 17:17:39.964712 kubelet[2668]: I0912 17:17:39.964686 2668 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7a31710100fc667ef58a2366276c95ed81740c38afdd0a10d5e72b70d87fd924"} err="failed to get container status \"7a31710100fc667ef58a2366276c95ed81740c38afdd0a10d5e72b70d87fd924\": rpc error: code = NotFound desc = an error occurred when try to find container \"7a31710100fc667ef58a2366276c95ed81740c38afdd0a10d5e72b70d87fd924\": not found" Sep 12 17:17:39.964738 kubelet[2668]: I0912 17:17:39.964714 2668 scope.go:117] "RemoveContainer" containerID="66000d8fecd9877d4ec6e25d3acd7d2df35d0297424a9d7237dd79255e6cc4f5" Sep 12 17:17:39.964967 containerd[1496]: time="2025-09-12T17:17:39.964906140Z" level=error msg="ContainerStatus for \"66000d8fecd9877d4ec6e25d3acd7d2df35d0297424a9d7237dd79255e6cc4f5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"66000d8fecd9877d4ec6e25d3acd7d2df35d0297424a9d7237dd79255e6cc4f5\": not found" Sep 12 17:17:39.965145 kubelet[2668]: E0912 17:17:39.965041 2668 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"66000d8fecd9877d4ec6e25d3acd7d2df35d0297424a9d7237dd79255e6cc4f5\": not found" containerID="66000d8fecd9877d4ec6e25d3acd7d2df35d0297424a9d7237dd79255e6cc4f5" Sep 12 17:17:39.965221 kubelet[2668]: I0912 17:17:39.965157 2668 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"66000d8fecd9877d4ec6e25d3acd7d2df35d0297424a9d7237dd79255e6cc4f5"} err="failed to get container status \"66000d8fecd9877d4ec6e25d3acd7d2df35d0297424a9d7237dd79255e6cc4f5\": rpc error: code = NotFound desc = an error occurred when try to find container \"66000d8fecd9877d4ec6e25d3acd7d2df35d0297424a9d7237dd79255e6cc4f5\": not found" Sep 12 17:17:39.965252 kubelet[2668]: I0912 17:17:39.965229 2668 scope.go:117] "RemoveContainer" containerID="c5bc51e8b1f2b94d707ddcd35b9f43d2b7eed8b605c4deb5528908dff126293c" Sep 12 17:17:39.965500 containerd[1496]: time="2025-09-12T17:17:39.965449578Z" level=error msg="ContainerStatus for \"c5bc51e8b1f2b94d707ddcd35b9f43d2b7eed8b605c4deb5528908dff126293c\" failed" error="rpc 
error: code = NotFound desc = an error occurred when try to find container \"c5bc51e8b1f2b94d707ddcd35b9f43d2b7eed8b605c4deb5528908dff126293c\": not found" Sep 12 17:17:39.966036 kubelet[2668]: E0912 17:17:39.965744 2668 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c5bc51e8b1f2b94d707ddcd35b9f43d2b7eed8b605c4deb5528908dff126293c\": not found" containerID="c5bc51e8b1f2b94d707ddcd35b9f43d2b7eed8b605c4deb5528908dff126293c" Sep 12 17:17:39.966036 kubelet[2668]: I0912 17:17:39.965777 2668 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c5bc51e8b1f2b94d707ddcd35b9f43d2b7eed8b605c4deb5528908dff126293c"} err="failed to get container status \"c5bc51e8b1f2b94d707ddcd35b9f43d2b7eed8b605c4deb5528908dff126293c\": rpc error: code = NotFound desc = an error occurred when try to find container \"c5bc51e8b1f2b94d707ddcd35b9f43d2b7eed8b605c4deb5528908dff126293c\": not found" Sep 12 17:17:39.966036 kubelet[2668]: I0912 17:17:39.965796 2668 scope.go:117] "RemoveContainer" containerID="ba4cae6079b9167c3ad185f817f6e57aa03551086ae1c465c75116103cf7127f" Sep 12 17:17:39.966176 containerd[1496]: time="2025-09-12T17:17:39.965971215Z" level=error msg="ContainerStatus for \"ba4cae6079b9167c3ad185f817f6e57aa03551086ae1c465c75116103cf7127f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ba4cae6079b9167c3ad185f817f6e57aa03551086ae1c465c75116103cf7127f\": not found" Sep 12 17:17:39.966296 kubelet[2668]: E0912 17:17:39.966274 2668 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ba4cae6079b9167c3ad185f817f6e57aa03551086ae1c465c75116103cf7127f\": not found" containerID="ba4cae6079b9167c3ad185f817f6e57aa03551086ae1c465c75116103cf7127f" Sep 12 17:17:39.966366 kubelet[2668]: I0912 17:17:39.966301 2668 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ba4cae6079b9167c3ad185f817f6e57aa03551086ae1c465c75116103cf7127f"} err="failed to get container status \"ba4cae6079b9167c3ad185f817f6e57aa03551086ae1c465c75116103cf7127f\": rpc error: code = NotFound desc = an error occurred when try to find container \"ba4cae6079b9167c3ad185f817f6e57aa03551086ae1c465c75116103cf7127f\": not found" Sep 12 17:17:40.359744 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d7431605835e3ddbe30319e38c959ddb6783e277c6916cdbda2902d46321cd2b-rootfs.mount: Deactivated successfully. Sep 12 17:17:40.359881 systemd[1]: var-lib-kubelet-pods-4491bb11\x2ddbb1\x2d464d\x2d955a\x2df7ca7d7c4aab-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dq4vfs.mount: Deactivated successfully. Sep 12 17:17:40.359979 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-faaef948cc03b13ea66467decdcbc8989ffe1433124cc12acdb62a6cdfb7312e-rootfs.mount: Deactivated successfully. Sep 12 17:17:40.360043 systemd[1]: var-lib-kubelet-pods-84c2cf6f\x2d71dc\x2d49a6\x2d8f00\x2d978ddfb08898-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dr8bt2.mount: Deactivated successfully. Sep 12 17:17:40.360151 systemd[1]: var-lib-kubelet-pods-84c2cf6f\x2d71dc\x2d49a6\x2d8f00\x2d978ddfb08898-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
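[Editor's note] The `var-lib-kubelet-pods-...mount` units deactivated around here are the per-volume mounts under `/var/lib/kubelet/pods/<pod-uid>/volumes/<plugin>/<volume-name>`; systemd derives the unit name from the mount path by turning `/` into `-` and escaping literal characters as `\xNN` (so `\x2d` is `-` and `\x7e` is `~`). A rough decoder for the names in these entries (a simplified sketch of systemd's escaping, enough for these paths, not every corner case of systemd.unit(5)):

```python
# Decode a systemd mount-unit name from the entries above back into its mount path.
import re

def unescape_mount_unit(name: str) -> str:
    path = "/" + name.removesuffix(".mount").replace("-", "/")   # '-' encodes '/'
    return re.sub(r"\\x([0-9a-fA-F]{2})",                        # '\xNN' encodes a literal byte
                  lambda m: chr(int(m.group(1), 16)), path)

unit = r"var-lib-kubelet-pods-84c2cf6f\x2d71dc\x2d49a6\x2d8f00\x2d978ddfb08898-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount"
print(unescape_mount_unit(unit))
# -> /var/lib/kubelet/pods/84c2cf6f-71dc-49a6-8f00-978ddfb08898/volumes/kubernetes.io~projected/hubble-tls
```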
Sep 12 17:17:40.360231 systemd[1]: var-lib-kubelet-pods-84c2cf6f\x2d71dc\x2d49a6\x2d8f00\x2d978ddfb08898-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Sep 12 17:17:41.332748 kubelet[2668]: I0912 17:17:41.332706 2668 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4491bb11-dbb1-464d-955a-f7ca7d7c4aab" path="/var/lib/kubelet/pods/4491bb11-dbb1-464d-955a-f7ca7d7c4aab/volumes"
Sep 12 17:17:41.333582 kubelet[2668]: I0912 17:17:41.333470 2668 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84c2cf6f-71dc-49a6-8f00-978ddfb08898" path="/var/lib/kubelet/pods/84c2cf6f-71dc-49a6-8f00-978ddfb08898/volumes"
Sep 12 17:17:41.443582 sshd[4257]: Connection closed by 139.178.68.195 port 46490
Sep 12 17:17:41.444123 sshd-session[4255]: pam_unix(sshd:session): session closed for user core
Sep 12 17:17:41.449684 systemd[1]: sshd@20-168.119.179.98:22-139.178.68.195:46490.service: Deactivated successfully.
Sep 12 17:17:41.452286 systemd[1]: session-20.scope: Deactivated successfully.
Sep 12 17:17:41.452605 systemd[1]: session-20.scope: Consumed 1.446s CPU time, 23.6M memory peak.
Sep 12 17:17:41.453419 systemd-logind[1477]: Session 20 logged out. Waiting for processes to exit.
Sep 12 17:17:41.455117 systemd-logind[1477]: Removed session 20.
Sep 12 17:17:41.623603 systemd[1]: Started sshd@21-168.119.179.98:22-139.178.68.195:38584.service - OpenSSH per-connection server daemon (139.178.68.195:38584).
Sep 12 17:17:42.602704 sshd[4420]: Accepted publickey for core from 139.178.68.195 port 38584 ssh2: RSA SHA256:4I4ir6DTNicv1nR1BCNJAvLWYZ+QnMBQBVoKHg57aHc
Sep 12 17:17:42.604691 sshd-session[4420]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:17:42.612183 systemd-logind[1477]: New session 21 of user core.
Sep 12 17:17:42.618552 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 12 17:17:43.548688 kubelet[2668]: E0912 17:17:43.548615 2668 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Sep 12 17:17:44.069129 systemd[1]: Created slice kubepods-burstable-podbe7559e3_3156_49b1_8e13_06cde328a7b8.slice - libcontainer container kubepods-burstable-podbe7559e3_3156_49b1_8e13_06cde328a7b8.slice.
Sep 12 17:17:44.204284 kubelet[2668]: I0912 17:17:44.203639 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/be7559e3-3156-49b1-8e13-06cde328a7b8-etc-cni-netd\") pod \"cilium-94tgr\" (UID: \"be7559e3-3156-49b1-8e13-06cde328a7b8\") " pod="kube-system/cilium-94tgr"
Sep 12 17:17:44.204284 kubelet[2668]: I0912 17:17:44.203710 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/be7559e3-3156-49b1-8e13-06cde328a7b8-host-proc-sys-net\") pod \"cilium-94tgr\" (UID: \"be7559e3-3156-49b1-8e13-06cde328a7b8\") " pod="kube-system/cilium-94tgr"
Sep 12 17:17:44.204284 kubelet[2668]: I0912 17:17:44.203742 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/be7559e3-3156-49b1-8e13-06cde328a7b8-bpf-maps\") pod \"cilium-94tgr\" (UID: \"be7559e3-3156-49b1-8e13-06cde328a7b8\") " pod="kube-system/cilium-94tgr"
Sep 12 17:17:44.204284 kubelet[2668]: I0912 17:17:44.203784 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/be7559e3-3156-49b1-8e13-06cde328a7b8-lib-modules\") pod \"cilium-94tgr\" (UID: \"be7559e3-3156-49b1-8e13-06cde328a7b8\") " pod="kube-system/cilium-94tgr"
Sep 12 17:17:44.204284 kubelet[2668]: I0912 17:17:44.203817 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/be7559e3-3156-49b1-8e13-06cde328a7b8-xtables-lock\") pod \"cilium-94tgr\" (UID: \"be7559e3-3156-49b1-8e13-06cde328a7b8\") " pod="kube-system/cilium-94tgr"
Sep 12 17:17:44.204284 kubelet[2668]: I0912 17:17:44.203844 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/be7559e3-3156-49b1-8e13-06cde328a7b8-clustermesh-secrets\") pod \"cilium-94tgr\" (UID: \"be7559e3-3156-49b1-8e13-06cde328a7b8\") " pod="kube-system/cilium-94tgr"
Sep 12 17:17:44.204740 kubelet[2668]: I0912 17:17:44.203874 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pz9rt\" (UniqueName: \"kubernetes.io/projected/be7559e3-3156-49b1-8e13-06cde328a7b8-kube-api-access-pz9rt\") pod \"cilium-94tgr\" (UID: \"be7559e3-3156-49b1-8e13-06cde328a7b8\") " pod="kube-system/cilium-94tgr"
Sep 12 17:17:44.204740 kubelet[2668]: I0912 17:17:44.203907 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/be7559e3-3156-49b1-8e13-06cde328a7b8-cni-path\") pod \"cilium-94tgr\" (UID: \"be7559e3-3156-49b1-8e13-06cde328a7b8\") " pod="kube-system/cilium-94tgr"
Sep 12 17:17:44.204740 kubelet[2668]: I0912 17:17:44.203933 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/be7559e3-3156-49b1-8e13-06cde328a7b8-cilium-config-path\") pod \"cilium-94tgr\" (UID: \"be7559e3-3156-49b1-8e13-06cde328a7b8\") " pod="kube-system/cilium-94tgr"
Sep 12 17:17:44.204740 kubelet[2668]: I0912 17:17:44.203959 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/be7559e3-3156-49b1-8e13-06cde328a7b8-cilium-ipsec-secrets\") pod \"cilium-94tgr\" (UID: \"be7559e3-3156-49b1-8e13-06cde328a7b8\") " pod="kube-system/cilium-94tgr"
Sep 12 17:17:44.204740 kubelet[2668]: I0912 17:17:44.203985 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/be7559e3-3156-49b1-8e13-06cde328a7b8-host-proc-sys-kernel\") pod \"cilium-94tgr\" (UID: \"be7559e3-3156-49b1-8e13-06cde328a7b8\") " pod="kube-system/cilium-94tgr"
Sep 12 17:17:44.204975 kubelet[2668]: I0912 17:17:44.204016 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/be7559e3-3156-49b1-8e13-06cde328a7b8-hostproc\") pod \"cilium-94tgr\" (UID: \"be7559e3-3156-49b1-8e13-06cde328a7b8\") " pod="kube-system/cilium-94tgr"
Sep 12 17:17:44.204975 kubelet[2668]: I0912 17:17:44.204041 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/be7559e3-3156-49b1-8e13-06cde328a7b8-cilium-cgroup\") pod \"cilium-94tgr\" (UID: \"be7559e3-3156-49b1-8e13-06cde328a7b8\") " pod="kube-system/cilium-94tgr"
Sep 12 17:17:44.204975 kubelet[2668]: I0912 17:17:44.204066 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/be7559e3-3156-49b1-8e13-06cde328a7b8-hubble-tls\") pod \"cilium-94tgr\" (UID: \"be7559e3-3156-49b1-8e13-06cde328a7b8\") " pod="kube-system/cilium-94tgr"
Sep 12 17:17:44.204975 kubelet[2668]: I0912 17:17:44.204093 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/be7559e3-3156-49b1-8e13-06cde328a7b8-cilium-run\") pod \"cilium-94tgr\" (UID: \"be7559e3-3156-49b1-8e13-06cde328a7b8\") " pod="kube-system/cilium-94tgr"
Sep 12 17:17:44.238007 sshd[4422]: Connection closed by 139.178.68.195 port 38584
Sep 12 17:17:44.239038 sshd-session[4420]: pam_unix(sshd:session): session closed for user core
Sep 12 17:17:44.244585 systemd[1]: sshd@21-168.119.179.98:22-139.178.68.195:38584.service: Deactivated successfully.
Sep 12 17:17:44.248442 systemd[1]: session-21.scope: Deactivated successfully.
Sep 12 17:17:44.250857 systemd-logind[1477]: Session 21 logged out. Waiting for processes to exit.
Sep 12 17:17:44.253031 systemd-logind[1477]: Removed session 21.
Sep 12 17:17:44.376297 containerd[1496]: time="2025-09-12T17:17:44.375542955Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-94tgr,Uid:be7559e3-3156-49b1-8e13-06cde328a7b8,Namespace:kube-system,Attempt:0,}"
Sep 12 17:17:44.403605 containerd[1496]: time="2025-09-12T17:17:44.403474877Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 12 17:17:44.403896 containerd[1496]: time="2025-09-12T17:17:44.403658890Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 12 17:17:44.403896 containerd[1496]: time="2025-09-12T17:17:44.403714414Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:17:44.403896 containerd[1496]: time="2025-09-12T17:17:44.403812821Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:17:44.423308 systemd[1]: Started sshd@22-168.119.179.98:22-139.178.68.195:38588.service - OpenSSH per-connection server daemon (139.178.68.195:38588).
Sep 12 17:17:44.427976 systemd[1]: Started cri-containerd-f6797a0f5cbffd6637f528a451651bbcbc8409fbbe215c86dc51cc55984c8f56.scope - libcontainer container f6797a0f5cbffd6637f528a451651bbcbc8409fbbe215c86dc51cc55984c8f56.
Sep 12 17:17:44.462069 containerd[1496]: time="2025-09-12T17:17:44.461928585Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-94tgr,Uid:be7559e3-3156-49b1-8e13-06cde328a7b8,Namespace:kube-system,Attempt:0,} returns sandbox id \"f6797a0f5cbffd6637f528a451651bbcbc8409fbbe215c86dc51cc55984c8f56\""
Sep 12 17:17:44.469634 containerd[1496]: time="2025-09-12T17:17:44.469568573Z" level=info msg="CreateContainer within sandbox \"f6797a0f5cbffd6637f528a451651bbcbc8409fbbe215c86dc51cc55984c8f56\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Sep 12 17:17:44.480010 containerd[1496]: time="2025-09-12T17:17:44.479922555Z" level=info msg="CreateContainer within sandbox \"f6797a0f5cbffd6637f528a451651bbcbc8409fbbe215c86dc51cc55984c8f56\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"45f839cb6ba927550c4f84df45fa6fa5f96ebb7ec9e457bd8d12fb6904582b4d\""
Sep 12 17:17:44.482236 containerd[1496]: time="2025-09-12T17:17:44.481458585Z" level=info msg="StartContainer for \"45f839cb6ba927550c4f84df45fa6fa5f96ebb7ec9e457bd8d12fb6904582b4d\""
Sep 12 17:17:44.517488 systemd[1]: Started cri-containerd-45f839cb6ba927550c4f84df45fa6fa5f96ebb7ec9e457bd8d12fb6904582b4d.scope - libcontainer container 45f839cb6ba927550c4f84df45fa6fa5f96ebb7ec9e457bd8d12fb6904582b4d.
Sep 12 17:17:44.553976 containerd[1496]: time="2025-09-12T17:17:44.553829411Z" level=info msg="StartContainer for \"45f839cb6ba927550c4f84df45fa6fa5f96ebb7ec9e457bd8d12fb6904582b4d\" returns successfully"
Sep 12 17:17:44.566459 systemd[1]: cri-containerd-45f839cb6ba927550c4f84df45fa6fa5f96ebb7ec9e457bd8d12fb6904582b4d.scope: Deactivated successfully.
Sep 12 17:17:44.611716 containerd[1496]: time="2025-09-12T17:17:44.611599991Z" level=info msg="shim disconnected" id=45f839cb6ba927550c4f84df45fa6fa5f96ebb7ec9e457bd8d12fb6904582b4d namespace=k8s.io
Sep 12 17:17:44.611716 containerd[1496]: time="2025-09-12T17:17:44.611700438Z" level=warning msg="cleaning up after shim disconnected" id=45f839cb6ba927550c4f84df45fa6fa5f96ebb7ec9e457bd8d12fb6904582b4d namespace=k8s.io
Sep 12 17:17:44.611716 containerd[1496]: time="2025-09-12T17:17:44.611713719Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 12 17:17:44.946918 containerd[1496]: time="2025-09-12T17:17:44.946874696Z" level=info msg="CreateContainer within sandbox \"f6797a0f5cbffd6637f528a451651bbcbc8409fbbe215c86dc51cc55984c8f56\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Sep 12 17:17:44.962154 containerd[1496]: time="2025-09-12T17:17:44.962078786Z" level=info msg="CreateContainer within sandbox \"f6797a0f5cbffd6637f528a451651bbcbc8409fbbe215c86dc51cc55984c8f56\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"51daccdb6cda801b2c6e05dc647c9bb718db384aadce5a5b708383d1073ce1e2\""
Sep 12 17:17:44.964957 containerd[1496]: time="2025-09-12T17:17:44.963989083Z" level=info msg="StartContainer for \"51daccdb6cda801b2c6e05dc647c9bb718db384aadce5a5b708383d1073ce1e2\""
Sep 12 17:17:44.993523 systemd[1]: Started cri-containerd-51daccdb6cda801b2c6e05dc647c9bb718db384aadce5a5b708383d1073ce1e2.scope - libcontainer container 51daccdb6cda801b2c6e05dc647c9bb718db384aadce5a5b708383d1073ce1e2.
Sep 12 17:17:45.019456 containerd[1496]: time="2025-09-12T17:17:45.019289771Z" level=info msg="StartContainer for \"51daccdb6cda801b2c6e05dc647c9bb718db384aadce5a5b708383d1073ce1e2\" returns successfully"
Sep 12 17:17:45.028196 systemd[1]: cri-containerd-51daccdb6cda801b2c6e05dc647c9bb718db384aadce5a5b708383d1073ce1e2.scope: Deactivated successfully.
Sep 12 17:17:45.055885 containerd[1496]: time="2025-09-12T17:17:45.055637548Z" level=info msg="shim disconnected" id=51daccdb6cda801b2c6e05dc647c9bb718db384aadce5a5b708383d1073ce1e2 namespace=k8s.io
Sep 12 17:17:45.055885 containerd[1496]: time="2025-09-12T17:17:45.055707793Z" level=warning msg="cleaning up after shim disconnected" id=51daccdb6cda801b2c6e05dc647c9bb718db384aadce5a5b708383d1073ce1e2 namespace=k8s.io
Sep 12 17:17:45.055885 containerd[1496]: time="2025-09-12T17:17:45.055716193Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 12 17:17:45.426939 sshd[4465]: Accepted publickey for core from 139.178.68.195 port 38588 ssh2: RSA SHA256:4I4ir6DTNicv1nR1BCNJAvLWYZ+QnMBQBVoKHg57aHc
Sep 12 17:17:45.428944 sshd-session[4465]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:17:45.436106 systemd-logind[1477]: New session 22 of user core.
Sep 12 17:17:45.441423 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 12 17:17:45.955248 containerd[1496]: time="2025-09-12T17:17:45.955181180Z" level=info msg="CreateContainer within sandbox \"f6797a0f5cbffd6637f528a451651bbcbc8409fbbe215c86dc51cc55984c8f56\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 12 17:17:45.975609 containerd[1496]: time="2025-09-12T17:17:45.975471000Z" level=info msg="CreateContainer within sandbox \"f6797a0f5cbffd6637f528a451651bbcbc8409fbbe215c86dc51cc55984c8f56\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"232cb14c4aaae1017a9d5493609665a25341810fdc8f094e2ad02ebb2ec969d0\""
Sep 12 17:17:45.978588 containerd[1496]: time="2025-09-12T17:17:45.978410852Z" level=info msg="StartContainer for \"232cb14c4aaae1017a9d5493609665a25341810fdc8f094e2ad02ebb2ec969d0\""
Sep 12 17:17:46.019568 systemd[1]: Started cri-containerd-232cb14c4aaae1017a9d5493609665a25341810fdc8f094e2ad02ebb2ec969d0.scope - libcontainer container 232cb14c4aaae1017a9d5493609665a25341810fdc8f094e2ad02ebb2ec969d0.
Sep 12 17:17:46.056236 containerd[1496]: time="2025-09-12T17:17:46.056089781Z" level=info msg="StartContainer for \"232cb14c4aaae1017a9d5493609665a25341810fdc8f094e2ad02ebb2ec969d0\" returns successfully"
Sep 12 17:17:46.059644 systemd[1]: cri-containerd-232cb14c4aaae1017a9d5493609665a25341810fdc8f094e2ad02ebb2ec969d0.scope: Deactivated successfully.
Sep 12 17:17:46.085645 containerd[1496]: time="2025-09-12T17:17:46.085570912Z" level=info msg="shim disconnected" id=232cb14c4aaae1017a9d5493609665a25341810fdc8f094e2ad02ebb2ec969d0 namespace=k8s.io
Sep 12 17:17:46.085645 containerd[1496]: time="2025-09-12T17:17:46.085645758Z" level=warning msg="cleaning up after shim disconnected" id=232cb14c4aaae1017a9d5493609665a25341810fdc8f094e2ad02ebb2ec969d0 namespace=k8s.io
Sep 12 17:17:46.085645 containerd[1496]: time="2025-09-12T17:17:46.085655919Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 12 17:17:46.119239 sshd[4606]: Connection closed by 139.178.68.195 port 38588
Sep 12 17:17:46.121431 sshd-session[4465]: pam_unix(sshd:session): session closed for user core
Sep 12 17:17:46.126805 systemd[1]: sshd@22-168.119.179.98:22-139.178.68.195:38588.service: Deactivated successfully.
Sep 12 17:17:46.129122 systemd[1]: session-22.scope: Deactivated successfully.
Sep 12 17:17:46.131951 systemd-logind[1477]: Session 22 logged out. Waiting for processes to exit.
Sep 12 17:17:46.135129 systemd-logind[1477]: Removed session 22.
Sep 12 17:17:46.290147 systemd[1]: Started sshd@23-168.119.179.98:22-139.178.68.195:38594.service - OpenSSH per-connection server daemon (139.178.68.195:38594).
Sep 12 17:17:46.314262 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-232cb14c4aaae1017a9d5493609665a25341810fdc8f094e2ad02ebb2ec969d0-rootfs.mount: Deactivated successfully.
Sep 12 17:17:46.960716 containerd[1496]: time="2025-09-12T17:17:46.959547099Z" level=info msg="CreateContainer within sandbox \"f6797a0f5cbffd6637f528a451651bbcbc8409fbbe215c86dc51cc55984c8f56\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 12 17:17:46.978116 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2124622056.mount: Deactivated successfully.
Sep 12 17:17:46.982382 containerd[1496]: time="2025-09-12T17:17:46.982148773Z" level=info msg="CreateContainer within sandbox \"f6797a0f5cbffd6637f528a451651bbcbc8409fbbe215c86dc51cc55984c8f56\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"09cf6e09d96104fd25c089381312b232061b31c87975fa8d184e58df3041a334\""
Sep 12 17:17:46.983544 containerd[1496]: time="2025-09-12T17:17:46.983309857Z" level=info msg="StartContainer for \"09cf6e09d96104fd25c089381312b232061b31c87975fa8d184e58df3041a334\""
Sep 12 17:17:47.024394 systemd[1]: Started cri-containerd-09cf6e09d96104fd25c089381312b232061b31c87975fa8d184e58df3041a334.scope - libcontainer container 09cf6e09d96104fd25c089381312b232061b31c87975fa8d184e58df3041a334.
Sep 12 17:17:47.053341 systemd[1]: cri-containerd-09cf6e09d96104fd25c089381312b232061b31c87975fa8d184e58df3041a334.scope: Deactivated successfully.
Sep 12 17:17:47.059196 containerd[1496]: time="2025-09-12T17:17:47.059156958Z" level=info msg="StartContainer for \"09cf6e09d96104fd25c089381312b232061b31c87975fa8d184e58df3041a334\" returns successfully"
Sep 12 17:17:47.082237 containerd[1496]: time="2025-09-12T17:17:47.082137587Z" level=info msg="shim disconnected" id=09cf6e09d96104fd25c089381312b232061b31c87975fa8d184e58df3041a334 namespace=k8s.io
Sep 12 17:17:47.082237 containerd[1496]: time="2025-09-12T17:17:47.082237674Z" level=warning msg="cleaning up after shim disconnected" id=09cf6e09d96104fd25c089381312b232061b31c87975fa8d184e58df3041a334 namespace=k8s.io
Sep 12 17:17:47.082638 containerd[1496]: time="2025-09-12T17:17:47.082253595Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 12 17:17:47.159026 kubelet[2668]: I0912 17:17:47.157052 2668 setters.go:618] "Node became not ready" node="ci-4230-2-3-6-9297726d8a" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-12T17:17:47Z","lastTransitionTime":"2025-09-12T17:17:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Sep 12 17:17:47.269825 sshd[4671]: Accepted publickey for core from 139.178.68.195 port 38594 ssh2: RSA SHA256:4I4ir6DTNicv1nR1BCNJAvLWYZ+QnMBQBVoKHg57aHc
Sep 12 17:17:47.272509 sshd-session[4671]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:17:47.281280 systemd-logind[1477]: New session 23 of user core.
Sep 12 17:17:47.287539 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 12 17:17:47.314705 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-09cf6e09d96104fd25c089381312b232061b31c87975fa8d184e58df3041a334-rootfs.mount: Deactivated successfully.
Sep 12 17:17:47.966896 containerd[1496]: time="2025-09-12T17:17:47.966835338Z" level=info msg="CreateContainer within sandbox \"f6797a0f5cbffd6637f528a451651bbcbc8409fbbe215c86dc51cc55984c8f56\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 12 17:17:47.990382 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount635365401.mount: Deactivated successfully.
Sep 12 17:17:47.992963 containerd[1496]: time="2025-09-12T17:17:47.992926232Z" level=info msg="CreateContainer within sandbox \"f6797a0f5cbffd6637f528a451651bbcbc8409fbbe215c86dc51cc55984c8f56\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"1c5ca559ba02d950446d4b1205820ade8ba402a62dbe7788e53abd08d21167f8\""
Sep 12 17:17:47.994037 containerd[1496]: time="2025-09-12T17:17:47.993913984Z" level=info msg="StartContainer for \"1c5ca559ba02d950446d4b1205820ade8ba402a62dbe7788e53abd08d21167f8\""
Sep 12 17:17:48.027556 systemd[1]: Started cri-containerd-1c5ca559ba02d950446d4b1205820ade8ba402a62dbe7788e53abd08d21167f8.scope - libcontainer container 1c5ca559ba02d950446d4b1205820ade8ba402a62dbe7788e53abd08d21167f8.
Sep 12 17:17:48.060234 containerd[1496]: time="2025-09-12T17:17:48.058976205Z" level=info msg="StartContainer for \"1c5ca559ba02d950446d4b1205820ade8ba402a62dbe7788e53abd08d21167f8\" returns successfully"
Sep 12 17:17:48.395333 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Sep 12 17:17:51.344251 systemd-networkd[1395]: lxc_health: Link UP
Sep 12 17:17:51.344521 systemd-networkd[1395]: lxc_health: Gained carrier
Sep 12 17:17:52.161118 systemd[1]: run-containerd-runc-k8s.io-1c5ca559ba02d950446d4b1205820ade8ba402a62dbe7788e53abd08d21167f8-runc.TOjaiz.mount: Deactivated successfully.
Sep 12 17:17:52.401779 kubelet[2668]: I0912 17:17:52.401703 2668 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-94tgr" podStartSLOduration=8.40168345 podStartE2EDuration="8.40168345s" podCreationTimestamp="2025-09-12 17:17:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:17:48.988507605 +0000 UTC m=+165.879546325" watchObservedRunningTime="2025-09-12 17:17:52.40168345 +0000 UTC m=+169.292722090"
Sep 12 17:17:53.176443 systemd-networkd[1395]: lxc_health: Gained IPv6LL
Sep 12 17:17:56.497855 systemd[1]: run-containerd-runc-k8s.io-1c5ca559ba02d950446d4b1205820ade8ba402a62dbe7788e53abd08d21167f8-runc.MAKXUT.mount: Deactivated successfully.
Sep 12 17:17:56.726511 sshd[4725]: Connection closed by 139.178.68.195 port 38594
Sep 12 17:17:56.725985 sshd-session[4671]: pam_unix(sshd:session): session closed for user core
Sep 12 17:17:56.730572 systemd[1]: sshd@23-168.119.179.98:22-139.178.68.195:38594.service: Deactivated successfully.
Sep 12 17:17:56.733586 systemd[1]: session-23.scope: Deactivated successfully.
Sep 12 17:17:56.735010 systemd-logind[1477]: Session 23 logged out. Waiting for processes to exit.
Sep 12 17:17:56.736612 systemd-logind[1477]: Removed session 23.
Sep 12 17:18:03.333383 containerd[1496]: time="2025-09-12T17:18:03.333336797Z" level=info msg="StopPodSandbox for \"faaef948cc03b13ea66467decdcbc8989ffe1433124cc12acdb62a6cdfb7312e\""
Sep 12 17:18:03.335646 containerd[1496]: time="2025-09-12T17:18:03.333434245Z" level=info msg="TearDown network for sandbox \"faaef948cc03b13ea66467decdcbc8989ffe1433124cc12acdb62a6cdfb7312e\" successfully"
Sep 12 17:18:03.335646 containerd[1496]: time="2025-09-12T17:18:03.333446086Z" level=info msg="StopPodSandbox for \"faaef948cc03b13ea66467decdcbc8989ffe1433124cc12acdb62a6cdfb7312e\" returns successfully"
Sep 12 17:18:03.335646 containerd[1496]: time="2025-09-12T17:18:03.333964485Z" level=info msg="RemovePodSandbox for \"faaef948cc03b13ea66467decdcbc8989ffe1433124cc12acdb62a6cdfb7312e\""
Sep 12 17:18:03.335646 containerd[1496]: time="2025-09-12T17:18:03.333988047Z" level=info msg="Forcibly stopping sandbox \"faaef948cc03b13ea66467decdcbc8989ffe1433124cc12acdb62a6cdfb7312e\""
Sep 12 17:18:03.335646 containerd[1496]: time="2025-09-12T17:18:03.334026930Z" level=info msg="TearDown network for sandbox \"faaef948cc03b13ea66467decdcbc8989ffe1433124cc12acdb62a6cdfb7312e\" successfully"
Sep 12 17:18:03.338589 containerd[1496]: time="2025-09-12T17:18:03.338547035Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"faaef948cc03b13ea66467decdcbc8989ffe1433124cc12acdb62a6cdfb7312e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Sep 12 17:18:03.338647 containerd[1496]: time="2025-09-12T17:18:03.338624001Z" level=info msg="RemovePodSandbox \"faaef948cc03b13ea66467decdcbc8989ffe1433124cc12acdb62a6cdfb7312e\" returns successfully"
Sep 12 17:18:03.340597 containerd[1496]: time="2025-09-12T17:18:03.340555029Z" level=info msg="StopPodSandbox for \"d7431605835e3ddbe30319e38c959ddb6783e277c6916cdbda2902d46321cd2b\""
Sep 12 17:18:03.340675 containerd[1496]: time="2025-09-12T17:18:03.340649116Z" level=info msg="TearDown network for sandbox \"d7431605835e3ddbe30319e38c959ddb6783e277c6916cdbda2902d46321cd2b\" successfully"
Sep 12 17:18:03.340675 containerd[1496]: time="2025-09-12T17:18:03.340665677Z" level=info msg="StopPodSandbox for \"d7431605835e3ddbe30319e38c959ddb6783e277c6916cdbda2902d46321cd2b\" returns successfully"
Sep 12 17:18:03.340986 containerd[1496]: time="2025-09-12T17:18:03.340962740Z" level=info msg="RemovePodSandbox for \"d7431605835e3ddbe30319e38c959ddb6783e277c6916cdbda2902d46321cd2b\""
Sep 12 17:18:03.341036 containerd[1496]: time="2025-09-12T17:18:03.340992582Z" level=info msg="Forcibly stopping sandbox \"d7431605835e3ddbe30319e38c959ddb6783e277c6916cdbda2902d46321cd2b\""
Sep 12 17:18:03.341060 containerd[1496]: time="2025-09-12T17:18:03.341043266Z" level=info msg="TearDown network for sandbox \"d7431605835e3ddbe30319e38c959ddb6783e277c6916cdbda2902d46321cd2b\" successfully"
Sep 12 17:18:03.345450 containerd[1496]: time="2025-09-12T17:18:03.345416840Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d7431605835e3ddbe30319e38c959ddb6783e277c6916cdbda2902d46321cd2b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Sep 12 17:18:03.345520 containerd[1496]: time="2025-09-12T17:18:03.345467764Z" level=info msg="RemovePodSandbox \"d7431605835e3ddbe30319e38c959ddb6783e277c6916cdbda2902d46321cd2b\" returns successfully"
Sep 12 17:18:12.444143 systemd[1]: cri-containerd-f1fd4f5f47f69db12876c08e671c90ecb143cae9d680f49f21f8358df5ad3686.scope: Deactivated successfully.
Sep 12 17:18:12.446324 systemd[1]: cri-containerd-f1fd4f5f47f69db12876c08e671c90ecb143cae9d680f49f21f8358df5ad3686.scope: Consumed 4.304s CPU time, 53.4M memory peak.
Sep 12 17:18:12.468082 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f1fd4f5f47f69db12876c08e671c90ecb143cae9d680f49f21f8358df5ad3686-rootfs.mount: Deactivated successfully.
Sep 12 17:18:12.480280 containerd[1496]: time="2025-09-12T17:18:12.479963682Z" level=info msg="shim disconnected" id=f1fd4f5f47f69db12876c08e671c90ecb143cae9d680f49f21f8358df5ad3686 namespace=k8s.io
Sep 12 17:18:12.480280 containerd[1496]: time="2025-09-12T17:18:12.480066730Z" level=warning msg="cleaning up after shim disconnected" id=f1fd4f5f47f69db12876c08e671c90ecb143cae9d680f49f21f8358df5ad3686 namespace=k8s.io
Sep 12 17:18:12.480280 containerd[1496]: time="2025-09-12T17:18:12.480085771Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 12 17:18:12.508293 kubelet[2668]: E0912 17:18:12.507654 2668 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:50538->10.0.0.2:2379: read: connection timed out"
Sep 12 17:18:12.513843 systemd[1]: cri-containerd-6fed9c448a2ca1f91d613c6532f5470f8a6bf9a6171edca97d56df0006b374d1.scope: Deactivated successfully.
Sep 12 17:18:12.515246 systemd[1]: cri-containerd-6fed9c448a2ca1f91d613c6532f5470f8a6bf9a6171edca97d56df0006b374d1.scope: Consumed 4.592s CPU time, 21M memory peak.
Sep 12 17:18:12.534941 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6fed9c448a2ca1f91d613c6532f5470f8a6bf9a6171edca97d56df0006b374d1-rootfs.mount: Deactivated successfully.
Sep 12 17:18:12.545410 containerd[1496]: time="2025-09-12T17:18:12.545265080Z" level=info msg="shim disconnected" id=6fed9c448a2ca1f91d613c6532f5470f8a6bf9a6171edca97d56df0006b374d1 namespace=k8s.io
Sep 12 17:18:12.545410 containerd[1496]: time="2025-09-12T17:18:12.545361368Z" level=warning msg="cleaning up after shim disconnected" id=6fed9c448a2ca1f91d613c6532f5470f8a6bf9a6171edca97d56df0006b374d1 namespace=k8s.io
Sep 12 17:18:12.545410 containerd[1496]: time="2025-09-12T17:18:12.545380129Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 12 17:18:13.025533 kubelet[2668]: I0912 17:18:13.025494 2668 scope.go:117] "RemoveContainer" containerID="f1fd4f5f47f69db12876c08e671c90ecb143cae9d680f49f21f8358df5ad3686"
Sep 12 17:18:13.028468 containerd[1496]: time="2025-09-12T17:18:13.028348296Z" level=info msg="CreateContainer within sandbox \"403161e05deaca108245b736e87aa5fc845b6052d7c2b85b9413fdd9f9700505\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Sep 12 17:18:13.029843 kubelet[2668]: I0912 17:18:13.029816 2668 scope.go:117] "RemoveContainer" containerID="6fed9c448a2ca1f91d613c6532f5470f8a6bf9a6171edca97d56df0006b374d1"
Sep 12 17:18:13.031956 containerd[1496]: time="2025-09-12T17:18:13.031890452Z" level=info msg="CreateContainer within sandbox \"c8d5c08535ea12743b9d96b65a53a3a1f354354d7b46d1ac22df83654acd84a8\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Sep 12 17:18:13.049467 containerd[1496]: time="2025-09-12T17:18:13.049363053Z" level=info msg="CreateContainer within sandbox \"403161e05deaca108245b736e87aa5fc845b6052d7c2b85b9413fdd9f9700505\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"6ab2dabfa540297362d7ebf9331a9879fb4e27aa1c35d1d212438fb454719edd\""
Sep 12 17:18:13.050264 containerd[1496]: time="2025-09-12T17:18:13.050076668Z" level=info msg="StartContainer for \"6ab2dabfa540297362d7ebf9331a9879fb4e27aa1c35d1d212438fb454719edd\""
Sep 12 17:18:13.052406 containerd[1496]: time="2025-09-12T17:18:13.052348805Z" level=info msg="CreateContainer within sandbox \"c8d5c08535ea12743b9d96b65a53a3a1f354354d7b46d1ac22df83654acd84a8\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"b647504cffe3b50d4b4ba8d6802a146e64747c32b06f04ff77fe516b728ac7e8\""
Sep 12 17:18:13.053241 containerd[1496]: time="2025-09-12T17:18:13.052918250Z" level=info msg="StartContainer for \"b647504cffe3b50d4b4ba8d6802a146e64747c32b06f04ff77fe516b728ac7e8\""
Sep 12 17:18:13.078375 systemd[1]: Started cri-containerd-b647504cffe3b50d4b4ba8d6802a146e64747c32b06f04ff77fe516b728ac7e8.scope - libcontainer container b647504cffe3b50d4b4ba8d6802a146e64747c32b06f04ff77fe516b728ac7e8.
Sep 12 17:18:13.081638 systemd[1]: Started cri-containerd-6ab2dabfa540297362d7ebf9331a9879fb4e27aa1c35d1d212438fb454719edd.scope - libcontainer container 6ab2dabfa540297362d7ebf9331a9879fb4e27aa1c35d1d212438fb454719edd.
Sep 12 17:18:13.127404 containerd[1496]: time="2025-09-12T17:18:13.126998421Z" level=info msg="StartContainer for \"b647504cffe3b50d4b4ba8d6802a146e64747c32b06f04ff77fe516b728ac7e8\" returns successfully"
Sep 12 17:18:13.133410 containerd[1496]: time="2025-09-12T17:18:13.133362157Z" level=info msg="StartContainer for \"6ab2dabfa540297362d7ebf9331a9879fb4e27aa1c35d1d212438fb454719edd\" returns successfully"
Sep 12 17:18:16.603160 kubelet[2668]: E0912 17:18:16.602936 2668 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:50366->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4230-2-3-6-9297726d8a.1864988d0abccd09 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4230-2-3-6-9297726d8a,UID:28c935feed4027c4ff640f80bfcbead3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4230-2-3-6-9297726d8a,},FirstTimestamp:2025-09-12 17:18:06.119292169 +0000 UTC m=+183.010330809,LastTimestamp:2025-09-12 17:18:06.119292169 +0000 UTC m=+183.010330809,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230-2-3-6-9297726d8a,}"