Sep 13 00:02:47.871738 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Sep 13 00:02:47.871762 kernel: Linux version 6.6.106-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Sep 12 22:36:20 -00 2025 Sep 13 00:02:47.871773 kernel: KASLR enabled Sep 13 00:02:47.871779 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II Sep 13 00:02:47.871784 kernel: efi: SMBIOS 3.0=0x139ed0000 MEMATTR=0x1390c1018 ACPI 2.0=0x136760018 RNG=0x13676e918 MEMRESERVE=0x136b43d18 Sep 13 00:02:47.871790 kernel: random: crng init done Sep 13 00:02:47.871797 kernel: ACPI: Early table checksum verification disabled Sep 13 00:02:47.871803 kernel: ACPI: RSDP 0x0000000136760018 000024 (v02 BOCHS ) Sep 13 00:02:47.871809 kernel: ACPI: XSDT 0x000000013676FE98 00006C (v01 BOCHS BXPC 00000001 01000013) Sep 13 00:02:47.871817 kernel: ACPI: FACP 0x000000013676FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Sep 13 00:02:47.871823 kernel: ACPI: DSDT 0x0000000136767518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 13 00:02:47.871829 kernel: ACPI: APIC 0x000000013676FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001) Sep 13 00:02:47.871835 kernel: ACPI: PPTT 0x000000013676FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 13 00:02:47.871841 kernel: ACPI: GTDT 0x000000013676D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 13 00:02:47.871849 kernel: ACPI: MCFG 0x000000013676FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 13 00:02:47.871857 kernel: ACPI: SPCR 0x000000013676E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 13 00:02:47.871863 kernel: ACPI: DBG2 0x000000013676E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Sep 13 00:02:47.871870 kernel: ACPI: IORT 0x000000013676E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Sep 13 00:02:47.871876 kernel: ACPI: BGRT 0x000000013676E798 000038 (v01 INTEL EDK2 00000002 01000013) Sep 13 00:02:47.871882 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600 Sep 13 00:02:47.871889 kernel: NUMA: Failed to initialise from firmware Sep 13 00:02:47.871895 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff] Sep 13 00:02:47.871902 kernel: NUMA: NODE_DATA [mem 0x13966f800-0x139674fff] Sep 13 00:02:47.871908 kernel: Zone ranges: Sep 13 00:02:47.871914 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff] Sep 13 00:02:47.871922 kernel: DMA32 empty Sep 13 00:02:47.871928 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff] Sep 13 00:02:47.871935 kernel: Movable zone start for each node Sep 13 00:02:47.871941 kernel: Early memory node ranges Sep 13 00:02:47.871947 kernel: node 0: [mem 0x0000000040000000-0x000000013676ffff] Sep 13 00:02:47.871954 kernel: node 0: [mem 0x0000000136770000-0x0000000136b3ffff] Sep 13 00:02:47.871960 kernel: node 0: [mem 0x0000000136b40000-0x0000000139e1ffff] Sep 13 00:02:47.871966 kernel: node 0: [mem 0x0000000139e20000-0x0000000139eaffff] Sep 13 00:02:47.871973 kernel: node 0: [mem 0x0000000139eb0000-0x0000000139ebffff] Sep 13 00:02:47.871979 kernel: node 0: [mem 0x0000000139ec0000-0x0000000139fdffff] Sep 13 00:02:47.871985 kernel: node 0: [mem 0x0000000139fe0000-0x0000000139ffffff] Sep 13 00:02:47.871991 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff] Sep 13 00:02:47.872000 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges Sep 13 00:02:47.872007 kernel: psci: probing for conduit method from ACPI. 
Sep 13 00:02:47.872013 kernel: psci: PSCIv1.1 detected in firmware. Sep 13 00:02:47.872022 kernel: psci: Using standard PSCI v0.2 function IDs Sep 13 00:02:47.872029 kernel: psci: Trusted OS migration not required Sep 13 00:02:47.872036 kernel: psci: SMC Calling Convention v1.1 Sep 13 00:02:47.872044 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Sep 13 00:02:47.872051 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976 Sep 13 00:02:47.872058 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096 Sep 13 00:02:47.872065 kernel: pcpu-alloc: [0] 0 [0] 1 Sep 13 00:02:47.872071 kernel: Detected PIPT I-cache on CPU0 Sep 13 00:02:47.872078 kernel: CPU features: detected: GIC system register CPU interface Sep 13 00:02:47.872085 kernel: CPU features: detected: Hardware dirty bit management Sep 13 00:02:47.872092 kernel: CPU features: detected: Spectre-v4 Sep 13 00:02:47.872098 kernel: CPU features: detected: Spectre-BHB Sep 13 00:02:47.872105 kernel: CPU features: kernel page table isolation forced ON by KASLR Sep 13 00:02:47.872113 kernel: CPU features: detected: Kernel page table isolation (KPTI) Sep 13 00:02:47.872120 kernel: CPU features: detected: ARM erratum 1418040 Sep 13 00:02:47.872127 kernel: CPU features: detected: SSBS not fully self-synchronizing Sep 13 00:02:47.872152 kernel: alternatives: applying boot alternatives Sep 13 00:02:47.872162 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=e1b46f3c9e154636c32f6cde6e746a00a6b37ca7432cb4e16d172c05f584a8c9 Sep 13 00:02:47.872169 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Sep 13 00:02:47.872176 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Sep 13 00:02:47.872183 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 13 00:02:47.872190 kernel: Fallback order for Node 0: 0 Sep 13 00:02:47.872197 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1008000 Sep 13 00:02:47.872203 kernel: Policy zone: Normal Sep 13 00:02:47.872212 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 13 00:02:47.872219 kernel: software IO TLB: area num 2. Sep 13 00:02:47.872226 kernel: software IO TLB: mapped [mem 0x00000000fbfff000-0x00000000fffff000] (64MB) Sep 13 00:02:47.872281 kernel: Memory: 3882744K/4096000K available (10304K kernel code, 2186K rwdata, 8108K rodata, 39488K init, 897K bss, 213256K reserved, 0K cma-reserved) Sep 13 00:02:47.872289 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Sep 13 00:02:47.872296 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 13 00:02:47.872303 kernel: rcu: RCU event tracing is enabled. Sep 13 00:02:47.872310 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Sep 13 00:02:47.872317 kernel: Trampoline variant of Tasks RCU enabled. Sep 13 00:02:47.872324 kernel: Tracing variant of Tasks RCU enabled. Sep 13 00:02:47.872330 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Sep 13 00:02:47.872340 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Sep 13 00:02:47.872347 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Sep 13 00:02:47.872354 kernel: GICv3: 256 SPIs implemented Sep 13 00:02:47.872360 kernel: GICv3: 0 Extended SPIs implemented Sep 13 00:02:47.872367 kernel: Root IRQ handler: gic_handle_irq Sep 13 00:02:47.872374 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Sep 13 00:02:47.872380 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Sep 13 00:02:47.872387 kernel: ITS [mem 0x08080000-0x0809ffff] Sep 13 00:02:47.872394 kernel: ITS@0x0000000008080000: allocated 8192 Devices @1000c0000 (indirect, esz 8, psz 64K, shr 1) Sep 13 00:02:47.872401 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @1000d0000 (flat, esz 8, psz 64K, shr 1) Sep 13 00:02:47.872408 kernel: GICv3: using LPI property table @0x00000001000e0000 Sep 13 00:02:47.872415 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000001000f0000 Sep 13 00:02:47.872423 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Sep 13 00:02:47.872430 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 13 00:02:47.872437 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Sep 13 00:02:47.872444 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Sep 13 00:02:47.872451 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Sep 13 00:02:47.872467 kernel: Console: colour dummy device 80x25 Sep 13 00:02:47.872474 kernel: ACPI: Core revision 20230628 Sep 13 00:02:47.872482 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Sep 13 00:02:47.872489 kernel: pid_max: default: 32768 minimum: 301 Sep 13 00:02:47.872496 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Sep 13 00:02:47.872505 kernel: landlock: Up and running. Sep 13 00:02:47.872512 kernel: SELinux: Initializing. Sep 13 00:02:47.872519 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 13 00:02:47.872526 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 13 00:02:47.872533 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Sep 13 00:02:47.872663 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Sep 13 00:02:47.872671 kernel: rcu: Hierarchical SRCU implementation. Sep 13 00:02:47.872678 kernel: rcu: Max phase no-delay instances is 400. Sep 13 00:02:47.872685 kernel: Platform MSI: ITS@0x8080000 domain created Sep 13 00:02:47.872696 kernel: PCI/MSI: ITS@0x8080000 domain created Sep 13 00:02:47.872702 kernel: Remapping and enabling EFI services. Sep 13 00:02:47.872710 kernel: smp: Bringing up secondary CPUs ... Sep 13 00:02:47.872717 kernel: Detected PIPT I-cache on CPU1 Sep 13 00:02:47.872724 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Sep 13 00:02:47.872731 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100100000 Sep 13 00:02:47.872738 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 13 00:02:47.872745 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Sep 13 00:02:47.872752 kernel: smp: Brought up 1 node, 2 CPUs Sep 13 00:02:47.872758 kernel: SMP: Total of 2 processors activated. 
Sep 13 00:02:47.872767 kernel: CPU features: detected: 32-bit EL0 Support Sep 13 00:02:47.872775 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Sep 13 00:02:47.872787 kernel: CPU features: detected: Common not Private translations Sep 13 00:02:47.872796 kernel: CPU features: detected: CRC32 instructions Sep 13 00:02:47.872803 kernel: CPU features: detected: Enhanced Virtualization Traps Sep 13 00:02:47.872811 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Sep 13 00:02:47.872818 kernel: CPU features: detected: LSE atomic instructions Sep 13 00:02:47.872825 kernel: CPU features: detected: Privileged Access Never Sep 13 00:02:47.872834 kernel: CPU features: detected: RAS Extension Support Sep 13 00:02:47.872843 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Sep 13 00:02:47.872850 kernel: CPU: All CPU(s) started at EL1 Sep 13 00:02:47.872858 kernel: alternatives: applying system-wide alternatives Sep 13 00:02:47.872865 kernel: devtmpfs: initialized Sep 13 00:02:47.872872 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 13 00:02:47.872880 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Sep 13 00:02:47.872887 kernel: pinctrl core: initialized pinctrl subsystem Sep 13 00:02:47.872897 kernel: SMBIOS 3.0.0 present. Sep 13 00:02:47.872904 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017 Sep 13 00:02:47.872911 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 13 00:02:47.872919 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Sep 13 00:02:47.872926 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Sep 13 00:02:47.872934 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Sep 13 00:02:47.872941 kernel: audit: initializing netlink subsys (disabled) Sep 13 00:02:47.872948 kernel: audit: type=2000 audit(0.012:1): state=initialized audit_enabled=0 res=1 Sep 13 00:02:47.872956 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 13 00:02:47.872965 kernel: cpuidle: using governor menu Sep 13 00:02:47.872973 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Sep 13 00:02:47.872980 kernel: ASID allocator initialised with 32768 entries Sep 13 00:02:47.872987 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 13 00:02:47.872995 kernel: Serial: AMBA PL011 UART driver Sep 13 00:02:47.873002 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Sep 13 00:02:47.873010 kernel: Modules: 0 pages in range for non-PLT usage Sep 13 00:02:47.873017 kernel: Modules: 508992 pages in range for PLT usage Sep 13 00:02:47.873024 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Sep 13 00:02:47.873034 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Sep 13 00:02:47.873041 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Sep 13 00:02:47.873049 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Sep 13 00:02:47.873056 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Sep 13 00:02:47.873063 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Sep 13 00:02:47.873071 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Sep 13 00:02:47.873078 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Sep 13 00:02:47.873085 kernel: ACPI: Added _OSI(Module Device) Sep 13 00:02:47.873093 kernel: ACPI: Added _OSI(Processor Device) Sep 13 00:02:47.873101 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 13 00:02:47.873109 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Sep 13 00:02:47.873116 kernel: ACPI: Interpreter enabled Sep 13 00:02:47.873123 kernel: ACPI: Using GIC for interrupt routing Sep 13 00:02:47.873130 kernel: ACPI: MCFG table detected, 1 entries Sep 13 00:02:47.873138 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Sep 13 00:02:47.873145 kernel: printk: console [ttyAMA0] enabled Sep 13 00:02:47.873153 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Sep 13 00:02:47.873324 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Sep 13 00:02:47.873420 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Sep 13 00:02:47.873503 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Sep 13 00:02:47.873643 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Sep 13 00:02:47.873725 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Sep 13 00:02:47.873735 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Sep 13 00:02:47.873743 kernel: PCI host bridge to bus 0000:00 Sep 13 00:02:47.873813 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Sep 13 00:02:47.873887 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Sep 13 00:02:47.873967 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Sep 13 00:02:47.874041 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Sep 13 00:02:47.874225 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 Sep 13 00:02:47.874389 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000 Sep 13 00:02:47.874474 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x11289000-0x11289fff] Sep 13 00:02:47.874591 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000600000-0x8000603fff 64bit pref] Sep 13 00:02:47.874683 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 Sep 13 00:02:47.874766 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x11288000-0x11288fff] Sep 13 
00:02:47.874860 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 Sep 13 00:02:47.875019 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x11287000-0x11287fff] Sep 13 00:02:47.875121 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 Sep 13 00:02:47.875205 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x11286000-0x11286fff] Sep 13 00:02:47.875305 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 Sep 13 00:02:47.875393 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x11285000-0x11285fff] Sep 13 00:02:47.875485 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 Sep 13 00:02:47.875574 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x11284000-0x11284fff] Sep 13 00:02:47.875669 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 Sep 13 00:02:47.875755 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x11283000-0x11283fff] Sep 13 00:02:47.875836 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 Sep 13 00:02:47.875911 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x11282000-0x11282fff] Sep 13 00:02:47.876000 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 Sep 13 00:02:47.876083 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x11281000-0x11281fff] Sep 13 00:02:47.876172 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400 Sep 13 00:02:47.876254 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x11280000-0x11280fff] Sep 13 00:02:47.876359 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002 Sep 13 00:02:47.876444 kernel: pci 0000:00:04.0: reg 0x10: [io 0x0000-0x0007] Sep 13 00:02:47.876534 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000 Sep 13 00:02:47.876641 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x11000000-0x11000fff] Sep 13 00:02:47.876729 kernel: pci 0000:01:00.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] Sep 13 00:02:47.876815 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref] Sep 13 00:02:47.876912 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330 Sep 13 00:02:47.876996 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x10e00000-0x10e03fff 64bit] Sep 13 00:02:47.877076 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000 Sep 13 00:02:47.877158 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x10c00000-0x10c00fff] Sep 13 00:02:47.877307 kernel: pci 0000:03:00.0: reg 0x20: [mem 0x8000100000-0x8000103fff 64bit pref] Sep 13 00:02:47.877406 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00 Sep 13 00:02:47.877494 kernel: pci 0000:04:00.0: reg 0x20: [mem 0x8000200000-0x8000203fff 64bit pref] Sep 13 00:02:47.877622 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00 Sep 13 00:02:47.877709 kernel: pci 0000:05:00.0: reg 0x20: [mem 0x8000300000-0x8000303fff 64bit pref] Sep 13 00:02:47.877796 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000 Sep 13 00:02:47.877881 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x10600000-0x10600fff] Sep 13 00:02:47.877959 kernel: pci 0000:06:00.0: reg 0x20: [mem 0x8000400000-0x8000403fff 64bit pref] Sep 13 00:02:47.878055 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000 Sep 13 00:02:47.878140 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x10400000-0x10400fff] Sep 13 00:02:47.878209 kernel: pci 0000:07:00.0: reg 0x20: [mem 0x8000500000-0x8000503fff 64bit pref] Sep 13 00:02:47.878294 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref] Sep 13 00:02:47.878370 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 Sep 13 00:02:47.878436 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit 
pref] to [bus 01] add_size 100000 add_align 100000 Sep 13 00:02:47.878505 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000 Sep 13 00:02:47.878596 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 Sep 13 00:02:47.878667 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 Sep 13 00:02:47.878734 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000 Sep 13 00:02:47.878810 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Sep 13 00:02:47.878876 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000 Sep 13 00:02:47.878942 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000 Sep 13 00:02:47.879011 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Sep 13 00:02:47.879077 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000 Sep 13 00:02:47.879147 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 Sep 13 00:02:47.879217 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000 Sep 13 00:02:47.879295 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000 Sep 13 00:02:47.879362 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x000fffff] to [bus 05] add_size 200000 add_align 100000 Sep 13 00:02:47.879686 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Sep 13 00:02:47.879787 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000 Sep 13 00:02:47.879854 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000 Sep 13 00:02:47.879931 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Sep 13 00:02:47.879997 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000 Sep 13 00:02:47.880061 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000 Sep 13 00:02:47.880130 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Sep 13 00:02:47.880198 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000 Sep 13 00:02:47.880321 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000 Sep 13 00:02:47.880398 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Sep 13 00:02:47.880465 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000 Sep 13 00:02:47.880535 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000 Sep 13 00:02:47.880625 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x10000000-0x101fffff] Sep 13 00:02:47.880692 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x8000000000-0x80001fffff 64bit pref] Sep 13 00:02:47.880759 kernel: pci 
0000:00:02.1: BAR 14: assigned [mem 0x10200000-0x103fffff] Sep 13 00:02:47.880851 kernel: pci 0000:00:02.1: BAR 15: assigned [mem 0x8000200000-0x80003fffff 64bit pref] Sep 13 00:02:47.880925 kernel: pci 0000:00:02.2: BAR 14: assigned [mem 0x10400000-0x105fffff] Sep 13 00:02:47.880992 kernel: pci 0000:00:02.2: BAR 15: assigned [mem 0x8000400000-0x80005fffff 64bit pref] Sep 13 00:02:47.881064 kernel: pci 0000:00:02.3: BAR 14: assigned [mem 0x10600000-0x107fffff] Sep 13 00:02:47.881130 kernel: pci 0000:00:02.3: BAR 15: assigned [mem 0x8000600000-0x80007fffff 64bit pref] Sep 13 00:02:47.881197 kernel: pci 0000:00:02.4: BAR 14: assigned [mem 0x10800000-0x109fffff] Sep 13 00:02:47.881284 kernel: pci 0000:00:02.4: BAR 15: assigned [mem 0x8000800000-0x80009fffff 64bit pref] Sep 13 00:02:47.881354 kernel: pci 0000:00:02.5: BAR 14: assigned [mem 0x10a00000-0x10bfffff] Sep 13 00:02:47.881424 kernel: pci 0000:00:02.5: BAR 15: assigned [mem 0x8000a00000-0x8000bfffff 64bit pref] Sep 13 00:02:47.881495 kernel: pci 0000:00:02.6: BAR 14: assigned [mem 0x10c00000-0x10dfffff] Sep 13 00:02:47.881888 kernel: pci 0000:00:02.6: BAR 15: assigned [mem 0x8000c00000-0x8000dfffff 64bit pref] Sep 13 00:02:47.881970 kernel: pci 0000:00:02.7: BAR 14: assigned [mem 0x10e00000-0x10ffffff] Sep 13 00:02:47.882035 kernel: pci 0000:00:02.7: BAR 15: assigned [mem 0x8000e00000-0x8000ffffff 64bit pref] Sep 13 00:02:47.882103 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x11000000-0x111fffff] Sep 13 00:02:47.882167 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x8001000000-0x80011fffff 64bit pref] Sep 13 00:02:47.882253 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8001200000-0x8001203fff 64bit pref] Sep 13 00:02:47.882335 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x11200000-0x11200fff] Sep 13 00:02:47.882402 kernel: pci 0000:00:02.0: BAR 0: assigned [mem 0x11201000-0x11201fff] Sep 13 00:02:47.882469 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff] Sep 13 00:02:47.882537 kernel: pci 0000:00:02.1: BAR 0: assigned [mem 0x11202000-0x11202fff] Sep 13 00:02:47.882640 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff] Sep 13 00:02:47.882711 kernel: pci 0000:00:02.2: BAR 0: assigned [mem 0x11203000-0x11203fff] Sep 13 00:02:47.882777 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff] Sep 13 00:02:47.882845 kernel: pci 0000:00:02.3: BAR 0: assigned [mem 0x11204000-0x11204fff] Sep 13 00:02:47.882918 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff] Sep 13 00:02:47.882987 kernel: pci 0000:00:02.4: BAR 0: assigned [mem 0x11205000-0x11205fff] Sep 13 00:02:47.883054 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff] Sep 13 00:02:47.883121 kernel: pci 0000:00:02.5: BAR 0: assigned [mem 0x11206000-0x11206fff] Sep 13 00:02:47.883187 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff] Sep 13 00:02:47.883305 kernel: pci 0000:00:02.6: BAR 0: assigned [mem 0x11207000-0x11207fff] Sep 13 00:02:47.883379 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff] Sep 13 00:02:47.883447 kernel: pci 0000:00:02.7: BAR 0: assigned [mem 0x11208000-0x11208fff] Sep 13 00:02:47.883519 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff] Sep 13 00:02:47.884093 kernel: pci 0000:00:03.0: BAR 0: assigned [mem 0x11209000-0x11209fff] Sep 13 00:02:47.884185 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x9000-0x9fff] Sep 13 00:02:47.884339 kernel: pci 0000:00:04.0: BAR 0: assigned [io 0xa000-0xa007] Sep 13 00:02:47.884428 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 
0x10000000-0x1007ffff pref] Sep 13 00:02:47.884498 kernel: pci 0000:01:00.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] Sep 13 00:02:47.884620 kernel: pci 0000:01:00.0: BAR 1: assigned [mem 0x10080000-0x10080fff] Sep 13 00:02:47.884696 kernel: pci 0000:00:02.0: PCI bridge to [bus 01] Sep 13 00:02:47.884775 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff] Sep 13 00:02:47.884841 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff] Sep 13 00:02:47.884907 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref] Sep 13 00:02:47.884981 kernel: pci 0000:02:00.0: BAR 0: assigned [mem 0x10200000-0x10203fff 64bit] Sep 13 00:02:47.885051 kernel: pci 0000:00:02.1: PCI bridge to [bus 02] Sep 13 00:02:47.885120 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff] Sep 13 00:02:47.885187 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff] Sep 13 00:02:47.885373 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref] Sep 13 00:02:47.885470 kernel: pci 0000:03:00.0: BAR 4: assigned [mem 0x8000400000-0x8000403fff 64bit pref] Sep 13 00:02:47.886688 kernel: pci 0000:03:00.0: BAR 1: assigned [mem 0x10400000-0x10400fff] Sep 13 00:02:47.886803 kernel: pci 0000:00:02.2: PCI bridge to [bus 03] Sep 13 00:02:47.886874 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff] Sep 13 00:02:47.886947 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff] Sep 13 00:02:47.887014 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref] Sep 13 00:02:47.887089 kernel: pci 0000:04:00.0: BAR 4: assigned [mem 0x8000600000-0x8000603fff 64bit pref] Sep 13 00:02:47.887160 kernel: pci 0000:00:02.3: PCI bridge to [bus 04] Sep 13 00:02:47.887227 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff] Sep 13 00:02:47.887343 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff] Sep 13 00:02:47.887411 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref] Sep 13 00:02:47.887488 kernel: pci 0000:05:00.0: BAR 4: assigned [mem 0x8000800000-0x8000803fff 64bit pref] Sep 13 00:02:47.887588 kernel: pci 0000:00:02.4: PCI bridge to [bus 05] Sep 13 00:02:47.887663 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff] Sep 13 00:02:47.887731 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff] Sep 13 00:02:47.887797 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref] Sep 13 00:02:47.887952 kernel: pci 0000:06:00.0: BAR 4: assigned [mem 0x8000a00000-0x8000a03fff 64bit pref] Sep 13 00:02:47.888041 kernel: pci 0000:06:00.0: BAR 1: assigned [mem 0x10a00000-0x10a00fff] Sep 13 00:02:47.888114 kernel: pci 0000:00:02.5: PCI bridge to [bus 06] Sep 13 00:02:47.888184 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff] Sep 13 00:02:47.888280 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff] Sep 13 00:02:47.888352 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref] Sep 13 00:02:47.888431 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x10c00000-0x10c7ffff pref] Sep 13 00:02:47.888501 kernel: pci 0000:07:00.0: BAR 4: assigned [mem 0x8000c00000-0x8000c03fff 64bit pref] Sep 13 00:02:47.889669 kernel: pci 0000:07:00.0: BAR 1: assigned [mem 0x10c80000-0x10c80fff] Sep 13 00:02:47.889763 kernel: pci 0000:00:02.6: PCI bridge to [bus 07] Sep 13 00:02:47.889836 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff] Sep 13 00:02:47.889903 kernel: pci 0000:00:02.6: 
bridge window [mem 0x10c00000-0x10dfffff] Sep 13 00:02:47.889977 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref] Sep 13 00:02:47.890048 kernel: pci 0000:00:02.7: PCI bridge to [bus 08] Sep 13 00:02:47.890115 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff] Sep 13 00:02:47.890182 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff] Sep 13 00:02:47.890306 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref] Sep 13 00:02:47.890386 kernel: pci 0000:00:03.0: PCI bridge to [bus 09] Sep 13 00:02:47.890455 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff] Sep 13 00:02:47.890521 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff] Sep 13 00:02:47.890608 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref] Sep 13 00:02:47.890686 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Sep 13 00:02:47.890748 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Sep 13 00:02:47.890806 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Sep 13 00:02:47.890886 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff] Sep 13 00:02:47.890950 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff] Sep 13 00:02:47.891022 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref] Sep 13 00:02:47.891098 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff] Sep 13 00:02:47.891248 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff] Sep 13 00:02:47.891403 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref] Sep 13 00:02:47.891533 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff] Sep 13 00:02:47.892812 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff] Sep 13 00:02:47.892876 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref] Sep 13 00:02:47.892956 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff] Sep 13 00:02:47.893017 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff] Sep 13 00:02:47.893081 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref] Sep 13 00:02:47.893161 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff] Sep 13 00:02:47.893224 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff] Sep 13 00:02:47.893314 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref] Sep 13 00:02:47.893387 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff] Sep 13 00:02:47.893455 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff] Sep 13 00:02:47.893518 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref] Sep 13 00:02:47.893616 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff] Sep 13 00:02:47.893683 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff] Sep 13 00:02:47.893751 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref] Sep 13 00:02:47.893820 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff] Sep 13 00:02:47.893884 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff] Sep 13 00:02:47.893946 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref] Sep 13 00:02:47.894020 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff] Sep 13 00:02:47.894082 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff] Sep 13 00:02:47.894374 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref] Sep 13 00:02:47.894393 
kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Sep 13 00:02:47.894401 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Sep 13 00:02:47.894409 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Sep 13 00:02:47.894417 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Sep 13 00:02:47.894425 kernel: iommu: Default domain type: Translated Sep 13 00:02:47.894433 kernel: iommu: DMA domain TLB invalidation policy: strict mode Sep 13 00:02:47.895850 kernel: efivars: Registered efivars operations Sep 13 00:02:47.895868 kernel: vgaarb: loaded Sep 13 00:02:47.895877 kernel: clocksource: Switched to clocksource arch_sys_counter Sep 13 00:02:47.895893 kernel: VFS: Disk quotas dquot_6.6.0 Sep 13 00:02:47.895902 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 13 00:02:47.895909 kernel: pnp: PnP ACPI init Sep 13 00:02:47.896062 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Sep 13 00:02:47.896078 kernel: pnp: PnP ACPI: found 1 devices Sep 13 00:02:47.896087 kernel: NET: Registered PF_INET protocol family Sep 13 00:02:47.896097 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Sep 13 00:02:47.896106 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Sep 13 00:02:47.896117 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 13 00:02:47.896125 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Sep 13 00:02:47.896133 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Sep 13 00:02:47.896141 kernel: TCP: Hash tables configured (established 32768 bind 32768) Sep 13 00:02:47.896149 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 13 00:02:47.896157 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 13 00:02:47.896165 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 13 00:02:47.896266 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002) Sep 13 00:02:47.896283 kernel: PCI: CLS 0 bytes, default 64 Sep 13 00:02:47.896295 kernel: kvm [1]: HYP mode not available Sep 13 00:02:47.896303 kernel: Initialise system trusted keyrings Sep 13 00:02:47.896311 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Sep 13 00:02:47.896319 kernel: Key type asymmetric registered Sep 13 00:02:47.896327 kernel: Asymmetric key parser 'x509' registered Sep 13 00:02:47.896335 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Sep 13 00:02:47.896343 kernel: io scheduler mq-deadline registered Sep 13 00:02:47.896350 kernel: io scheduler kyber registered Sep 13 00:02:47.896359 kernel: io scheduler bfq registered Sep 13 00:02:47.896369 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Sep 13 00:02:47.896448 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50 Sep 13 00:02:47.896520 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50 Sep 13 00:02:47.896630 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 13 00:02:47.896707 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51 Sep 13 00:02:47.896778 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51 Sep 13 00:02:47.896866 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 13 00:02:47.897027 kernel: pcieport 
0000:00:02.2: PME: Signaling with IRQ 52 Sep 13 00:02:47.897117 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52 Sep 13 00:02:47.897185 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 13 00:02:47.897346 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53 Sep 13 00:02:47.897442 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53 Sep 13 00:02:47.897517 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 13 00:02:47.898631 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54 Sep 13 00:02:47.898735 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54 Sep 13 00:02:47.898809 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 13 00:02:47.898894 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55 Sep 13 00:02:47.898974 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55 Sep 13 00:02:47.899064 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 13 00:02:47.899152 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56 Sep 13 00:02:47.899228 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56 Sep 13 00:02:47.899341 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 13 00:02:47.899417 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57 Sep 13 00:02:47.899490 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57 Sep 13 00:02:47.899593 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 13 00:02:47.899607 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38 Sep 13 00:02:47.899676 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58 Sep 13 00:02:47.899746 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58 Sep 13 00:02:47.899811 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 13 00:02:47.899821 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Sep 13 00:02:47.899830 kernel: ACPI: button: Power Button [PWRB] Sep 13 00:02:47.899840 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Sep 13 00:02:47.899912 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002) Sep 13 00:02:47.899989 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002) Sep 13 00:02:47.899999 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 13 00:02:47.900007 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Sep 13 00:02:47.900075 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001) Sep 13 00:02:47.900086 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A Sep 13 00:02:47.900094 kernel: thunder_xcv, ver 1.0 Sep 13 00:02:47.900102 kernel: thunder_bgx, ver 1.0 Sep 13 00:02:47.900112 kernel: nicpf, ver 1.0 Sep 13 00:02:47.900120 kernel: nicvf, ver 1.0 Sep 13 00:02:47.900200 kernel: rtc-efi rtc-efi.0: registered as rtc0 Sep 13 00:02:47.900312 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-13T00:02:47 UTC (1757721767) Sep 13 00:02:47.900325 kernel: hid: raw HID events driver (C) 
Jiri Kosina Sep 13 00:02:47.900333 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Sep 13 00:02:47.900341 kernel: watchdog: Delayed init of the lockup detector failed: -19 Sep 13 00:02:47.900349 kernel: watchdog: Hard watchdog permanently disabled Sep 13 00:02:47.900361 kernel: NET: Registered PF_INET6 protocol family Sep 13 00:02:47.900369 kernel: Segment Routing with IPv6 Sep 13 00:02:47.900377 kernel: In-situ OAM (IOAM) with IPv6 Sep 13 00:02:47.900385 kernel: NET: Registered PF_PACKET protocol family Sep 13 00:02:47.900392 kernel: Key type dns_resolver registered Sep 13 00:02:47.900400 kernel: registered taskstats version 1 Sep 13 00:02:47.900408 kernel: Loading compiled-in X.509 certificates Sep 13 00:02:47.900416 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.106-flatcar: 036ad4721a31543be5c000f2896b40d1e5515c6e' Sep 13 00:02:47.900423 kernel: Key type .fscrypt registered Sep 13 00:02:47.900432 kernel: Key type fscrypt-provisioning registered Sep 13 00:02:47.900440 kernel: ima: No TPM chip found, activating TPM-bypass! Sep 13 00:02:47.900448 kernel: ima: Allocated hash algorithm: sha1 Sep 13 00:02:47.900456 kernel: ima: No architecture policies found Sep 13 00:02:47.900464 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Sep 13 00:02:47.900471 kernel: clk: Disabling unused clocks Sep 13 00:02:47.900479 kernel: Freeing unused kernel memory: 39488K Sep 13 00:02:47.900487 kernel: Run /init as init process Sep 13 00:02:47.900495 kernel: with arguments: Sep 13 00:02:47.900504 kernel: /init Sep 13 00:02:47.900512 kernel: with environment: Sep 13 00:02:47.900519 kernel: HOME=/ Sep 13 00:02:47.900527 kernel: TERM=linux Sep 13 00:02:47.900534 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 13 00:02:47.900634 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 13 00:02:47.900646 systemd[1]: Detected virtualization kvm. Sep 13 00:02:47.900655 systemd[1]: Detected architecture arm64. Sep 13 00:02:47.900667 systemd[1]: Running in initrd. Sep 13 00:02:47.900675 systemd[1]: No hostname configured, using default hostname. Sep 13 00:02:47.900683 systemd[1]: Hostname set to <localhost>. Sep 13 00:02:47.900692 systemd[1]: Initializing machine ID from VM UUID. Sep 13 00:02:47.900700 systemd[1]: Queued start job for default target initrd.target. Sep 13 00:02:47.900709 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 13 00:02:47.900717 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 13 00:02:47.900727 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 13 00:02:47.900737 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 13 00:02:47.900746 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 13 00:02:47.900755 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 13 00:02:47.900765 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 13 00:02:47.900775 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 13 00:02:47.900784 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 13 00:02:47.900792 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 13 00:02:47.900803 systemd[1]: Reached target paths.target - Path Units. Sep 13 00:02:47.900811 systemd[1]: Reached target slices.target - Slice Units. Sep 13 00:02:47.900820 systemd[1]: Reached target swap.target - Swaps. Sep 13 00:02:47.900828 systemd[1]: Reached target timers.target - Timer Units. Sep 13 00:02:47.900837 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 13 00:02:47.900845 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 13 00:02:47.900854 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 13 00:02:47.900862 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Sep 13 00:02:47.900872 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 13 00:02:47.900881 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 13 00:02:47.900889 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 13 00:02:47.900897 systemd[1]: Reached target sockets.target - Socket Units. Sep 13 00:02:47.900906 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 13 00:02:47.900914 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 13 00:02:47.900923 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 13 00:02:47.900931 systemd[1]: Starting systemd-fsck-usr.service... Sep 13 00:02:47.900939 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 13 00:02:47.900949 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 13 00:02:47.900958 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 13 00:02:47.900996 systemd-journald[236]: Collecting audit messages is disabled. Sep 13 00:02:47.901017 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 13 00:02:47.901028 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 13 00:02:47.901037 systemd[1]: Finished systemd-fsck-usr.service. Sep 13 00:02:47.901046 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 13 00:02:47.901055 systemd-journald[236]: Journal started Sep 13 00:02:47.901077 systemd-journald[236]: Runtime Journal (/run/log/journal/8ef12b5113914c74b52f8b69cea9fb2b) is 8.0M, max 76.6M, 68.6M free. Sep 13 00:02:47.903893 systemd-modules-load[237]: Inserted module 'overlay' Sep 13 00:02:47.909336 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 13 00:02:47.912849 systemd[1]: Started systemd-journald.service - Journal Service. Sep 13 00:02:47.916757 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 13 00:02:47.925576 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 13 00:02:47.925637 kernel: Bridge firewalling registered Sep 13 00:02:47.925286 systemd-modules-load[237]: Inserted module 'br_netfilter' Sep 13 00:02:47.931840 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Sep 13 00:02:47.934764 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 13 00:02:47.941528 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 13 00:02:47.942534 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 13 00:02:47.948272 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 13 00:02:47.955699 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 13 00:02:47.965749 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 13 00:02:47.968489 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 13 00:02:47.975791 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 13 00:02:47.981173 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 13 00:02:47.989720 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 13 00:02:47.996471 dracut-cmdline[268]: dracut-dracut-053 Sep 13 00:02:48.002185 dracut-cmdline[268]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=e1b46f3c9e154636c32f6cde6e746a00a6b37ca7432cb4e16d172c05f584a8c9 Sep 13 00:02:48.032524 systemd-resolved[274]: Positive Trust Anchors: Sep 13 00:02:48.032556 systemd-resolved[274]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 13 00:02:48.032588 systemd-resolved[274]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 13 00:02:48.038865 systemd-resolved[274]: Defaulting to hostname 'linux'. Sep 13 00:02:48.040080 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 13 00:02:48.041313 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 13 00:02:48.106652 kernel: SCSI subsystem initialized Sep 13 00:02:48.111592 kernel: Loading iSCSI transport class v2.0-870. Sep 13 00:02:48.119597 kernel: iscsi: registered transport (tcp) Sep 13 00:02:48.133589 kernel: iscsi: registered transport (qla4xxx) Sep 13 00:02:48.133657 kernel: QLogic iSCSI HBA Driver Sep 13 00:02:48.179610 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 13 00:02:48.187842 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 13 00:02:48.208604 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Sep 13 00:02:48.208706 kernel: device-mapper: uevent: version 1.0.3 Sep 13 00:02:48.208747 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Sep 13 00:02:48.258636 kernel: raid6: neonx8 gen() 15659 MB/s Sep 13 00:02:48.275609 kernel: raid6: neonx4 gen() 15597 MB/s Sep 13 00:02:48.292600 kernel: raid6: neonx2 gen() 13145 MB/s Sep 13 00:02:48.309606 kernel: raid6: neonx1 gen() 10450 MB/s Sep 13 00:02:48.326664 kernel: raid6: int64x8 gen() 6916 MB/s Sep 13 00:02:48.343639 kernel: raid6: int64x4 gen() 7316 MB/s Sep 13 00:02:48.360580 kernel: raid6: int64x2 gen() 6095 MB/s Sep 13 00:02:48.377610 kernel: raid6: int64x1 gen() 5041 MB/s Sep 13 00:02:48.377686 kernel: raid6: using algorithm neonx8 gen() 15659 MB/s Sep 13 00:02:48.394586 kernel: raid6: .... xor() 12002 MB/s, rmw enabled Sep 13 00:02:48.394631 kernel: raid6: using neon recovery algorithm Sep 13 00:02:48.399841 kernel: xor: measuring software checksum speed Sep 13 00:02:48.399907 kernel: 8regs : 19821 MB/sec Sep 13 00:02:48.399931 kernel: 32regs : 19249 MB/sec Sep 13 00:02:48.399966 kernel: arm64_neon : 26919 MB/sec Sep 13 00:02:48.400592 kernel: xor: using function: arm64_neon (26919 MB/sec) Sep 13 00:02:48.451691 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 13 00:02:48.465613 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 13 00:02:48.472804 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 13 00:02:48.492791 systemd-udevd[454]: Using default interface naming scheme 'v255'. Sep 13 00:02:48.496581 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 13 00:02:48.506864 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 13 00:02:48.524285 dracut-pre-trigger[460]: rd.md=0: removing MD RAID activation Sep 13 00:02:48.567322 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 13 00:02:48.572878 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 13 00:02:48.626728 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 13 00:02:48.633752 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 13 00:02:48.665958 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 13 00:02:48.668206 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 13 00:02:48.669712 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 13 00:02:48.670384 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 13 00:02:48.681852 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 13 00:02:48.699089 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 13 00:02:48.724586 kernel: scsi host0: Virtio SCSI HBA Sep 13 00:02:48.738261 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5 Sep 13 00:02:48.738361 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Sep 13 00:02:48.762614 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 13 00:02:48.762743 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 13 00:02:48.766307 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 13 00:02:48.767328 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Sep 13 00:02:48.767497 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 13 00:02:48.768623 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 13 00:02:48.782940 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 13 00:02:48.789198 kernel: ACPI: bus type USB registered Sep 13 00:02:48.789278 kernel: usbcore: registered new interface driver usbfs Sep 13 00:02:48.789292 kernel: usbcore: registered new interface driver hub Sep 13 00:02:48.789302 kernel: usbcore: registered new device driver usb Sep 13 00:02:48.795967 kernel: sr 0:0:0:0: Power-on or device reset occurred Sep 13 00:02:48.803629 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray Sep 13 00:02:48.803822 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Sep 13 00:02:48.808579 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0 Sep 13 00:02:48.810666 kernel: sd 0:0:0:1: Power-on or device reset occurred Sep 13 00:02:48.810860 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB) Sep 13 00:02:48.811526 kernel: sd 0:0:0:1: [sda] Write Protect is off Sep 13 00:02:48.811771 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08 Sep 13 00:02:48.811867 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Sep 13 00:02:48.815902 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 13 00:02:48.819354 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 13 00:02:48.819380 kernel: GPT:17805311 != 80003071 Sep 13 00:02:48.819390 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 13 00:02:48.821760 kernel: GPT:17805311 != 80003071 Sep 13 00:02:48.821880 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 13 00:02:48.821896 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 13 00:02:48.824638 kernel: sd 0:0:0:1: [sda] Attached SCSI disk Sep 13 00:02:48.825774 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 13 00:02:48.842790 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Sep 13 00:02:48.843029 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Sep 13 00:02:48.844585 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Sep 13 00:02:48.847575 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Sep 13 00:02:48.847800 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Sep 13 00:02:48.847886 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Sep 13 00:02:48.850047 kernel: hub 1-0:1.0: USB hub found Sep 13 00:02:48.850291 kernel: hub 1-0:1.0: 4 ports detected Sep 13 00:02:48.850383 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Sep 13 00:02:48.852625 kernel: hub 2-0:1.0: USB hub found Sep 13 00:02:48.853383 kernel: hub 2-0:1.0: 4 ports detected Sep 13 00:02:48.861127 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 13 00:02:48.893121 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. 
Sep 13 00:02:48.898580 kernel: BTRFS: device fsid 29bc4da8-c689-46a2-a16a-b7bbc722db77 devid 1 transid 37 /dev/sda3 scanned by (udev-worker) (512) Sep 13 00:02:48.900603 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (497) Sep 13 00:02:48.906109 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Sep 13 00:02:48.918165 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Sep 13 00:02:48.919974 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Sep 13 00:02:48.928896 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Sep 13 00:02:48.936768 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 13 00:02:48.945776 disk-uuid[571]: Primary Header is updated. Sep 13 00:02:48.945776 disk-uuid[571]: Secondary Entries is updated. Sep 13 00:02:48.945776 disk-uuid[571]: Secondary Header is updated. Sep 13 00:02:48.960581 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 13 00:02:48.965589 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 13 00:02:49.088580 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Sep 13 00:02:49.226580 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1 Sep 13 00:02:49.226680 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Sep 13 00:02:49.227810 kernel: usbcore: registered new interface driver usbhid Sep 13 00:02:49.228556 kernel: usbhid: USB HID core driver Sep 13 00:02:49.334611 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd Sep 13 00:02:49.470588 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2 Sep 13 00:02:49.526469 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0 Sep 13 00:02:49.969629 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 13 00:02:49.969705 disk-uuid[572]: The operation has completed successfully. Sep 13 00:02:50.030261 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 13 00:02:50.031597 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 13 00:02:50.047892 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 13 00:02:50.055203 sh[586]: Success Sep 13 00:02:50.068564 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Sep 13 00:02:50.128604 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 13 00:02:50.138670 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 13 00:02:50.140614 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
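verity-setup.service assembles /dev/mapper/usr as a dm-verity device: every data block of the /usr partition is hashed, the digests are hashed again into a tree, and the resulting root hash must match the verity.usrhash= value from the kernel command line. The sketch below shows only the core idea; real dm-verity additionally uses a salt, a superblock and a fixed on-disk tree layout, so treat this as a simplified, assumption-laden illustration:

import hashlib

def verity_root_hash(blocks):
    # hash every data block, then hash pairs of digests upward until one
    # root digest remains; dm-verity compares this against the expected
    # root hash and rejects reads that do not verify
    level = [hashlib.sha256(b).digest() for b in blocks]
    while len(level) > 1:
        level = [hashlib.sha256(b"".join(level[i:i + 2])).digest()
                 for i in range(0, len(level), 2)]
    return level[0].hex()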
Sep 13 00:02:50.170048 kernel: BTRFS info (device dm-0): first mount of filesystem 29bc4da8-c689-46a2-a16a-b7bbc722db77 Sep 13 00:02:50.170129 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Sep 13 00:02:50.170148 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Sep 13 00:02:50.170594 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 13 00:02:50.171625 kernel: BTRFS info (device dm-0): using free space tree Sep 13 00:02:50.178737 kernel: BTRFS info (device dm-0): enabling ssd optimizations Sep 13 00:02:50.181425 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 13 00:02:50.183286 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 13 00:02:50.189860 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 13 00:02:50.193312 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 13 00:02:50.209427 kernel: BTRFS info (device sda6): first mount of filesystem abbcf5a1-cc71-42ce-94f9-860f3aeda368 Sep 13 00:02:50.209486 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Sep 13 00:02:50.209503 kernel: BTRFS info (device sda6): using free space tree Sep 13 00:02:50.215592 kernel: BTRFS info (device sda6): enabling ssd optimizations Sep 13 00:02:50.215655 kernel: BTRFS info (device sda6): auto enabling async discard Sep 13 00:02:50.229627 kernel: BTRFS info (device sda6): last unmount of filesystem abbcf5a1-cc71-42ce-94f9-860f3aeda368 Sep 13 00:02:50.230053 systemd[1]: mnt-oem.mount: Deactivated successfully. Sep 13 00:02:50.238033 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 13 00:02:50.246824 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 13 00:02:50.338093 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 13 00:02:50.346122 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 13 00:02:50.350259 ignition[681]: Ignition 2.19.0 Sep 13 00:02:50.350273 ignition[681]: Stage: fetch-offline Sep 13 00:02:50.352322 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 13 00:02:50.350313 ignition[681]: no configs at "/usr/lib/ignition/base.d" Sep 13 00:02:50.350322 ignition[681]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Sep 13 00:02:50.350479 ignition[681]: parsed url from cmdline: "" Sep 13 00:02:50.350483 ignition[681]: no config URL provided Sep 13 00:02:50.350487 ignition[681]: reading system config file "/usr/lib/ignition/user.ign" Sep 13 00:02:50.350494 ignition[681]: no config at "/usr/lib/ignition/user.ign" Sep 13 00:02:50.350499 ignition[681]: failed to fetch config: resource requires networking Sep 13 00:02:50.350713 ignition[681]: Ignition finished successfully Sep 13 00:02:50.374631 systemd-networkd[773]: lo: Link UP Sep 13 00:02:50.374644 systemd-networkd[773]: lo: Gained carrier Sep 13 00:02:50.376640 systemd-networkd[773]: Enumeration completed Sep 13 00:02:50.376821 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 13 00:02:50.377649 systemd[1]: Reached target network.target - Network. Sep 13 00:02:50.379384 systemd-networkd[773]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Sep 13 00:02:50.379387 systemd-networkd[773]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 13 00:02:50.380224 systemd-networkd[773]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 13 00:02:50.380228 systemd-networkd[773]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 13 00:02:50.380799 systemd-networkd[773]: eth0: Link UP Sep 13 00:02:50.380802 systemd-networkd[773]: eth0: Gained carrier Sep 13 00:02:50.380810 systemd-networkd[773]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 13 00:02:50.384951 systemd-networkd[773]: eth1: Link UP Sep 13 00:02:50.384956 systemd-networkd[773]: eth1: Gained carrier Sep 13 00:02:50.384967 systemd-networkd[773]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 13 00:02:50.389745 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Sep 13 00:02:50.404863 ignition[776]: Ignition 2.19.0 Sep 13 00:02:50.404873 ignition[776]: Stage: fetch Sep 13 00:02:50.405057 ignition[776]: no configs at "/usr/lib/ignition/base.d" Sep 13 00:02:50.405068 ignition[776]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Sep 13 00:02:50.405154 ignition[776]: parsed url from cmdline: "" Sep 13 00:02:50.405158 ignition[776]: no config URL provided Sep 13 00:02:50.405163 ignition[776]: reading system config file "/usr/lib/ignition/user.ign" Sep 13 00:02:50.405170 ignition[776]: no config at "/usr/lib/ignition/user.ign" Sep 13 00:02:50.405236 ignition[776]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1 Sep 13 00:02:50.405906 ignition[776]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable Sep 13 00:02:50.418652 systemd-networkd[773]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1 Sep 13 00:02:50.452637 systemd-networkd[773]: eth0: DHCPv4 address 49.13.17.32/32, gateway 172.31.1.1 acquired from 172.31.1.1 Sep 13 00:02:50.606727 ignition[776]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2 Sep 13 00:02:50.615171 ignition[776]: GET result: OK Sep 13 00:02:50.615317 ignition[776]: parsing config with SHA512: f285b2d8816f02c2b72b891b111ec465bcbc03c51ebcb5dc261bbe130b068132a7faf7dd35c0a679ab24f85d608890fbd87cc6af66a4cc6d4bf69c85c7f2046a Sep 13 00:02:50.620608 unknown[776]: fetched base config from "system" Sep 13 00:02:50.621038 ignition[776]: fetch: fetch complete Sep 13 00:02:50.620619 unknown[776]: fetched base config from "system" Sep 13 00:02:50.621045 ignition[776]: fetch: fetch passed Sep 13 00:02:50.620626 unknown[776]: fetched user config from "hetzner" Sep 13 00:02:50.621095 ignition[776]: Ignition finished successfully Sep 13 00:02:50.623116 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Sep 13 00:02:50.631956 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 13 00:02:50.646688 ignition[783]: Ignition 2.19.0 Sep 13 00:02:50.646699 ignition[783]: Stage: kargs Sep 13 00:02:50.646903 ignition[783]: no configs at "/usr/lib/ignition/base.d" Sep 13 00:02:50.646914 ignition[783]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Sep 13 00:02:50.651724 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). 
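The fetch stage above shows Ignition retrying the Hetzner userdata endpoint: attempt #1 fails while the interfaces are still coming up, attempt #2 succeeds once DHCP has configured eth0, and the retrieved config is identified by its SHA512. A rough Python re-creation of that loop (illustrative only; Ignition itself is a Go binary and handles backoff and config merging differently):

import hashlib
import time
import urllib.error
import urllib.request

USERDATA_URL = "http://169.254.169.254/hetzner/v1/userdata"

def fetch_userdata(attempts=5, delay=2.0):
    for attempt in range(1, attempts + 1):
        try:
            with urllib.request.urlopen(USERDATA_URL, timeout=10) as resp:
                data = resp.read()
            print(f"GET result: OK (attempt #{attempt})")
            print("parsing config with SHA512:", hashlib.sha512(data).hexdigest())
            return data
        except (urllib.error.URLError, OSError) as err:
            # e.g. "network is unreachable" before DHCP has finished
            print(f"GET error (attempt #{attempt}): {err}")
            time.sleep(delay)
    raise RuntimeError("failed to fetch config")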
Sep 13 00:02:50.647992 ignition[783]: kargs: kargs passed Sep 13 00:02:50.648050 ignition[783]: Ignition finished successfully Sep 13 00:02:50.657952 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Sep 13 00:02:50.675300 ignition[790]: Ignition 2.19.0 Sep 13 00:02:50.675313 ignition[790]: Stage: disks Sep 13 00:02:50.675515 ignition[790]: no configs at "/usr/lib/ignition/base.d" Sep 13 00:02:50.675525 ignition[790]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Sep 13 00:02:50.676586 ignition[790]: disks: disks passed Sep 13 00:02:50.676662 ignition[790]: Ignition finished successfully Sep 13 00:02:50.679041 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 13 00:02:50.680759 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 13 00:02:50.681642 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 13 00:02:50.682720 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 13 00:02:50.683865 systemd[1]: Reached target sysinit.target - System Initialization. Sep 13 00:02:50.685294 systemd[1]: Reached target basic.target - Basic System. Sep 13 00:02:50.691835 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 13 00:02:50.708659 systemd-fsck[798]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Sep 13 00:02:50.713325 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 13 00:02:50.721787 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 13 00:02:50.779590 kernel: EXT4-fs (sda9): mounted filesystem d35fd879-6758-447b-9fdd-bb21dd7c5b2b r/w with ordered data mode. Quota mode: none. Sep 13 00:02:50.781491 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 13 00:02:50.784024 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 13 00:02:50.794726 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 13 00:02:50.798822 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 13 00:02:50.803820 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Sep 13 00:02:50.806215 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 13 00:02:50.807613 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 13 00:02:50.812040 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by mount (806) Sep 13 00:02:50.813173 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 13 00:02:50.819315 kernel: BTRFS info (device sda6): first mount of filesystem abbcf5a1-cc71-42ce-94f9-860f3aeda368 Sep 13 00:02:50.819344 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Sep 13 00:02:50.819357 kernel: BTRFS info (device sda6): using free space tree Sep 13 00:02:50.820799 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 13 00:02:50.829331 kernel: BTRFS info (device sda6): enabling ssd optimizations Sep 13 00:02:50.829375 kernel: BTRFS info (device sda6): auto enabling async discard Sep 13 00:02:50.832173 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
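For scale, the fsck summary above ("clean, 14/1628000 files, 120691/1617920 blocks") says the freshly prepared ROOT filesystem is nearly empty; the quick arithmetic:

# Roughly 0.001% of inodes and 7.5% of blocks are in use on ROOT.
files_used, files_total = 14, 1628000
blocks_used, blocks_total = 120691, 1617920
print(f"inodes in use: {100 * files_used / files_total:.3f}%")
print(f"blocks in use: {100 * blocks_used / blocks_total:.1f}%")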
Sep 13 00:02:50.888141 coreos-metadata[808]: Sep 13 00:02:50.887 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1 Sep 13 00:02:50.890156 coreos-metadata[808]: Sep 13 00:02:50.890 INFO Fetch successful Sep 13 00:02:50.892706 coreos-metadata[808]: Sep 13 00:02:50.891 INFO wrote hostname ci-4081-3-5-n-03d8b9aea3 to /sysroot/etc/hostname Sep 13 00:02:50.894703 initrd-setup-root[834]: cut: /sysroot/etc/passwd: No such file or directory Sep 13 00:02:50.896094 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Sep 13 00:02:50.906047 initrd-setup-root[842]: cut: /sysroot/etc/group: No such file or directory Sep 13 00:02:50.911469 initrd-setup-root[849]: cut: /sysroot/etc/shadow: No such file or directory Sep 13 00:02:50.916273 initrd-setup-root[856]: cut: /sysroot/etc/gshadow: No such file or directory Sep 13 00:02:51.015655 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 13 00:02:51.023733 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 13 00:02:51.027753 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 13 00:02:51.035565 kernel: BTRFS info (device sda6): last unmount of filesystem abbcf5a1-cc71-42ce-94f9-860f3aeda368 Sep 13 00:02:51.071023 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 13 00:02:51.073137 ignition[924]: INFO : Ignition 2.19.0 Sep 13 00:02:51.073137 ignition[924]: INFO : Stage: mount Sep 13 00:02:51.074110 ignition[924]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 13 00:02:51.074110 ignition[924]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Sep 13 00:02:51.075741 ignition[924]: INFO : mount: mount passed Sep 13 00:02:51.075741 ignition[924]: INFO : Ignition finished successfully Sep 13 00:02:51.076495 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 13 00:02:51.084791 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 13 00:02:51.171401 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 13 00:02:51.179336 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 13 00:02:51.188569 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (935) Sep 13 00:02:51.190088 kernel: BTRFS info (device sda6): first mount of filesystem abbcf5a1-cc71-42ce-94f9-860f3aeda368 Sep 13 00:02:51.190130 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Sep 13 00:02:51.190563 kernel: BTRFS info (device sda6): using free space tree Sep 13 00:02:51.193677 kernel: BTRFS info (device sda6): enabling ssd optimizations Sep 13 00:02:51.193735 kernel: BTRFS info (device sda6): auto enabling async discard Sep 13 00:02:51.196791 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
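flatcar-metadata-hostname.service, whose output appears above, asks the Hetzner metadata service for the machine's hostname and drops it into the still-mounted /sysroot. Sketched in Python (the endpoint and target path are taken from the log lines themselves; the real coreos-metadata agent is a separate binary):

import urllib.request

HOSTNAME_URL = "http://169.254.169.254/hetzner/v1/metadata/hostname"

with urllib.request.urlopen(HOSTNAME_URL, timeout=10) as resp:
    hostname = resp.read().decode().strip()

# written under /sysroot because the system has not pivoted to the real root yet
with open("/sysroot/etc/hostname", "w") as f:
    f.write(hostname + "\n")
print(f"wrote hostname {hostname} to /sysroot/etc/hostname")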
Sep 13 00:02:51.219898 ignition[952]: INFO : Ignition 2.19.0 Sep 13 00:02:51.220979 ignition[952]: INFO : Stage: files Sep 13 00:02:51.221409 ignition[952]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 13 00:02:51.221409 ignition[952]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Sep 13 00:02:51.222867 ignition[952]: DEBUG : files: compiled without relabeling support, skipping Sep 13 00:02:51.224313 ignition[952]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 13 00:02:51.224313 ignition[952]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 13 00:02:51.227946 ignition[952]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 13 00:02:51.228900 ignition[952]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 13 00:02:51.230211 unknown[952]: wrote ssh authorized keys file for user: core Sep 13 00:02:51.232144 ignition[952]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 13 00:02:51.233986 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Sep 13 00:02:51.233986 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Sep 13 00:02:51.326894 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 13 00:02:51.551936 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Sep 13 00:02:51.551936 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 13 00:02:51.551936 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Sep 13 00:02:51.767112 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Sep 13 00:02:51.844350 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 13 00:02:51.844350 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Sep 13 00:02:51.846853 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Sep 13 00:02:51.846853 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 13 00:02:51.846853 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 13 00:02:51.846853 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 13 00:02:51.846853 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 13 00:02:51.846853 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 13 00:02:51.846853 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 13 00:02:51.846853 
ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 13 00:02:51.846853 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 13 00:02:51.846853 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Sep 13 00:02:51.846853 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Sep 13 00:02:51.846853 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Sep 13 00:02:51.846853 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1 Sep 13 00:02:51.914632 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Sep 13 00:02:52.089311 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Sep 13 00:02:52.090733 ignition[952]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Sep 13 00:02:52.091988 ignition[952]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 13 00:02:52.093242 ignition[952]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 13 00:02:52.093242 ignition[952]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Sep 13 00:02:52.093242 ignition[952]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Sep 13 00:02:52.093242 ignition[952]: INFO : files: op(e): op(f): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Sep 13 00:02:52.093242 ignition[952]: INFO : files: op(e): op(f): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Sep 13 00:02:52.093242 ignition[952]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Sep 13 00:02:52.093242 ignition[952]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Sep 13 00:02:52.093242 ignition[952]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Sep 13 00:02:52.103237 ignition[952]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 13 00:02:52.103237 ignition[952]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 13 00:02:52.103237 ignition[952]: INFO : files: files passed Sep 13 00:02:52.103237 ignition[952]: INFO : Ignition finished successfully Sep 13 00:02:52.094987 systemd-networkd[773]: eth1: Gained IPv6LL Sep 13 00:02:52.096297 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 13 00:02:52.103754 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... 
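Operations op(a) and op(b) above are the part of the files stage that matters later in this log: Ignition downloads the kubernetes sysext image and links it under /etc/extensions so systemd-sysext can merge it after the switch to the real root. Reduced to plain Python (paths and URL copied from the log; error handling and checksum verification omitted):

import os
import urllib.request

RAW_URL = "https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw"
TARGET = "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
LINK = "/sysroot/etc/extensions/kubernetes.raw"

os.makedirs(os.path.dirname(TARGET), exist_ok=True)
urllib.request.urlretrieve(RAW_URL, TARGET)

os.makedirs(os.path.dirname(LINK), exist_ok=True)
# the symlink target is expressed relative to the real root, not /sysroot
os.symlink("/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw", LINK)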
Sep 13 00:02:52.109485 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 13 00:02:52.114903 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 13 00:02:52.115016 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 13 00:02:52.126339 initrd-setup-root-after-ignition[981]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 13 00:02:52.126339 initrd-setup-root-after-ignition[981]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 13 00:02:52.128954 initrd-setup-root-after-ignition[985]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 13 00:02:52.133599 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 13 00:02:52.134621 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 13 00:02:52.141876 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 13 00:02:52.178234 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 13 00:02:52.178452 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 13 00:02:52.181503 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 13 00:02:52.182848 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 13 00:02:52.184681 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 13 00:02:52.191926 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 13 00:02:52.208753 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 13 00:02:52.216851 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 13 00:02:52.230992 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 13 00:02:52.233071 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 13 00:02:52.234363 systemd[1]: Stopped target timers.target - Timer Units. Sep 13 00:02:52.235979 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 13 00:02:52.236288 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 13 00:02:52.238138 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 13 00:02:52.239074 systemd[1]: Stopped target basic.target - Basic System. Sep 13 00:02:52.240107 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 13 00:02:52.241142 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 13 00:02:52.242683 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 13 00:02:52.243462 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 13 00:02:52.244522 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 13 00:02:52.245668 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 13 00:02:52.246884 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 13 00:02:52.247880 systemd[1]: Stopped target swap.target - Swaps. Sep 13 00:02:52.248618 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 13 00:02:52.248795 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 13 00:02:52.250136 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. 
Sep 13 00:02:52.251359 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 13 00:02:52.252368 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 13 00:02:52.252848 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 13 00:02:52.253712 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 13 00:02:52.253910 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 13 00:02:52.255832 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 13 00:02:52.256073 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 13 00:02:52.257231 systemd[1]: ignition-files.service: Deactivated successfully. Sep 13 00:02:52.257421 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 13 00:02:52.258316 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Sep 13 00:02:52.258471 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Sep 13 00:02:52.271866 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 13 00:02:52.277787 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 13 00:02:52.278480 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 13 00:02:52.280820 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 13 00:02:52.285791 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 13 00:02:52.285919 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 13 00:02:52.295382 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 13 00:02:52.295526 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 13 00:02:52.301854 ignition[1005]: INFO : Ignition 2.19.0 Sep 13 00:02:52.303630 ignition[1005]: INFO : Stage: umount Sep 13 00:02:52.303630 ignition[1005]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 13 00:02:52.303630 ignition[1005]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Sep 13 00:02:52.307439 ignition[1005]: INFO : umount: umount passed Sep 13 00:02:52.307439 ignition[1005]: INFO : Ignition finished successfully Sep 13 00:02:52.305652 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 13 00:02:52.307175 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 13 00:02:52.307584 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 13 00:02:52.308425 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 13 00:02:52.308610 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 13 00:02:52.311763 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 13 00:02:52.311823 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 13 00:02:52.312635 systemd[1]: ignition-fetch.service: Deactivated successfully. Sep 13 00:02:52.312682 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Sep 13 00:02:52.314282 systemd[1]: Stopped target network.target - Network. Sep 13 00:02:52.315188 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 13 00:02:52.315255 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 13 00:02:52.316898 systemd[1]: Stopped target paths.target - Path Units. Sep 13 00:02:52.317905 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. 
Sep 13 00:02:52.321636 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 13 00:02:52.323595 systemd[1]: Stopped target slices.target - Slice Units. Sep 13 00:02:52.324346 systemd[1]: Stopped target sockets.target - Socket Units. Sep 13 00:02:52.325958 systemd[1]: iscsid.socket: Deactivated successfully. Sep 13 00:02:52.326039 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 13 00:02:52.327019 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 13 00:02:52.327061 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 13 00:02:52.328090 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 13 00:02:52.328147 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 13 00:02:52.329127 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 13 00:02:52.329184 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 13 00:02:52.330382 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 13 00:02:52.331481 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 13 00:02:52.333007 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 13 00:02:52.333130 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 13 00:02:52.334513 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 13 00:02:52.334643 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 13 00:02:52.336791 systemd-networkd[773]: eth1: DHCPv6 lease lost Sep 13 00:02:52.337037 systemd-networkd[773]: eth0: DHCPv6 lease lost Sep 13 00:02:52.339655 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 13 00:02:52.339807 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 13 00:02:52.341107 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 13 00:02:52.341241 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 13 00:02:52.349074 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 13 00:02:52.349660 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 13 00:02:52.349736 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 13 00:02:52.350592 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 13 00:02:52.351578 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 13 00:02:52.351785 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 13 00:02:52.380007 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 13 00:02:52.380450 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 13 00:02:52.382532 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 13 00:02:52.382654 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 13 00:02:52.384852 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 13 00:02:52.384925 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 13 00:02:52.386449 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 13 00:02:52.386485 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 13 00:02:52.387573 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 13 00:02:52.387628 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. 
Sep 13 00:02:52.389273 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 13 00:02:52.389326 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 13 00:02:52.390959 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 13 00:02:52.391012 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 13 00:02:52.398828 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 13 00:02:52.403947 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 13 00:02:52.404654 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 13 00:02:52.405967 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 13 00:02:52.406703 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 13 00:02:52.407462 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 13 00:02:52.407531 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 13 00:02:52.409083 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Sep 13 00:02:52.409143 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 13 00:02:52.411063 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 13 00:02:52.411126 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 13 00:02:52.412387 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 13 00:02:52.412441 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 13 00:02:52.413622 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 13 00:02:52.413665 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 13 00:02:52.415690 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 13 00:02:52.415797 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 13 00:02:52.417627 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 13 00:02:52.426078 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 13 00:02:52.436401 systemd[1]: Switching root. Sep 13 00:02:52.467883 systemd-journald[236]: Journal stopped Sep 13 00:02:53.429887 systemd-journald[236]: Received SIGTERM from PID 1 (systemd). Sep 13 00:02:53.431660 kernel: SELinux: policy capability network_peer_controls=1 Sep 13 00:02:53.431676 kernel: SELinux: policy capability open_perms=1 Sep 13 00:02:53.431692 kernel: SELinux: policy capability extended_socket_class=1 Sep 13 00:02:53.431703 kernel: SELinux: policy capability always_check_network=0 Sep 13 00:02:53.431712 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 13 00:02:53.431722 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 13 00:02:53.431732 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 13 00:02:53.431745 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 13 00:02:53.431755 kernel: audit: type=1403 audit(1757721772.648:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 13 00:02:53.431767 systemd[1]: Successfully loaded SELinux policy in 38.392ms. Sep 13 00:02:53.431793 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.938ms. 
Sep 13 00:02:53.431805 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 13 00:02:53.431816 systemd[1]: Detected virtualization kvm. Sep 13 00:02:53.431832 systemd[1]: Detected architecture arm64. Sep 13 00:02:53.431844 systemd[1]: Detected first boot. Sep 13 00:02:53.431859 systemd[1]: Hostname set to <ci-4081-3-5-n-03d8b9aea3>. Sep 13 00:02:53.431872 systemd[1]: Initializing machine ID from VM UUID. Sep 13 00:02:53.431883 zram_generator::config[1047]: No configuration found. Sep 13 00:02:53.431896 systemd[1]: Populated /etc with preset unit settings. Sep 13 00:02:53.431907 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 13 00:02:53.431917 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 13 00:02:53.431932 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 13 00:02:53.431943 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 13 00:02:53.431955 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 13 00:02:53.431968 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 13 00:02:53.431979 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 13 00:02:53.431995 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 13 00:02:53.432006 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 13 00:02:53.432016 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 13 00:02:53.432027 systemd[1]: Created slice user.slice - User and Session Slice. Sep 13 00:02:53.432038 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 13 00:02:53.432050 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 13 00:02:53.432061 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 13 00:02:53.432074 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 13 00:02:53.432085 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 13 00:02:53.432096 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 13 00:02:53.432107 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Sep 13 00:02:53.432118 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 13 00:02:53.432129 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 13 00:02:53.432192 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 13 00:02:53.432208 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 13 00:02:53.432220 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 13 00:02:53.432231 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 13 00:02:53.432246 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 13 00:02:53.432256 systemd[1]: Reached target slices.target - Slice Units.
Sep 13 00:02:53.432267 systemd[1]: Reached target swap.target - Swaps. Sep 13 00:02:53.432277 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 13 00:02:53.432288 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 13 00:02:53.432301 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 13 00:02:53.432312 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 13 00:02:53.432323 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 13 00:02:53.432334 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 13 00:02:53.432344 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 13 00:02:53.432355 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 13 00:02:53.432367 systemd[1]: Mounting media.mount - External Media Directory... Sep 13 00:02:53.432377 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 13 00:02:53.432387 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 13 00:02:53.432400 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 13 00:02:53.432411 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 13 00:02:53.432422 systemd[1]: Reached target machines.target - Containers. Sep 13 00:02:53.432436 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 13 00:02:53.432449 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 13 00:02:53.432461 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 13 00:02:53.432472 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 13 00:02:53.432483 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 13 00:02:53.432493 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 13 00:02:53.432504 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 13 00:02:53.432516 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 13 00:02:53.432526 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 13 00:02:53.432537 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 13 00:02:53.433684 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 13 00:02:53.433698 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 13 00:02:53.433715 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 13 00:02:53.433726 systemd[1]: Stopped systemd-fsck-usr.service. Sep 13 00:02:53.433737 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 13 00:02:53.433747 kernel: loop: module loaded Sep 13 00:02:53.433760 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 13 00:02:53.433772 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 13 00:02:53.433782 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... 
Sep 13 00:02:53.433795 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 13 00:02:53.433806 systemd[1]: verity-setup.service: Deactivated successfully. Sep 13 00:02:53.433817 systemd[1]: Stopped verity-setup.service. Sep 13 00:02:53.433828 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 13 00:02:53.433838 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 13 00:02:53.433849 systemd[1]: Mounted media.mount - External Media Directory. Sep 13 00:02:53.433859 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 13 00:02:53.433870 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 13 00:02:53.433882 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 13 00:02:53.433893 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 13 00:02:53.433904 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 13 00:02:53.433914 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 13 00:02:53.433926 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 00:02:53.433937 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 13 00:02:53.433949 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 00:02:53.433960 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 13 00:02:53.433973 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 13 00:02:53.433983 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 13 00:02:53.433995 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 13 00:02:53.434006 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 13 00:02:53.434019 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 13 00:02:53.434030 kernel: fuse: init (API version 7.39) Sep 13 00:02:53.434040 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 13 00:02:53.434051 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 13 00:02:53.434098 systemd-journald[1115]: Collecting audit messages is disabled. Sep 13 00:02:53.434124 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 13 00:02:53.434410 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 13 00:02:53.434439 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 13 00:02:53.434454 systemd-journald[1115]: Journal started Sep 13 00:02:53.434482 systemd-journald[1115]: Runtime Journal (/run/log/journal/8ef12b5113914c74b52f8b69cea9fb2b) is 8.0M, max 76.6M, 68.6M free. Sep 13 00:02:53.132772 systemd[1]: Queued start job for default target multi-user.target. Sep 13 00:02:53.157173 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Sep 13 00:02:53.157763 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 13 00:02:53.437604 systemd[1]: Started systemd-journald.service - Journal Service. Sep 13 00:02:53.441770 kernel: ACPI: bus type drm_connector registered Sep 13 00:02:53.445724 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 13 00:02:53.451103 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Sep 13 00:02:53.452839 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 13 00:02:53.460027 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 13 00:02:53.460458 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 13 00:02:53.462350 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 13 00:02:53.490850 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 13 00:02:53.491514 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 13 00:02:53.493627 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 13 00:02:53.495432 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Sep 13 00:02:53.502942 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 13 00:02:53.504995 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 13 00:02:53.507331 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 13 00:02:53.510809 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 13 00:02:53.514785 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 13 00:02:53.517657 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 13 00:02:53.519752 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 13 00:02:53.522801 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 13 00:02:53.526052 systemd-tmpfiles[1135]: ACLs are not supported, ignoring. Sep 13 00:02:53.527275 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 13 00:02:53.527370 systemd-tmpfiles[1135]: ACLs are not supported, ignoring. Sep 13 00:02:53.532620 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 13 00:02:53.538597 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 13 00:02:53.549599 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 13 00:02:53.553584 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 13 00:02:53.563587 systemd-journald[1115]: Time spent on flushing to /var/log/journal/8ef12b5113914c74b52f8b69cea9fb2b is 88.923ms for 1133 entries. Sep 13 00:02:53.563587 systemd-journald[1115]: System Journal (/var/log/journal/8ef12b5113914c74b52f8b69cea9fb2b) is 8.0M, max 584.8M, 576.8M free. Sep 13 00:02:53.668660 systemd-journald[1115]: Received client request to flush runtime journal. Sep 13 00:02:53.668770 kernel: loop0: detected capacity change from 0 to 114432 Sep 13 00:02:53.668807 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 13 00:02:53.668839 kernel: loop1: detected capacity change from 0 to 211168 Sep 13 00:02:53.583020 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 13 00:02:53.586295 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 13 00:02:53.600756 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... 
Sep 13 00:02:53.604300 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 13 00:02:53.615898 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Sep 13 00:02:53.649596 udevadm[1176]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Sep 13 00:02:53.674947 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 13 00:02:53.681769 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 13 00:02:53.684654 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Sep 13 00:02:53.692612 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 13 00:02:53.700607 kernel: loop2: detected capacity change from 0 to 114328 Sep 13 00:02:53.699885 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 13 00:02:53.735824 kernel: loop3: detected capacity change from 0 to 8 Sep 13 00:02:53.744675 systemd-tmpfiles[1184]: ACLs are not supported, ignoring. Sep 13 00:02:53.744697 systemd-tmpfiles[1184]: ACLs are not supported, ignoring. Sep 13 00:02:53.757872 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 13 00:02:53.765778 kernel: loop4: detected capacity change from 0 to 114432 Sep 13 00:02:53.783580 kernel: loop5: detected capacity change from 0 to 211168 Sep 13 00:02:53.807574 kernel: loop6: detected capacity change from 0 to 114328 Sep 13 00:02:53.830422 (sd-merge)[1189]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'. Sep 13 00:02:53.830860 kernel: loop7: detected capacity change from 0 to 8 Sep 13 00:02:53.830899 (sd-merge)[1189]: Merged extensions into '/usr'. Sep 13 00:02:53.841690 systemd[1]: Reloading requested from client PID 1166 ('systemd-sysext') (unit systemd-sysext.service)... Sep 13 00:02:53.841714 systemd[1]: Reloading... Sep 13 00:02:53.948570 zram_generator::config[1215]: No configuration found. Sep 13 00:02:54.050934 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:02:54.098354 systemd[1]: Reloading finished in 256 ms. Sep 13 00:02:54.124358 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 13 00:02:54.134901 systemd[1]: Starting ensure-sysext.service... Sep 13 00:02:54.138879 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 13 00:02:54.161587 ldconfig[1161]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 13 00:02:54.164680 systemd[1]: Reloading requested from client PID 1251 ('systemctl') (unit ensure-sysext.service)... Sep 13 00:02:54.164719 systemd[1]: Reloading... Sep 13 00:02:54.207581 systemd-tmpfiles[1252]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 13 00:02:54.209931 systemd-tmpfiles[1252]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 13 00:02:54.210884 systemd-tmpfiles[1252]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 13 00:02:54.212018 systemd-tmpfiles[1252]: ACLs are not supported, ignoring. 
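The (sd-merge) lines above are systemd-sysext acting on exactly those extension images: each one ships its own /usr tree, and they are stacked read-only on top of the host /usr. Conceptually that is an overlayfs mount along the lines below; the mount option syntax is real, but the staging paths and layer order shown here are assumptions for illustration, not what systemd-sysext literally uses internally:

import subprocess

lower_dirs = [
    "/run/extensions/oem-hetzner/usr",        # assumed staging paths
    "/run/extensions/kubernetes/usr",
    "/run/extensions/docker-flatcar/usr",
    "/run/extensions/containerd-flatcar/usr",
    "/usr",                                   # host /usr as the bottom layer
]
subprocess.run(
    ["mount", "-t", "overlay", "overlay",
     "-o", "lowerdir=" + ":".join(lower_dirs), "/usr"],
    check=True,
)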
Sep 13 00:02:54.213368 systemd-tmpfiles[1252]: ACLs are not supported, ignoring. Sep 13 00:02:54.219203 systemd-tmpfiles[1252]: Detected autofs mount point /boot during canonicalization of boot. Sep 13 00:02:54.221596 systemd-tmpfiles[1252]: Skipping /boot Sep 13 00:02:54.241173 systemd-tmpfiles[1252]: Detected autofs mount point /boot during canonicalization of boot. Sep 13 00:02:54.241594 systemd-tmpfiles[1252]: Skipping /boot Sep 13 00:02:54.276582 zram_generator::config[1280]: No configuration found. Sep 13 00:02:54.397458 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:02:54.446509 systemd[1]: Reloading finished in 281 ms. Sep 13 00:02:54.457756 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 13 00:02:54.458836 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 13 00:02:54.464389 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 13 00:02:54.477849 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 13 00:02:54.481757 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 13 00:02:54.486816 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 13 00:02:54.490786 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 13 00:02:54.493770 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 13 00:02:54.497844 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 13 00:02:54.506194 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 13 00:02:54.514876 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 13 00:02:54.518857 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 13 00:02:54.526877 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 13 00:02:54.528934 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 13 00:02:54.540914 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 13 00:02:54.543187 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 13 00:02:54.543353 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 13 00:02:54.547916 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 13 00:02:54.558900 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 13 00:02:54.560225 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 13 00:02:54.563669 systemd-udevd[1327]: Using default interface naming scheme 'v255'. Sep 13 00:02:54.567057 systemd[1]: Finished ensure-sysext.service. Sep 13 00:02:54.577413 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 13 00:02:54.586043 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. 
Sep 13 00:02:54.592931 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 13 00:02:54.593137 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 13 00:02:54.606400 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 00:02:54.606671 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 13 00:02:54.618935 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 13 00:02:54.624849 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 13 00:02:54.633809 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 13 00:02:54.652841 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 13 00:02:54.654910 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 00:02:54.656639 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 13 00:02:54.658359 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 13 00:02:54.659458 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 13 00:02:54.665440 augenrules[1365]: No rules Sep 13 00:02:54.669055 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 13 00:02:54.684295 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 13 00:02:54.684371 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 13 00:02:54.710899 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 13 00:02:54.714680 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 13 00:02:54.717501 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 13 00:02:54.718189 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 13 00:02:54.799486 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Sep 13 00:02:54.845261 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 13 00:02:54.846378 systemd[1]: Reached target time-set.target - System Time Set. Sep 13 00:02:54.855832 systemd-networkd[1361]: lo: Link UP Sep 13 00:02:54.855847 systemd-networkd[1361]: lo: Gained carrier Sep 13 00:02:54.857134 systemd-networkd[1361]: Enumeration completed Sep 13 00:02:54.857995 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 13 00:02:54.859685 systemd-timesyncd[1341]: No network connectivity, watching for changes. Sep 13 00:02:54.865818 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 13 00:02:54.883282 systemd-resolved[1324]: Positive Trust Anchors: Sep 13 00:02:54.883702 systemd-resolved[1324]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 13 00:02:54.883886 systemd-resolved[1324]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 13 00:02:54.891428 systemd-resolved[1324]: Using system hostname 'ci-4081-3-5-n-03d8b9aea3'. Sep 13 00:02:54.894696 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 13 00:02:54.896185 systemd[1]: Reached target network.target - Network. Sep 13 00:02:54.896784 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 13 00:02:54.919252 systemd-networkd[1361]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 13 00:02:54.919265 systemd-networkd[1361]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 13 00:02:54.920860 systemd-networkd[1361]: eth0: Link UP Sep 13 00:02:54.920869 systemd-networkd[1361]: eth0: Gained carrier Sep 13 00:02:54.921344 systemd-networkd[1361]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 13 00:02:54.954567 kernel: mousedev: PS/2 mouse device common for all mice Sep 13 00:02:54.985842 systemd-networkd[1361]: eth0: DHCPv4 address 49.13.17.32/32, gateway 172.31.1.1 acquired from 172.31.1.1 Sep 13 00:02:54.987732 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1360) Sep 13 00:02:54.986688 systemd-timesyncd[1341]: Network configuration changed, trying to establish connection. Sep 13 00:02:55.017749 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. Sep 13 00:02:55.017876 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 13 00:02:55.028090 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 13 00:02:55.031802 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 13 00:02:55.035835 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 13 00:02:55.037377 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 13 00:02:55.037424 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 13 00:02:55.038294 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 00:02:55.038453 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 13 00:02:55.048910 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 13 00:02:55.050104 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 13 00:02:55.056222 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Sep 13 00:02:55.056615 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 13 00:02:55.059027 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 13 00:02:55.059094 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 13 00:02:55.066675 systemd-networkd[1361]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 13 00:02:55.066689 systemd-networkd[1361]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 13 00:02:55.067296 systemd-networkd[1361]: eth1: Link UP Sep 13 00:02:55.067303 systemd-networkd[1361]: eth1: Gained carrier Sep 13 00:02:55.067318 systemd-networkd[1361]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 13 00:02:55.095683 systemd-networkd[1361]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1 Sep 13 00:02:55.098028 systemd-timesyncd[1341]: Contacted time server 185.207.105.38:123 (2.flatcar.pool.ntp.org). Sep 13 00:02:55.098118 systemd-timesyncd[1341]: Initial clock synchronization to Sat 2025-09-13 00:02:54.979379 UTC. Sep 13 00:02:55.103233 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Sep 13 00:02:55.113011 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 13 00:02:55.123577 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0 Sep 13 00:02:55.123694 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Sep 13 00:02:55.123711 kernel: [drm] features: -context_init Sep 13 00:02:55.124738 kernel: [drm] number of scanouts: 1 Sep 13 00:02:55.124830 kernel: [drm] number of cap sets: 0 Sep 13 00:02:55.126569 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0 Sep 13 00:02:55.131570 kernel: Console: switching to colour frame buffer device 160x50 Sep 13 00:02:55.148586 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Sep 13 00:02:55.150152 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 13 00:02:55.162000 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 13 00:02:55.163536 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 13 00:02:55.163741 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 13 00:02:55.175011 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 13 00:02:55.246381 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 13 00:02:55.288651 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Sep 13 00:02:55.298860 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Sep 13 00:02:55.311366 lvm[1434]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 13 00:02:55.342448 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Sep 13 00:02:55.344774 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 13 00:02:55.345889 systemd[1]: Reached target sysinit.target - System Initialization. 
Sep 13 00:02:55.347335 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 13 00:02:55.349138 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 13 00:02:55.350471 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 13 00:02:55.351247 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 13 00:02:55.352005 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 13 00:02:55.352728 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 13 00:02:55.353482 systemd[1]: Reached target paths.target - Path Units. Sep 13 00:02:55.354158 systemd[1]: Reached target timers.target - Timer Units. Sep 13 00:02:55.356511 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 13 00:02:55.361281 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 13 00:02:55.368692 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 13 00:02:55.372420 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Sep 13 00:02:55.373872 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 13 00:02:55.374765 systemd[1]: Reached target sockets.target - Socket Units. Sep 13 00:02:55.375317 systemd[1]: Reached target basic.target - Basic System. Sep 13 00:02:55.375983 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 13 00:02:55.376019 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 13 00:02:55.377683 systemd[1]: Starting containerd.service - containerd container runtime... Sep 13 00:02:55.382785 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Sep 13 00:02:55.386727 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 13 00:02:55.389582 lvm[1438]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 13 00:02:55.392785 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 13 00:02:55.398684 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 13 00:02:55.399778 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 13 00:02:55.404801 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 13 00:02:55.417731 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 13 00:02:55.423498 jq[1442]: false Sep 13 00:02:55.423772 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. Sep 13 00:02:55.429935 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 13 00:02:55.435762 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 13 00:02:55.442060 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 13 00:02:55.446408 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 13 00:02:55.446978 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. 
Sep 13 00:02:55.451935 systemd[1]: Starting update-engine.service - Update Engine... Sep 13 00:02:55.458846 dbus-daemon[1441]: [system] SELinux support is enabled Sep 13 00:02:55.464705 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 13 00:02:55.465933 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 13 00:02:55.471019 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Sep 13 00:02:55.476748 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 13 00:02:55.476943 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 13 00:02:55.481496 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 13 00:02:55.483434 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 13 00:02:55.484793 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 13 00:02:55.484826 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 13 00:02:55.487796 coreos-metadata[1440]: Sep 13 00:02:55.487 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Sep 13 00:02:55.498214 extend-filesystems[1443]: Found loop4 Sep 13 00:02:55.512680 extend-filesystems[1443]: Found loop5 Sep 13 00:02:55.512680 extend-filesystems[1443]: Found loop6 Sep 13 00:02:55.512680 extend-filesystems[1443]: Found loop7 Sep 13 00:02:55.512680 extend-filesystems[1443]: Found sda Sep 13 00:02:55.512680 extend-filesystems[1443]: Found sda1 Sep 13 00:02:55.512680 extend-filesystems[1443]: Found sda2 Sep 13 00:02:55.512680 extend-filesystems[1443]: Found sda3 Sep 13 00:02:55.512680 extend-filesystems[1443]: Found usr Sep 13 00:02:55.512680 extend-filesystems[1443]: Found sda4 Sep 13 00:02:55.512680 extend-filesystems[1443]: Found sda6 Sep 13 00:02:55.512680 extend-filesystems[1443]: Found sda7 Sep 13 00:02:55.512680 extend-filesystems[1443]: Found sda9 Sep 13 00:02:55.512680 extend-filesystems[1443]: Checking size of /dev/sda9 Sep 13 00:02:55.550148 coreos-metadata[1440]: Sep 13 00:02:55.500 INFO Fetch successful Sep 13 00:02:55.550148 coreos-metadata[1440]: Sep 13 00:02:55.501 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Sep 13 00:02:55.550148 coreos-metadata[1440]: Sep 13 00:02:55.502 INFO Fetch successful Sep 13 00:02:55.515021 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 13 00:02:55.551679 extend-filesystems[1443]: Resized partition /dev/sda9 Sep 13 00:02:55.515306 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Sep 13 00:02:55.561525 jq[1454]: true Sep 13 00:02:55.569949 extend-filesystems[1480]: resize2fs 1.47.1 (20-May-2024) Sep 13 00:02:55.574672 jq[1471]: true Sep 13 00:02:55.587919 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Sep 13 00:02:55.587984 tar[1466]: linux-arm64/LICENSE Sep 13 00:02:55.587984 tar[1466]: linux-arm64/helm Sep 13 00:02:55.589191 (ntainerd)[1473]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 13 00:02:55.596632 update_engine[1453]: I20250913 00:02:55.594094 1453 main.cc:92] Flatcar Update Engine starting Sep 13 00:02:55.595885 systemd[1]: motdgen.service: Deactivated successfully. Sep 13 00:02:55.597590 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 13 00:02:55.606263 systemd[1]: Started update-engine.service - Update Engine. Sep 13 00:02:55.608046 update_engine[1453]: I20250913 00:02:55.607256 1453 update_check_scheduler.cc:74] Next update check in 4m4s Sep 13 00:02:55.609944 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 13 00:02:55.635638 systemd-logind[1452]: New seat seat0. Sep 13 00:02:55.645886 systemd-logind[1452]: Watching system buttons on /dev/input/event0 (Power Button) Sep 13 00:02:55.645910 systemd-logind[1452]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard) Sep 13 00:02:55.646311 systemd[1]: Started systemd-logind.service - User Login Management. Sep 13 00:02:55.664260 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Sep 13 00:02:55.666326 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 13 00:02:55.705954 bash[1509]: Updated "/home/core/.ssh/authorized_keys" Sep 13 00:02:55.704842 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 13 00:02:55.716980 systemd[1]: Starting sshkeys.service... Sep 13 00:02:55.774745 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Sep 13 00:02:55.802153 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1359) Sep 13 00:02:55.803184 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Sep 13 00:02:55.821084 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Sep 13 00:02:55.867346 coreos-metadata[1519]: Sep 13 00:02:55.867 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Sep 13 00:02:55.871476 coreos-metadata[1519]: Sep 13 00:02:55.870 INFO Fetch successful Sep 13 00:02:55.873631 containerd[1473]: time="2025-09-13T00:02:55.873499480Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Sep 13 00:02:55.881189 extend-filesystems[1480]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Sep 13 00:02:55.881189 extend-filesystems[1480]: old_desc_blocks = 1, new_desc_blocks = 5 Sep 13 00:02:55.881189 extend-filesystems[1480]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. Sep 13 00:02:55.894687 extend-filesystems[1443]: Resized filesystem in /dev/sda9 Sep 13 00:02:55.894687 extend-filesystems[1443]: Found sr0 Sep 13 00:02:55.892931 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 13 00:02:55.893386 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
Sep 13 00:02:55.894836 unknown[1519]: wrote ssh authorized keys file for user: core Sep 13 00:02:55.920613 containerd[1473]: time="2025-09-13T00:02:55.920188040Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 13 00:02:55.922533 containerd[1473]: time="2025-09-13T00:02:55.922045120Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.106-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:02:55.922533 containerd[1473]: time="2025-09-13T00:02:55.922134240Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 13 00:02:55.922533 containerd[1473]: time="2025-09-13T00:02:55.922156520Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 13 00:02:55.922533 containerd[1473]: time="2025-09-13T00:02:55.922307640Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Sep 13 00:02:55.922533 containerd[1473]: time="2025-09-13T00:02:55.922326320Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Sep 13 00:02:55.922533 containerd[1473]: time="2025-09-13T00:02:55.922396000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:02:55.922533 containerd[1473]: time="2025-09-13T00:02:55.922408360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 13 00:02:55.924162 containerd[1473]: time="2025-09-13T00:02:55.924085560Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:02:55.924162 containerd[1473]: time="2025-09-13T00:02:55.924161160Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 13 00:02:55.924266 containerd[1473]: time="2025-09-13T00:02:55.924181240Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:02:55.924266 containerd[1473]: time="2025-09-13T00:02:55.924192360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 13 00:02:55.924310 containerd[1473]: time="2025-09-13T00:02:55.924282840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 13 00:02:55.924507 containerd[1473]: time="2025-09-13T00:02:55.924479360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 13 00:02:55.925228 containerd[1473]: time="2025-09-13T00:02:55.924806680Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:02:55.925228 containerd[1473]: time="2025-09-13T00:02:55.924828520Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 13 00:02:55.925329 containerd[1473]: time="2025-09-13T00:02:55.925302680Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 13 00:02:55.925381 containerd[1473]: time="2025-09-13T00:02:55.925364440Z" level=info msg="metadata content store policy set" policy=shared Sep 13 00:02:55.933708 containerd[1473]: time="2025-09-13T00:02:55.933657000Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 13 00:02:55.933804 containerd[1473]: time="2025-09-13T00:02:55.933745880Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 13 00:02:55.933804 containerd[1473]: time="2025-09-13T00:02:55.933768120Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Sep 13 00:02:55.934040 containerd[1473]: time="2025-09-13T00:02:55.933787480Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Sep 13 00:02:55.934040 containerd[1473]: time="2025-09-13T00:02:55.933888680Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 13 00:02:55.934127 containerd[1473]: time="2025-09-13T00:02:55.934084680Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 13 00:02:55.934526 containerd[1473]: time="2025-09-13T00:02:55.934489720Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 13 00:02:55.935500 containerd[1473]: time="2025-09-13T00:02:55.934685120Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Sep 13 00:02:55.935500 containerd[1473]: time="2025-09-13T00:02:55.934716240Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Sep 13 00:02:55.935500 containerd[1473]: time="2025-09-13T00:02:55.934734200Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Sep 13 00:02:55.935500 containerd[1473]: time="2025-09-13T00:02:55.934763040Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 13 00:02:55.935500 containerd[1473]: time="2025-09-13T00:02:55.934780680Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 13 00:02:55.935500 containerd[1473]: time="2025-09-13T00:02:55.934797200Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 13 00:02:55.935500 containerd[1473]: time="2025-09-13T00:02:55.934816280Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 13 00:02:55.935500 containerd[1473]: time="2025-09-13T00:02:55.934835640Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Sep 13 00:02:55.935500 containerd[1473]: time="2025-09-13T00:02:55.934851640Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 13 00:02:55.935500 containerd[1473]: time="2025-09-13T00:02:55.934866720Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 13 00:02:55.935500 containerd[1473]: time="2025-09-13T00:02:55.934882360Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 13 00:02:55.935500 containerd[1473]: time="2025-09-13T00:02:55.934907920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 13 00:02:55.935500 containerd[1473]: time="2025-09-13T00:02:55.934935400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 13 00:02:55.935500 containerd[1473]: time="2025-09-13T00:02:55.934951320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 13 00:02:55.937711 containerd[1473]: time="2025-09-13T00:02:55.934971520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 13 00:02:55.937711 containerd[1473]: time="2025-09-13T00:02:55.934987840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 13 00:02:55.937711 containerd[1473]: time="2025-09-13T00:02:55.935004200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 13 00:02:55.937711 containerd[1473]: time="2025-09-13T00:02:55.935020080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 13 00:02:55.937711 containerd[1473]: time="2025-09-13T00:02:55.935037160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 13 00:02:55.937711 containerd[1473]: time="2025-09-13T00:02:55.935055200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Sep 13 00:02:55.937711 containerd[1473]: time="2025-09-13T00:02:55.935073720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Sep 13 00:02:55.937711 containerd[1473]: time="2025-09-13T00:02:55.935104080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 13 00:02:55.937711 containerd[1473]: time="2025-09-13T00:02:55.935127720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Sep 13 00:02:55.937711 containerd[1473]: time="2025-09-13T00:02:55.935143480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 13 00:02:55.937711 containerd[1473]: time="2025-09-13T00:02:55.935162880Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 13 00:02:55.937711 containerd[1473]: time="2025-09-13T00:02:55.935198800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 13 00:02:55.937711 containerd[1473]: time="2025-09-13T00:02:55.935214320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 Sep 13 00:02:55.937711 containerd[1473]: time="2025-09-13T00:02:55.935228360Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 13 00:02:55.936850 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Sep 13 00:02:55.938022 update-ssh-keys[1527]: Updated "/home/core/.ssh/authorized_keys" Sep 13 00:02:55.941275 containerd[1473]: time="2025-09-13T00:02:55.937351120Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 13 00:02:55.941275 containerd[1473]: time="2025-09-13T00:02:55.939657440Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 13 00:02:55.941275 containerd[1473]: time="2025-09-13T00:02:55.939696680Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 13 00:02:55.941275 containerd[1473]: time="2025-09-13T00:02:55.939720080Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 13 00:02:55.941275 containerd[1473]: time="2025-09-13T00:02:55.939734680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 13 00:02:55.941275 containerd[1473]: time="2025-09-13T00:02:55.939763760Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 13 00:02:55.941275 containerd[1473]: time="2025-09-13T00:02:55.939781400Z" level=info msg="NRI interface is disabled by configuration." Sep 13 00:02:55.941275 containerd[1473]: time="2025-09-13T00:02:55.939796880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Sep 13 00:02:55.941603 containerd[1473]: time="2025-09-13T00:02:55.940293400Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 13 00:02:55.941603 containerd[1473]: time="2025-09-13T00:02:55.940379400Z" level=info msg="Connect containerd service" Sep 13 00:02:55.941603 containerd[1473]: time="2025-09-13T00:02:55.940505160Z" level=info msg="using legacy CRI server" Sep 13 00:02:55.941603 containerd[1473]: time="2025-09-13T00:02:55.940524080Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 13 00:02:55.941603 containerd[1473]: time="2025-09-13T00:02:55.940781000Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 13 00:02:55.945487 containerd[1473]: time="2025-09-13T00:02:55.941900680Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 13 00:02:55.945487 
containerd[1473]: time="2025-09-13T00:02:55.942687520Z" level=info msg="Start subscribing containerd event" Sep 13 00:02:55.945487 containerd[1473]: time="2025-09-13T00:02:55.942733800Z" level=info msg="Start recovering state" Sep 13 00:02:55.945487 containerd[1473]: time="2025-09-13T00:02:55.942820400Z" level=info msg="Start event monitor" Sep 13 00:02:55.945487 containerd[1473]: time="2025-09-13T00:02:55.942833800Z" level=info msg="Start snapshots syncer" Sep 13 00:02:55.945487 containerd[1473]: time="2025-09-13T00:02:55.942843920Z" level=info msg="Start cni network conf syncer for default" Sep 13 00:02:55.945487 containerd[1473]: time="2025-09-13T00:02:55.942851200Z" level=info msg="Start streaming server" Sep 13 00:02:55.945487 containerd[1473]: time="2025-09-13T00:02:55.943254440Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 13 00:02:55.945487 containerd[1473]: time="2025-09-13T00:02:55.943301760Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 13 00:02:55.945487 containerd[1473]: time="2025-09-13T00:02:55.943353000Z" level=info msg="containerd successfully booted in 0.082491s" Sep 13 00:02:55.943618 systemd[1]: Finished sshkeys.service. Sep 13 00:02:55.944429 systemd[1]: Started containerd.service - containerd container runtime. Sep 13 00:02:55.958970 locksmithd[1494]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 13 00:02:56.253654 systemd-networkd[1361]: eth0: Gained IPv6LL Sep 13 00:02:56.256776 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 13 00:02:56.259004 systemd[1]: Reached target network-online.target - Network is Online. Sep 13 00:02:56.267804 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:02:56.271321 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 13 00:02:56.337567 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 13 00:02:56.353874 tar[1466]: linux-arm64/README.md Sep 13 00:02:56.369751 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 13 00:02:56.599662 sshd_keygen[1488]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 13 00:02:56.622226 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 13 00:02:56.633169 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 13 00:02:56.646631 systemd[1]: issuegen.service: Deactivated successfully. Sep 13 00:02:56.648648 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 13 00:02:56.655969 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 13 00:02:56.667247 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 13 00:02:56.675204 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 13 00:02:56.679069 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Sep 13 00:02:56.680829 systemd[1]: Reached target getty.target - Login Prompts. Sep 13 00:02:56.765734 systemd-networkd[1361]: eth1: Gained IPv6LL Sep 13 00:02:57.137042 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:02:57.138572 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 13 00:02:57.139661 systemd[1]: Startup finished in 774ms (kernel) + 4.958s (initrd) + 4.528s (userspace) = 10.261s. 
Sep 13 00:02:57.150896 (kubelet)[1572]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 13 00:02:57.680991 kubelet[1572]: E0913 00:02:57.680889 1572 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:02:57.682755 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:02:57.683003 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 00:03:07.875567 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 13 00:03:07.889944 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:03:08.016909 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:03:08.023359 (kubelet)[1590]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 13 00:03:08.080418 kubelet[1590]: E0913 00:03:08.080337 1590 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:03:08.083242 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:03:08.083494 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 00:03:18.126028 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 13 00:03:18.132847 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:03:18.257515 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:03:18.273233 (kubelet)[1606]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 13 00:03:18.326310 kubelet[1606]: E0913 00:03:18.326259 1606 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:03:18.330598 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:03:18.331062 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 00:03:28.375264 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Sep 13 00:03:28.385882 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:03:28.511990 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 13 00:03:28.528197 (kubelet)[1620]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 13 00:03:28.586122 kubelet[1620]: E0913 00:03:28.586020 1620 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:03:28.589595 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:03:28.589920 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 00:03:35.421659 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 13 00:03:35.435148 systemd[1]: Started sshd@0-49.13.17.32:22-147.75.109.163:57624.service - OpenSSH per-connection server daemon (147.75.109.163:57624). Sep 13 00:03:36.432838 sshd[1628]: Accepted publickey for core from 147.75.109.163 port 57624 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM Sep 13 00:03:36.435622 sshd[1628]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:03:36.445575 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 13 00:03:36.453026 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 13 00:03:36.456221 systemd-logind[1452]: New session 1 of user core. Sep 13 00:03:36.468773 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 13 00:03:36.480418 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 13 00:03:36.484843 (systemd)[1632]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:03:36.599994 systemd[1632]: Queued start job for default target default.target. Sep 13 00:03:36.610677 systemd[1632]: Created slice app.slice - User Application Slice. Sep 13 00:03:36.611008 systemd[1632]: Reached target paths.target - Paths. Sep 13 00:03:36.611046 systemd[1632]: Reached target timers.target - Timers. Sep 13 00:03:36.613604 systemd[1632]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 13 00:03:36.632086 systemd[1632]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 13 00:03:36.632534 systemd[1632]: Reached target sockets.target - Sockets. Sep 13 00:03:36.632642 systemd[1632]: Reached target basic.target - Basic System. Sep 13 00:03:36.632726 systemd[1632]: Reached target default.target - Main User Target. Sep 13 00:03:36.632773 systemd[1632]: Startup finished in 140ms. Sep 13 00:03:36.632969 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 13 00:03:36.643885 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 13 00:03:37.349216 systemd[1]: Started sshd@1-49.13.17.32:22-147.75.109.163:57638.service - OpenSSH per-connection server daemon (147.75.109.163:57638). Sep 13 00:03:38.344872 sshd[1643]: Accepted publickey for core from 147.75.109.163 port 57638 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM Sep 13 00:03:38.347585 sshd[1643]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:03:38.354494 systemd-logind[1452]: New session 2 of user core. Sep 13 00:03:38.363930 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 13 00:03:38.626037 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. 
Sep 13 00:03:38.636922 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:03:38.784927 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:03:38.786514 (kubelet)[1654]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 13 00:03:38.836305 kubelet[1654]: E0913 00:03:38.836200 1654 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:03:38.838602 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:03:38.838748 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 00:03:39.037355 sshd[1643]: pam_unix(sshd:session): session closed for user core Sep 13 00:03:39.044038 systemd-logind[1452]: Session 2 logged out. Waiting for processes to exit. Sep 13 00:03:39.045013 systemd[1]: sshd@1-49.13.17.32:22-147.75.109.163:57638.service: Deactivated successfully. Sep 13 00:03:39.047491 systemd[1]: session-2.scope: Deactivated successfully. Sep 13 00:03:39.048942 systemd-logind[1452]: Removed session 2. Sep 13 00:03:39.220273 systemd[1]: Started sshd@2-49.13.17.32:22-147.75.109.163:57640.service - OpenSSH per-connection server daemon (147.75.109.163:57640). Sep 13 00:03:40.206856 sshd[1665]: Accepted publickey for core from 147.75.109.163 port 57640 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM Sep 13 00:03:40.210648 sshd[1665]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:03:40.216937 systemd-logind[1452]: New session 3 of user core. Sep 13 00:03:40.228930 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 13 00:03:40.880659 update_engine[1453]: I20250913 00:03:40.880126 1453 update_attempter.cc:509] Updating boot flags... Sep 13 00:03:40.895707 sshd[1665]: pam_unix(sshd:session): session closed for user core Sep 13 00:03:40.901381 systemd[1]: sshd@2-49.13.17.32:22-147.75.109.163:57640.service: Deactivated successfully. Sep 13 00:03:40.907211 systemd[1]: session-3.scope: Deactivated successfully. Sep 13 00:03:40.910301 systemd-logind[1452]: Session 3 logged out. Waiting for processes to exit. Sep 13 00:03:40.912478 systemd-logind[1452]: Removed session 3. Sep 13 00:03:40.939309 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1680) Sep 13 00:03:41.067065 systemd[1]: Started sshd@3-49.13.17.32:22-147.75.109.163:36672.service - OpenSSH per-connection server daemon (147.75.109.163:36672). Sep 13 00:03:42.046939 sshd[1687]: Accepted publickey for core from 147.75.109.163 port 36672 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM Sep 13 00:03:42.049760 sshd[1687]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:03:42.057192 systemd-logind[1452]: New session 4 of user core. Sep 13 00:03:42.068812 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 13 00:03:42.730903 sshd[1687]: pam_unix(sshd:session): session closed for user core Sep 13 00:03:42.739217 systemd-logind[1452]: Session 4 logged out. Waiting for processes to exit. Sep 13 00:03:42.740343 systemd[1]: sshd@3-49.13.17.32:22-147.75.109.163:36672.service: Deactivated successfully. 
Sep 13 00:03:42.742294 systemd[1]: session-4.scope: Deactivated successfully. Sep 13 00:03:42.744170 systemd-logind[1452]: Removed session 4. Sep 13 00:03:42.913563 systemd[1]: Started sshd@4-49.13.17.32:22-147.75.109.163:36676.service - OpenSSH per-connection server daemon (147.75.109.163:36676). Sep 13 00:03:43.984865 sshd[1694]: Accepted publickey for core from 147.75.109.163 port 36676 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM Sep 13 00:03:43.987396 sshd[1694]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:03:43.993338 systemd-logind[1452]: New session 5 of user core. Sep 13 00:03:44.006039 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 13 00:03:44.556338 sudo[1697]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 13 00:03:44.557343 sudo[1697]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 00:03:44.578601 sudo[1697]: pam_unix(sudo:session): session closed for user root Sep 13 00:03:44.751696 sshd[1694]: pam_unix(sshd:session): session closed for user core Sep 13 00:03:44.759134 systemd[1]: sshd@4-49.13.17.32:22-147.75.109.163:36676.service: Deactivated successfully. Sep 13 00:03:44.763263 systemd[1]: session-5.scope: Deactivated successfully. Sep 13 00:03:44.764674 systemd-logind[1452]: Session 5 logged out. Waiting for processes to exit. Sep 13 00:03:44.766390 systemd-logind[1452]: Removed session 5. Sep 13 00:03:44.929193 systemd[1]: Started sshd@5-49.13.17.32:22-147.75.109.163:36690.service - OpenSSH per-connection server daemon (147.75.109.163:36690). Sep 13 00:03:45.902917 sshd[1702]: Accepted publickey for core from 147.75.109.163 port 36690 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM Sep 13 00:03:45.906065 sshd[1702]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:03:45.913509 systemd-logind[1452]: New session 6 of user core. Sep 13 00:03:45.919962 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 13 00:03:46.430012 sudo[1706]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 13 00:03:46.430382 sudo[1706]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 00:03:46.437353 sudo[1706]: pam_unix(sudo:session): session closed for user root Sep 13 00:03:46.443974 sudo[1705]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Sep 13 00:03:46.444280 sudo[1705]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 00:03:46.464054 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Sep 13 00:03:46.465830 auditctl[1709]: No rules Sep 13 00:03:46.466680 systemd[1]: audit-rules.service: Deactivated successfully. Sep 13 00:03:46.466963 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Sep 13 00:03:46.474597 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 13 00:03:46.506904 augenrules[1727]: No rules Sep 13 00:03:46.508790 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 13 00:03:46.511098 sudo[1705]: pam_unix(sudo:session): session closed for user root Sep 13 00:03:46.670420 sshd[1702]: pam_unix(sshd:session): session closed for user core Sep 13 00:03:46.677380 systemd[1]: sshd@5-49.13.17.32:22-147.75.109.163:36690.service: Deactivated successfully. 
Sep 13 00:03:46.680690 systemd[1]: session-6.scope: Deactivated successfully. Sep 13 00:03:46.682382 systemd-logind[1452]: Session 6 logged out. Waiting for processes to exit. Sep 13 00:03:46.683642 systemd-logind[1452]: Removed session 6. Sep 13 00:03:46.853153 systemd[1]: Started sshd@6-49.13.17.32:22-147.75.109.163:36692.service - OpenSSH per-connection server daemon (147.75.109.163:36692). Sep 13 00:03:47.845063 sshd[1735]: Accepted publickey for core from 147.75.109.163 port 36692 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM Sep 13 00:03:47.846961 sshd[1735]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:03:47.853073 systemd-logind[1452]: New session 7 of user core. Sep 13 00:03:47.866949 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 13 00:03:48.376293 sudo[1738]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 13 00:03:48.376820 sudo[1738]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 00:03:48.689006 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 13 00:03:48.689991 (dockerd)[1753]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 13 00:03:48.875330 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Sep 13 00:03:48.888085 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:03:48.969159 dockerd[1753]: time="2025-09-13T00:03:48.969004388Z" level=info msg="Starting up" Sep 13 00:03:49.059796 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:03:49.062959 (kubelet)[1780]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 13 00:03:49.096336 dockerd[1753]: time="2025-09-13T00:03:49.095647357Z" level=info msg="Loading containers: start." Sep 13 00:03:49.112112 kubelet[1780]: E0913 00:03:49.109786 1780 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:03:49.112705 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:03:49.112833 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 00:03:49.207627 kernel: Initializing XFRM netlink socket Sep 13 00:03:49.293087 systemd-networkd[1361]: docker0: Link UP Sep 13 00:03:49.315003 dockerd[1753]: time="2025-09-13T00:03:49.314912874Z" level=info msg="Loading containers: done." Sep 13 00:03:49.329645 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1849306023-merged.mount: Deactivated successfully. 
Sep 13 00:03:49.335334 dockerd[1753]: time="2025-09-13T00:03:49.335254463Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 13 00:03:49.335517 dockerd[1753]: time="2025-09-13T00:03:49.335393543Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Sep 13 00:03:49.335617 dockerd[1753]: time="2025-09-13T00:03:49.335521702Z" level=info msg="Daemon has completed initialization" Sep 13 00:03:49.381852 dockerd[1753]: time="2025-09-13T00:03:49.381659877Z" level=info msg="API listen on /run/docker.sock" Sep 13 00:03:49.383155 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 13 00:03:50.529363 containerd[1473]: time="2025-09-13T00:03:50.529319527Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\"" Sep 13 00:03:51.186763 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2011116201.mount: Deactivated successfully. Sep 13 00:03:52.223584 containerd[1473]: time="2025-09-13T00:03:52.223458211Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:03:52.225352 containerd[1473]: time="2025-09-13T00:03:52.225300211Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.5: active requests=0, bytes read=27390326" Sep 13 00:03:52.227326 containerd[1473]: time="2025-09-13T00:03:52.225786610Z" level=info msg="ImageCreate event name:\"sha256:6a7fd297b49102b08dc3d8d4fd7f1538bcf21d3131eae8bf62ba26ce3283237f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:03:52.229859 containerd[1473]: time="2025-09-13T00:03:52.229813168Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:03:52.231164 containerd[1473]: time="2025-09-13T00:03:52.231119968Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.5\" with image id \"sha256:6a7fd297b49102b08dc3d8d4fd7f1538bcf21d3131eae8bf62ba26ce3283237f\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\", size \"27386827\" in 1.701753961s" Sep 13 00:03:52.231320 containerd[1473]: time="2025-09-13T00:03:52.231303688Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:6a7fd297b49102b08dc3d8d4fd7f1538bcf21d3131eae8bf62ba26ce3283237f\"" Sep 13 00:03:52.233142 containerd[1473]: time="2025-09-13T00:03:52.233103327Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\"" Sep 13 00:03:53.737483 containerd[1473]: time="2025-09-13T00:03:53.737151000Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:03:53.739326 containerd[1473]: time="2025-09-13T00:03:53.739231479Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.5: active requests=0, bytes read=23547937" Sep 13 00:03:53.741536 containerd[1473]: time="2025-09-13T00:03:53.740503438Z" level=info msg="ImageCreate event name:\"sha256:2dd4c25a937008b7b8a6cdca70d816403b5078b51550926721b7a7762139cd23\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:03:53.745154 containerd[1473]: time="2025-09-13T00:03:53.745110716Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:03:53.746318 containerd[1473]: time="2025-09-13T00:03:53.746270635Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.5\" with image id \"sha256:2dd4c25a937008b7b8a6cdca70d816403b5078b51550926721b7a7762139cd23\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\", size \"25135832\" in 1.513124388s" Sep 13 00:03:53.746318 containerd[1473]: time="2025-09-13T00:03:53.746311395Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:2dd4c25a937008b7b8a6cdca70d816403b5078b51550926721b7a7762139cd23\"" Sep 13 00:03:53.748592 containerd[1473]: time="2025-09-13T00:03:53.748535914Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\"" Sep 13 00:03:54.864593 containerd[1473]: time="2025-09-13T00:03:54.864261922Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:03:54.866266 containerd[1473]: time="2025-09-13T00:03:54.866206601Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.5: active requests=0, bytes read=18295997" Sep 13 00:03:54.867719 containerd[1473]: time="2025-09-13T00:03:54.867652360Z" level=info msg="ImageCreate event name:\"sha256:5e600beaed8620718e0650dd2721266869ce1d737488c004a869333273e6ec15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:03:54.873583 containerd[1473]: time="2025-09-13T00:03:54.872270278Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:03:54.873815 containerd[1473]: time="2025-09-13T00:03:54.873777237Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.5\" with image id \"sha256:5e600beaed8620718e0650dd2721266869ce1d737488c004a869333273e6ec15\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\", size \"19883910\" in 1.125033963s" Sep 13 00:03:54.873918 containerd[1473]: time="2025-09-13T00:03:54.873897717Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:5e600beaed8620718e0650dd2721266869ce1d737488c004a869333273e6ec15\"" Sep 13 00:03:54.875454 containerd[1473]: time="2025-09-13T00:03:54.875421997Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\"" Sep 13 00:03:55.906050 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1234850885.mount: Deactivated successfully. 
Sep 13 00:03:56.288011 containerd[1473]: time="2025-09-13T00:03:56.287829740Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:03:56.289865 containerd[1473]: time="2025-09-13T00:03:56.289474739Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.5: active requests=0, bytes read=28240132" Sep 13 00:03:56.289865 containerd[1473]: time="2025-09-13T00:03:56.289783019Z" level=info msg="ImageCreate event name:\"sha256:021a8d45ab0c346664e47d95595ff5180ce90a22a681ea27904c65ae90788e70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:03:56.293562 containerd[1473]: time="2025-09-13T00:03:56.293305978Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:03:56.294096 containerd[1473]: time="2025-09-13T00:03:56.294053697Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.5\" with image id \"sha256:021a8d45ab0c346664e47d95595ff5180ce90a22a681ea27904c65ae90788e70\", repo tag \"registry.k8s.io/kube-proxy:v1.33.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\", size \"28239125\" in 1.41844682s" Sep 13 00:03:56.294096 containerd[1473]: time="2025-09-13T00:03:56.294094497Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:021a8d45ab0c346664e47d95595ff5180ce90a22a681ea27904c65ae90788e70\"" Sep 13 00:03:56.294938 containerd[1473]: time="2025-09-13T00:03:56.294713937Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Sep 13 00:03:56.859516 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3153014612.mount: Deactivated successfully. 
Sep 13 00:03:57.660920 containerd[1473]: time="2025-09-13T00:03:57.660805210Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:03:57.662944 containerd[1473]: time="2025-09-13T00:03:57.662864929Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152209" Sep 13 00:03:57.664685 containerd[1473]: time="2025-09-13T00:03:57.664632569Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:03:57.669804 containerd[1473]: time="2025-09-13T00:03:57.669727327Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:03:57.671573 containerd[1473]: time="2025-09-13T00:03:57.671154086Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.376401909s" Sep 13 00:03:57.671573 containerd[1473]: time="2025-09-13T00:03:57.671210126Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\"" Sep 13 00:03:57.672882 containerd[1473]: time="2025-09-13T00:03:57.672643325Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 13 00:03:58.282918 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3131558060.mount: Deactivated successfully. 
Sep 13 00:03:58.291626 containerd[1473]: time="2025-09-13T00:03:58.291580597Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:03:58.293849 containerd[1473]: time="2025-09-13T00:03:58.293365357Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268723" Sep 13 00:03:58.295015 containerd[1473]: time="2025-09-13T00:03:58.294973516Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:03:58.298952 containerd[1473]: time="2025-09-13T00:03:58.298854035Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:03:58.300331 containerd[1473]: time="2025-09-13T00:03:58.299679954Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 626.997069ms" Sep 13 00:03:58.300331 containerd[1473]: time="2025-09-13T00:03:58.299719474Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Sep 13 00:03:58.300726 containerd[1473]: time="2025-09-13T00:03:58.300700034Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Sep 13 00:03:58.891483 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount91224145.mount: Deactivated successfully. Sep 13 00:03:59.125269 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Sep 13 00:03:59.131838 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:03:59.275715 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:03:59.285972 (kubelet)[2087]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 13 00:03:59.344194 kubelet[2087]: E0913 00:03:59.343992 2087 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:03:59.347487 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:03:59.347666 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Sep 13 00:04:01.047585 containerd[1473]: time="2025-09-13T00:04:01.045312875Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:04:01.047585 containerd[1473]: time="2025-09-13T00:04:01.046895314Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=69465913" Sep 13 00:04:01.048618 containerd[1473]: time="2025-09-13T00:04:01.048395914Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:04:01.054074 containerd[1473]: time="2025-09-13T00:04:01.054018992Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:04:01.056450 containerd[1473]: time="2025-09-13T00:04:01.055624311Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 2.754821797s" Sep 13 00:04:01.056450 containerd[1473]: time="2025-09-13T00:04:01.055671951Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\"" Sep 13 00:04:05.799116 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:04:05.810709 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:04:05.854596 systemd[1]: Reloading requested from client PID 2134 ('systemctl') (unit session-7.scope)... Sep 13 00:04:05.854615 systemd[1]: Reloading... Sep 13 00:04:05.970130 zram_generator::config[2174]: No configuration found. Sep 13 00:04:06.079665 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:04:06.155599 systemd[1]: Reloading finished in 300 ms. Sep 13 00:04:06.218455 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 13 00:04:06.218810 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 13 00:04:06.219303 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:04:06.233252 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:04:06.358887 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:04:06.359077 (kubelet)[2222]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 13 00:04:06.411070 kubelet[2222]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 00:04:06.411482 kubelet[2222]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Sep 13 00:04:06.411576 kubelet[2222]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 00:04:06.411778 kubelet[2222]: I0913 00:04:06.411736 2222 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 13 00:04:07.242399 kubelet[2222]: I0913 00:04:07.242346 2222 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 13 00:04:07.242619 kubelet[2222]: I0913 00:04:07.242607 2222 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 13 00:04:07.242959 kubelet[2222]: I0913 00:04:07.242937 2222 server.go:956] "Client rotation is on, will bootstrap in background" Sep 13 00:04:07.267360 kubelet[2222]: E0913 00:04:07.267300 2222 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://49.13.17.32:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 49.13.17.32:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Sep 13 00:04:07.271897 kubelet[2222]: I0913 00:04:07.271849 2222 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 13 00:04:07.282318 kubelet[2222]: E0913 00:04:07.282265 2222 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 13 00:04:07.282318 kubelet[2222]: I0913 00:04:07.282325 2222 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 13 00:04:07.285493 kubelet[2222]: I0913 00:04:07.285089 2222 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 13 00:04:07.286675 kubelet[2222]: I0913 00:04:07.286627 2222 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 13 00:04:07.287002 kubelet[2222]: I0913 00:04:07.286795 2222 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-5-n-03d8b9aea3","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 13 00:04:07.287496 kubelet[2222]: I0913 00:04:07.287184 2222 topology_manager.go:138] "Creating topology manager with none policy" Sep 13 00:04:07.287496 kubelet[2222]: I0913 00:04:07.287205 2222 container_manager_linux.go:303] "Creating device plugin manager" Sep 13 00:04:07.287496 kubelet[2222]: I0913 00:04:07.287427 2222 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:04:07.291386 kubelet[2222]: I0913 00:04:07.291348 2222 kubelet.go:480] "Attempting to sync node with API server" Sep 13 00:04:07.291582 kubelet[2222]: I0913 00:04:07.291568 2222 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 13 00:04:07.291674 kubelet[2222]: I0913 00:04:07.291664 2222 kubelet.go:386] "Adding apiserver pod source" Sep 13 00:04:07.293504 kubelet[2222]: I0913 00:04:07.293475 2222 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 13 00:04:07.299343 kubelet[2222]: E0913 00:04:07.299235 2222 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://49.13.17.32:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-5-n-03d8b9aea3&limit=500&resourceVersion=0\": dial tcp 49.13.17.32:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 13 00:04:07.302620 kubelet[2222]: E0913 00:04:07.302565 2222 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://49.13.17.32:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 49.13.17.32:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" 
type="*v1.Service" Sep 13 00:04:07.303449 kubelet[2222]: I0913 00:04:07.302811 2222 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 13 00:04:07.304134 kubelet[2222]: I0913 00:04:07.304081 2222 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 13 00:04:07.304269 kubelet[2222]: W0913 00:04:07.304240 2222 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 13 00:04:07.307672 kubelet[2222]: I0913 00:04:07.307568 2222 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 13 00:04:07.307810 kubelet[2222]: I0913 00:04:07.307685 2222 server.go:1289] "Started kubelet" Sep 13 00:04:07.308447 kubelet[2222]: I0913 00:04:07.307889 2222 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 13 00:04:07.308447 kubelet[2222]: I0913 00:04:07.308090 2222 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 13 00:04:07.309207 kubelet[2222]: I0913 00:04:07.309176 2222 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 13 00:04:07.309606 kubelet[2222]: I0913 00:04:07.309584 2222 server.go:317] "Adding debug handlers to kubelet server" Sep 13 00:04:07.312190 kubelet[2222]: I0913 00:04:07.312021 2222 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 13 00:04:07.318124 kubelet[2222]: E0913 00:04:07.315887 2222 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://49.13.17.32:6443/api/v1/namespaces/default/events\": dial tcp 49.13.17.32:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-5-n-03d8b9aea3.1864aeb512b730f4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-5-n-03d8b9aea3,UID:ci-4081-3-5-n-03d8b9aea3,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-5-n-03d8b9aea3,},FirstTimestamp:2025-09-13 00:04:07.307645172 +0000 UTC m=+0.941130554,LastTimestamp:2025-09-13 00:04:07.307645172 +0000 UTC m=+0.941130554,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-5-n-03d8b9aea3,}" Sep 13 00:04:07.318376 kubelet[2222]: I0913 00:04:07.318152 2222 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 13 00:04:07.326568 kubelet[2222]: I0913 00:04:07.325681 2222 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 13 00:04:07.326693 kubelet[2222]: E0913 00:04:07.326652 2222 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-5-n-03d8b9aea3\" not found" Sep 13 00:04:07.327086 kubelet[2222]: I0913 00:04:07.327026 2222 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 13 00:04:07.327158 kubelet[2222]: I0913 00:04:07.327146 2222 reconciler.go:26] "Reconciler: start to sync state" Sep 13 00:04:07.328616 kubelet[2222]: E0913 00:04:07.327927 2222 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://49.13.17.32:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 
49.13.17.32:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 13 00:04:07.328616 kubelet[2222]: E0913 00:04:07.328014 2222 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://49.13.17.32:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-5-n-03d8b9aea3?timeout=10s\": dial tcp 49.13.17.32:6443: connect: connection refused" interval="200ms" Sep 13 00:04:07.328616 kubelet[2222]: I0913 00:04:07.328261 2222 factory.go:223] Registration of the systemd container factory successfully Sep 13 00:04:07.329833 kubelet[2222]: I0913 00:04:07.329589 2222 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 13 00:04:07.331250 kubelet[2222]: I0913 00:04:07.331221 2222 factory.go:223] Registration of the containerd container factory successfully Sep 13 00:04:07.333683 kubelet[2222]: E0913 00:04:07.333651 2222 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 13 00:04:07.358086 kubelet[2222]: I0913 00:04:07.358024 2222 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 13 00:04:07.360044 kubelet[2222]: I0913 00:04:07.360005 2222 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 13 00:04:07.360044 kubelet[2222]: I0913 00:04:07.360045 2222 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 13 00:04:07.360291 kubelet[2222]: I0913 00:04:07.360073 2222 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 13 00:04:07.360291 kubelet[2222]: I0913 00:04:07.360083 2222 kubelet.go:2436] "Starting kubelet main sync loop" Sep 13 00:04:07.360291 kubelet[2222]: E0913 00:04:07.360132 2222 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 13 00:04:07.362295 kubelet[2222]: E0913 00:04:07.362253 2222 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://49.13.17.32:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 49.13.17.32:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 13 00:04:07.364342 kubelet[2222]: I0913 00:04:07.364041 2222 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 13 00:04:07.364342 kubelet[2222]: I0913 00:04:07.364064 2222 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 13 00:04:07.364342 kubelet[2222]: I0913 00:04:07.364093 2222 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:04:07.367174 kubelet[2222]: I0913 00:04:07.367145 2222 policy_none.go:49] "None policy: Start" Sep 13 00:04:07.367338 kubelet[2222]: I0913 00:04:07.367325 2222 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 13 00:04:07.367412 kubelet[2222]: I0913 00:04:07.367403 2222 state_mem.go:35] "Initializing new in-memory state store" Sep 13 00:04:07.374823 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 13 00:04:07.385558 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Sep 13 00:04:07.398029 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 13 00:04:07.400258 kubelet[2222]: E0913 00:04:07.400235 2222 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 13 00:04:07.400453 kubelet[2222]: I0913 00:04:07.400440 2222 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 13 00:04:07.400495 kubelet[2222]: I0913 00:04:07.400455 2222 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 13 00:04:07.401034 kubelet[2222]: I0913 00:04:07.401019 2222 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 13 00:04:07.403408 kubelet[2222]: E0913 00:04:07.403369 2222 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 13 00:04:07.403408 kubelet[2222]: E0913 00:04:07.403410 2222 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-5-n-03d8b9aea3\" not found" Sep 13 00:04:07.478432 systemd[1]: Created slice kubepods-burstable-pod1b7de3a2d3a76ba4f8e0ed41950c57c0.slice - libcontainer container kubepods-burstable-pod1b7de3a2d3a76ba4f8e0ed41950c57c0.slice. Sep 13 00:04:07.495436 kubelet[2222]: E0913 00:04:07.495111 2222 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-5-n-03d8b9aea3\" not found" node="ci-4081-3-5-n-03d8b9aea3" Sep 13 00:04:07.500063 systemd[1]: Created slice kubepods-burstable-pod3a70b6b03c3d6b569a82775d1f713534.slice - libcontainer container kubepods-burstable-pod3a70b6b03c3d6b569a82775d1f713534.slice. Sep 13 00:04:07.504946 kubelet[2222]: I0913 00:04:07.504352 2222 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-5-n-03d8b9aea3" Sep 13 00:04:07.504946 kubelet[2222]: E0913 00:04:07.504505 2222 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-5-n-03d8b9aea3\" not found" node="ci-4081-3-5-n-03d8b9aea3" Sep 13 00:04:07.504946 kubelet[2222]: E0913 00:04:07.504878 2222 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://49.13.17.32:6443/api/v1/nodes\": dial tcp 49.13.17.32:6443: connect: connection refused" node="ci-4081-3-5-n-03d8b9aea3" Sep 13 00:04:07.508449 systemd[1]: Created slice kubepods-burstable-podd037bb68a70a4e1cf67c4afc8fdc14ed.slice - libcontainer container kubepods-burstable-podd037bb68a70a4e1cf67c4afc8fdc14ed.slice. 
Sep 13 00:04:07.510870 kubelet[2222]: E0913 00:04:07.510831 2222 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-5-n-03d8b9aea3\" not found" node="ci-4081-3-5-n-03d8b9aea3" Sep 13 00:04:07.528022 kubelet[2222]: I0913 00:04:07.527955 2222 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3a70b6b03c3d6b569a82775d1f713534-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-5-n-03d8b9aea3\" (UID: \"3a70b6b03c3d6b569a82775d1f713534\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-03d8b9aea3" Sep 13 00:04:07.528022 kubelet[2222]: I0913 00:04:07.528136 2222 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3a70b6b03c3d6b569a82775d1f713534-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-5-n-03d8b9aea3\" (UID: \"3a70b6b03c3d6b569a82775d1f713534\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-03d8b9aea3" Sep 13 00:04:07.528022 kubelet[2222]: I0913 00:04:07.528227 2222 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1b7de3a2d3a76ba4f8e0ed41950c57c0-ca-certs\") pod \"kube-apiserver-ci-4081-3-5-n-03d8b9aea3\" (UID: \"1b7de3a2d3a76ba4f8e0ed41950c57c0\") " pod="kube-system/kube-apiserver-ci-4081-3-5-n-03d8b9aea3" Sep 13 00:04:07.528022 kubelet[2222]: I0913 00:04:07.528276 2222 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1b7de3a2d3a76ba4f8e0ed41950c57c0-k8s-certs\") pod \"kube-apiserver-ci-4081-3-5-n-03d8b9aea3\" (UID: \"1b7de3a2d3a76ba4f8e0ed41950c57c0\") " pod="kube-system/kube-apiserver-ci-4081-3-5-n-03d8b9aea3" Sep 13 00:04:07.528022 kubelet[2222]: I0913 00:04:07.528348 2222 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3a70b6b03c3d6b569a82775d1f713534-ca-certs\") pod \"kube-controller-manager-ci-4081-3-5-n-03d8b9aea3\" (UID: \"3a70b6b03c3d6b569a82775d1f713534\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-03d8b9aea3" Sep 13 00:04:07.529035 kubelet[2222]: I0913 00:04:07.528394 2222 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3a70b6b03c3d6b569a82775d1f713534-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-5-n-03d8b9aea3\" (UID: \"3a70b6b03c3d6b569a82775d1f713534\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-03d8b9aea3" Sep 13 00:04:07.529035 kubelet[2222]: I0913 00:04:07.528455 2222 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3a70b6b03c3d6b569a82775d1f713534-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-5-n-03d8b9aea3\" (UID: \"3a70b6b03c3d6b569a82775d1f713534\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-03d8b9aea3" Sep 13 00:04:07.529035 kubelet[2222]: I0913 00:04:07.528496 2222 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d037bb68a70a4e1cf67c4afc8fdc14ed-kubeconfig\") pod 
\"kube-scheduler-ci-4081-3-5-n-03d8b9aea3\" (UID: \"d037bb68a70a4e1cf67c4afc8fdc14ed\") " pod="kube-system/kube-scheduler-ci-4081-3-5-n-03d8b9aea3" Sep 13 00:04:07.529035 kubelet[2222]: I0913 00:04:07.528642 2222 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1b7de3a2d3a76ba4f8e0ed41950c57c0-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-5-n-03d8b9aea3\" (UID: \"1b7de3a2d3a76ba4f8e0ed41950c57c0\") " pod="kube-system/kube-apiserver-ci-4081-3-5-n-03d8b9aea3" Sep 13 00:04:07.529035 kubelet[2222]: E0913 00:04:07.528934 2222 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://49.13.17.32:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-5-n-03d8b9aea3?timeout=10s\": dial tcp 49.13.17.32:6443: connect: connection refused" interval="400ms" Sep 13 00:04:07.707990 kubelet[2222]: I0913 00:04:07.707832 2222 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-5-n-03d8b9aea3" Sep 13 00:04:07.708586 kubelet[2222]: E0913 00:04:07.708442 2222 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://49.13.17.32:6443/api/v1/nodes\": dial tcp 49.13.17.32:6443: connect: connection refused" node="ci-4081-3-5-n-03d8b9aea3" Sep 13 00:04:07.798732 containerd[1473]: time="2025-09-13T00:04:07.798576264Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-5-n-03d8b9aea3,Uid:1b7de3a2d3a76ba4f8e0ed41950c57c0,Namespace:kube-system,Attempt:0,}" Sep 13 00:04:07.806646 containerd[1473]: time="2025-09-13T00:04:07.806510062Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-5-n-03d8b9aea3,Uid:3a70b6b03c3d6b569a82775d1f713534,Namespace:kube-system,Attempt:0,}" Sep 13 00:04:07.812508 containerd[1473]: time="2025-09-13T00:04:07.812459300Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-5-n-03d8b9aea3,Uid:d037bb68a70a4e1cf67c4afc8fdc14ed,Namespace:kube-system,Attempt:0,}" Sep 13 00:04:07.930405 kubelet[2222]: E0913 00:04:07.930322 2222 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://49.13.17.32:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-5-n-03d8b9aea3?timeout=10s\": dial tcp 49.13.17.32:6443: connect: connection refused" interval="800ms" Sep 13 00:04:08.113287 kubelet[2222]: I0913 00:04:08.112467 2222 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-5-n-03d8b9aea3" Sep 13 00:04:08.113287 kubelet[2222]: E0913 00:04:08.112977 2222 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://49.13.17.32:6443/api/v1/nodes\": dial tcp 49.13.17.32:6443: connect: connection refused" node="ci-4081-3-5-n-03d8b9aea3" Sep 13 00:04:08.264294 kubelet[2222]: E0913 00:04:08.264214 2222 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://49.13.17.32:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 49.13.17.32:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 13 00:04:08.355973 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2166746849.mount: Deactivated successfully. 
Sep 13 00:04:08.367420 containerd[1473]: time="2025-09-13T00:04:08.366361296Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 00:04:08.369116 containerd[1473]: time="2025-09-13T00:04:08.368172816Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 00:04:08.369488 containerd[1473]: time="2025-09-13T00:04:08.369247375Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269193" Sep 13 00:04:08.370773 containerd[1473]: time="2025-09-13T00:04:08.370669495Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 13 00:04:08.372061 containerd[1473]: time="2025-09-13T00:04:08.371999935Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 00:04:08.373448 containerd[1473]: time="2025-09-13T00:04:08.373363854Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 00:04:08.375141 containerd[1473]: time="2025-09-13T00:04:08.374902654Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 13 00:04:08.377096 containerd[1473]: time="2025-09-13T00:04:08.377001453Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 00:04:08.379423 containerd[1473]: time="2025-09-13T00:04:08.379365253Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 572.713191ms" Sep 13 00:04:08.381583 containerd[1473]: time="2025-09-13T00:04:08.381512212Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 582.844988ms" Sep 13 00:04:08.385464 containerd[1473]: time="2025-09-13T00:04:08.385411451Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 572.424471ms" Sep 13 00:04:08.508590 containerd[1473]: time="2025-09-13T00:04:08.508277055Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:04:08.508892 containerd[1473]: time="2025-09-13T00:04:08.508519295Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:04:08.509147 containerd[1473]: time="2025-09-13T00:04:08.509045575Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:04:08.509454 containerd[1473]: time="2025-09-13T00:04:08.509339494Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:04:08.516993 containerd[1473]: time="2025-09-13T00:04:08.516666012Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:04:08.517443 containerd[1473]: time="2025-09-13T00:04:08.516871652Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:04:08.517443 containerd[1473]: time="2025-09-13T00:04:08.517337052Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:04:08.518591 containerd[1473]: time="2025-09-13T00:04:08.518364692Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:04:08.520112 containerd[1473]: time="2025-09-13T00:04:08.519992531Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:04:08.520112 containerd[1473]: time="2025-09-13T00:04:08.520068771Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:04:08.520112 containerd[1473]: time="2025-09-13T00:04:08.520080451Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:04:08.524243 containerd[1473]: time="2025-09-13T00:04:08.520167331Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:04:08.539808 systemd[1]: Started cri-containerd-97d665b9c770bf1d0b70a3d41c4d32dedb10e6cb870e25f6853cbf01d747479c.scope - libcontainer container 97d665b9c770bf1d0b70a3d41c4d32dedb10e6cb870e25f6853cbf01d747479c. Sep 13 00:04:08.553815 systemd[1]: Started cri-containerd-5fa9badd06105b9070b32e017db173a571e9050b04c18f2ed869bbbfe3cad1f2.scope - libcontainer container 5fa9badd06105b9070b32e017db173a571e9050b04c18f2ed869bbbfe3cad1f2. Sep 13 00:04:08.558371 systemd[1]: Started cri-containerd-6e602a6a3dba45e6ffc9fa651593d23e574e07658bf57f10392047a9ead2fa6e.scope - libcontainer container 6e602a6a3dba45e6ffc9fa651593d23e574e07658bf57f10392047a9ead2fa6e. 
Sep 13 00:04:08.564860 kubelet[2222]: E0913 00:04:08.564333 2222 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://49.13.17.32:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-5-n-03d8b9aea3&limit=500&resourceVersion=0\": dial tcp 49.13.17.32:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 13 00:04:08.629075 containerd[1473]: time="2025-09-13T00:04:08.629031379Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-5-n-03d8b9aea3,Uid:1b7de3a2d3a76ba4f8e0ed41950c57c0,Namespace:kube-system,Attempt:0,} returns sandbox id \"97d665b9c770bf1d0b70a3d41c4d32dedb10e6cb870e25f6853cbf01d747479c\"" Sep 13 00:04:08.634079 containerd[1473]: time="2025-09-13T00:04:08.633829938Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-5-n-03d8b9aea3,Uid:d037bb68a70a4e1cf67c4afc8fdc14ed,Namespace:kube-system,Attempt:0,} returns sandbox id \"6e602a6a3dba45e6ffc9fa651593d23e574e07658bf57f10392047a9ead2fa6e\"" Sep 13 00:04:08.644592 containerd[1473]: time="2025-09-13T00:04:08.644402895Z" level=info msg="CreateContainer within sandbox \"6e602a6a3dba45e6ffc9fa651593d23e574e07658bf57f10392047a9ead2fa6e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 13 00:04:08.645778 containerd[1473]: time="2025-09-13T00:04:08.645356775Z" level=info msg="CreateContainer within sandbox \"97d665b9c770bf1d0b70a3d41c4d32dedb10e6cb870e25f6853cbf01d747479c\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 13 00:04:08.646737 containerd[1473]: time="2025-09-13T00:04:08.646694734Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-5-n-03d8b9aea3,Uid:3a70b6b03c3d6b569a82775d1f713534,Namespace:kube-system,Attempt:0,} returns sandbox id \"5fa9badd06105b9070b32e017db173a571e9050b04c18f2ed869bbbfe3cad1f2\"" Sep 13 00:04:08.654931 kubelet[2222]: E0913 00:04:08.654896 2222 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://49.13.17.32:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 49.13.17.32:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 13 00:04:08.655423 containerd[1473]: time="2025-09-13T00:04:08.655380332Z" level=info msg="CreateContainer within sandbox \"5fa9badd06105b9070b32e017db173a571e9050b04c18f2ed869bbbfe3cad1f2\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 13 00:04:08.669879 containerd[1473]: time="2025-09-13T00:04:08.669820927Z" level=info msg="CreateContainer within sandbox \"97d665b9c770bf1d0b70a3d41c4d32dedb10e6cb870e25f6853cbf01d747479c\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"defa8673a69040b9879c5f61c30d5e3737b7ca463a60b4062a678bdf2856fdd6\"" Sep 13 00:04:08.671061 containerd[1473]: time="2025-09-13T00:04:08.670600047Z" level=info msg="CreateContainer within sandbox \"6e602a6a3dba45e6ffc9fa651593d23e574e07658bf57f10392047a9ead2fa6e\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"f8b89545adc273339ee9e6f63f5f9b4cc3b41a6582691ff32c700dfc05f67427\"" Sep 13 00:04:08.671536 containerd[1473]: time="2025-09-13T00:04:08.671497327Z" level=info msg="StartContainer for \"defa8673a69040b9879c5f61c30d5e3737b7ca463a60b4062a678bdf2856fdd6\"" Sep 13 00:04:08.672172 containerd[1473]: 
time="2025-09-13T00:04:08.672141407Z" level=info msg="StartContainer for \"f8b89545adc273339ee9e6f63f5f9b4cc3b41a6582691ff32c700dfc05f67427\"" Sep 13 00:04:08.679835 containerd[1473]: time="2025-09-13T00:04:08.679773044Z" level=info msg="CreateContainer within sandbox \"5fa9badd06105b9070b32e017db173a571e9050b04c18f2ed869bbbfe3cad1f2\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"1ed07ead55cc9431b17b83a4437e68cb373a9885225f970421a8c6c4d43fd95d\"" Sep 13 00:04:08.680738 containerd[1473]: time="2025-09-13T00:04:08.680660124Z" level=info msg="StartContainer for \"1ed07ead55cc9431b17b83a4437e68cb373a9885225f970421a8c6c4d43fd95d\"" Sep 13 00:04:08.708383 systemd[1]: Started cri-containerd-f8b89545adc273339ee9e6f63f5f9b4cc3b41a6582691ff32c700dfc05f67427.scope - libcontainer container f8b89545adc273339ee9e6f63f5f9b4cc3b41a6582691ff32c700dfc05f67427. Sep 13 00:04:08.731851 kubelet[2222]: E0913 00:04:08.731166 2222 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://49.13.17.32:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-5-n-03d8b9aea3?timeout=10s\": dial tcp 49.13.17.32:6443: connect: connection refused" interval="1.6s" Sep 13 00:04:08.736819 systemd[1]: Started cri-containerd-defa8673a69040b9879c5f61c30d5e3737b7ca463a60b4062a678bdf2856fdd6.scope - libcontainer container defa8673a69040b9879c5f61c30d5e3737b7ca463a60b4062a678bdf2856fdd6. Sep 13 00:04:08.745085 systemd[1]: Started cri-containerd-1ed07ead55cc9431b17b83a4437e68cb373a9885225f970421a8c6c4d43fd95d.scope - libcontainer container 1ed07ead55cc9431b17b83a4437e68cb373a9885225f970421a8c6c4d43fd95d. Sep 13 00:04:08.786313 kubelet[2222]: E0913 00:04:08.786031 2222 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://49.13.17.32:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 49.13.17.32:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 13 00:04:08.799805 containerd[1473]: time="2025-09-13T00:04:08.799752289Z" level=info msg="StartContainer for \"f8b89545adc273339ee9e6f63f5f9b4cc3b41a6582691ff32c700dfc05f67427\" returns successfully" Sep 13 00:04:08.805999 containerd[1473]: time="2025-09-13T00:04:08.805425488Z" level=info msg="StartContainer for \"defa8673a69040b9879c5f61c30d5e3737b7ca463a60b4062a678bdf2856fdd6\" returns successfully" Sep 13 00:04:08.831379 containerd[1473]: time="2025-09-13T00:04:08.831331800Z" level=info msg="StartContainer for \"1ed07ead55cc9431b17b83a4437e68cb373a9885225f970421a8c6c4d43fd95d\" returns successfully" Sep 13 00:04:08.915949 kubelet[2222]: I0913 00:04:08.915547 2222 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-5-n-03d8b9aea3" Sep 13 00:04:08.915949 kubelet[2222]: E0913 00:04:08.915930 2222 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://49.13.17.32:6443/api/v1/nodes\": dial tcp 49.13.17.32:6443: connect: connection refused" node="ci-4081-3-5-n-03d8b9aea3" Sep 13 00:04:09.371306 kubelet[2222]: E0913 00:04:09.371192 2222 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-5-n-03d8b9aea3\" not found" node="ci-4081-3-5-n-03d8b9aea3" Sep 13 00:04:09.377135 kubelet[2222]: E0913 00:04:09.377047 2222 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node 
\"ci-4081-3-5-n-03d8b9aea3\" not found" node="ci-4081-3-5-n-03d8b9aea3" Sep 13 00:04:09.381504 kubelet[2222]: E0913 00:04:09.381457 2222 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-5-n-03d8b9aea3\" not found" node="ci-4081-3-5-n-03d8b9aea3" Sep 13 00:04:10.386287 kubelet[2222]: E0913 00:04:10.386208 2222 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-5-n-03d8b9aea3\" not found" node="ci-4081-3-5-n-03d8b9aea3" Sep 13 00:04:10.387148 kubelet[2222]: E0913 00:04:10.386871 2222 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-5-n-03d8b9aea3\" not found" node="ci-4081-3-5-n-03d8b9aea3" Sep 13 00:04:10.518877 kubelet[2222]: I0913 00:04:10.518817 2222 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-5-n-03d8b9aea3" Sep 13 00:04:11.295105 kubelet[2222]: E0913 00:04:11.295040 2222 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-3-5-n-03d8b9aea3\" not found" node="ci-4081-3-5-n-03d8b9aea3" Sep 13 00:04:11.300050 kubelet[2222]: E0913 00:04:11.299789 2222 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4081-3-5-n-03d8b9aea3.1864aeb512b730f4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-5-n-03d8b9aea3,UID:ci-4081-3-5-n-03d8b9aea3,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-5-n-03d8b9aea3,},FirstTimestamp:2025-09-13 00:04:07.307645172 +0000 UTC m=+0.941130554,LastTimestamp:2025-09-13 00:04:07.307645172 +0000 UTC m=+0.941130554,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-5-n-03d8b9aea3,}" Sep 13 00:04:11.302457 kubelet[2222]: I0913 00:04:11.302204 2222 apiserver.go:52] "Watching apiserver" Sep 13 00:04:11.327797 kubelet[2222]: I0913 00:04:11.327738 2222 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 13 00:04:11.369236 kubelet[2222]: I0913 00:04:11.369107 2222 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-5-n-03d8b9aea3" Sep 13 00:04:11.425939 kubelet[2222]: I0913 00:04:11.425733 2222 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-5-n-03d8b9aea3" Sep 13 00:04:11.442981 kubelet[2222]: E0913 00:04:11.442701 2222 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-5-n-03d8b9aea3\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081-3-5-n-03d8b9aea3" Sep 13 00:04:11.442981 kubelet[2222]: I0913 00:04:11.442778 2222 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-5-n-03d8b9aea3" Sep 13 00:04:11.453343 kubelet[2222]: E0913 00:04:11.453231 2222 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081-3-5-n-03d8b9aea3\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081-3-5-n-03d8b9aea3" Sep 13 00:04:11.453343 kubelet[2222]: I0913 00:04:11.453282 2222 kubelet.go:3309] "Creating a mirror pod 
for static pod" pod="kube-system/kube-scheduler-ci-4081-3-5-n-03d8b9aea3" Sep 13 00:04:11.460490 kubelet[2222]: E0913 00:04:11.460426 2222 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-5-n-03d8b9aea3\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081-3-5-n-03d8b9aea3" Sep 13 00:04:12.632657 kubelet[2222]: I0913 00:04:12.631578 2222 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-5-n-03d8b9aea3" Sep 13 00:04:13.336021 systemd[1]: Reloading requested from client PID 2503 ('systemctl') (unit session-7.scope)... Sep 13 00:04:13.336037 systemd[1]: Reloading... Sep 13 00:04:13.434570 zram_generator::config[2543]: No configuration found. Sep 13 00:04:13.556722 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:04:13.650094 systemd[1]: Reloading finished in 313 ms. Sep 13 00:04:13.704823 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:04:13.723410 systemd[1]: kubelet.service: Deactivated successfully. Sep 13 00:04:13.723838 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:04:13.723912 systemd[1]: kubelet.service: Consumed 1.396s CPU time, 131.6M memory peak, 0B memory swap peak. Sep 13 00:04:13.731913 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:04:13.883989 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:04:13.896189 (kubelet)[2588]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 13 00:04:13.946923 kubelet[2588]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 00:04:13.947418 kubelet[2588]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 13 00:04:13.947460 kubelet[2588]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 13 00:04:13.947675 kubelet[2588]: I0913 00:04:13.947633 2588 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 13 00:04:13.957215 kubelet[2588]: I0913 00:04:13.957148 2588 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 13 00:04:13.957215 kubelet[2588]: I0913 00:04:13.957192 2588 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 13 00:04:13.959919 kubelet[2588]: I0913 00:04:13.959867 2588 server.go:956] "Client rotation is on, will bootstrap in background" Sep 13 00:04:13.962534 kubelet[2588]: I0913 00:04:13.962503 2588 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Sep 13 00:04:13.966058 kubelet[2588]: I0913 00:04:13.966027 2588 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 13 00:04:13.975639 kubelet[2588]: E0913 00:04:13.975361 2588 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 13 00:04:13.976636 kubelet[2588]: I0913 00:04:13.975816 2588 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 13 00:04:13.978599 kubelet[2588]: I0913 00:04:13.978506 2588 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 13 00:04:13.978889 kubelet[2588]: I0913 00:04:13.978813 2588 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 13 00:04:13.979026 kubelet[2588]: I0913 00:04:13.978853 2588 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-5-n-03d8b9aea3","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 13 00:04:13.979193 kubelet[2588]: I0913 00:04:13.979040 2588 topology_manager.go:138] 
"Creating topology manager with none policy" Sep 13 00:04:13.979193 kubelet[2588]: I0913 00:04:13.979050 2588 container_manager_linux.go:303] "Creating device plugin manager" Sep 13 00:04:13.979193 kubelet[2588]: I0913 00:04:13.979098 2588 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:04:13.979406 kubelet[2588]: I0913 00:04:13.979272 2588 kubelet.go:480] "Attempting to sync node with API server" Sep 13 00:04:13.979406 kubelet[2588]: I0913 00:04:13.979285 2588 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 13 00:04:13.980084 kubelet[2588]: I0913 00:04:13.980035 2588 kubelet.go:386] "Adding apiserver pod source" Sep 13 00:04:13.980084 kubelet[2588]: I0913 00:04:13.980063 2588 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 13 00:04:13.988278 kubelet[2588]: I0913 00:04:13.988244 2588 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 13 00:04:13.988966 kubelet[2588]: I0913 00:04:13.988942 2588 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 13 00:04:13.998001 kubelet[2588]: I0913 00:04:13.997879 2588 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 13 00:04:13.998001 kubelet[2588]: I0913 00:04:13.997942 2588 server.go:1289] "Started kubelet" Sep 13 00:04:14.002617 kubelet[2588]: I0913 00:04:14.002457 2588 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 13 00:04:14.012761 kubelet[2588]: I0913 00:04:14.012697 2588 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 13 00:04:14.022646 kubelet[2588]: I0913 00:04:14.022438 2588 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 13 00:04:14.023209 kubelet[2588]: I0913 00:04:14.023034 2588 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 13 00:04:14.034331 kubelet[2588]: I0913 00:04:14.034278 2588 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 13 00:04:14.037368 kubelet[2588]: I0913 00:04:14.035731 2588 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 13 00:04:14.038384 kubelet[2588]: I0913 00:04:14.038316 2588 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 13 00:04:14.038610 kubelet[2588]: I0913 00:04:14.038565 2588 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 13 00:04:14.039652 kubelet[2588]: I0913 00:04:14.039626 2588 reconciler.go:26] "Reconciler: start to sync state" Sep 13 00:04:14.039961 kubelet[2588]: I0913 00:04:14.039925 2588 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 13 00:04:14.039961 kubelet[2588]: I0913 00:04:14.039955 2588 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 13 00:04:14.040046 kubelet[2588]: I0913 00:04:14.039982 2588 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Sep 13 00:04:14.040046 kubelet[2588]: I0913 00:04:14.039990 2588 kubelet.go:2436] "Starting kubelet main sync loop" Sep 13 00:04:14.040046 kubelet[2588]: E0913 00:04:14.040034 2588 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 13 00:04:14.045772 kubelet[2588]: I0913 00:04:14.045715 2588 server.go:317] "Adding debug handlers to kubelet server" Sep 13 00:04:14.050865 kubelet[2588]: I0913 00:04:14.050818 2588 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 13 00:04:14.056515 kubelet[2588]: I0913 00:04:14.056484 2588 factory.go:223] Registration of the containerd container factory successfully Sep 13 00:04:14.056853 kubelet[2588]: I0913 00:04:14.056767 2588 factory.go:223] Registration of the systemd container factory successfully Sep 13 00:04:14.058777 kubelet[2588]: E0913 00:04:14.057335 2588 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 13 00:04:14.114574 kubelet[2588]: I0913 00:04:14.114276 2588 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 13 00:04:14.114574 kubelet[2588]: I0913 00:04:14.114483 2588 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 13 00:04:14.114574 kubelet[2588]: I0913 00:04:14.114511 2588 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:04:14.115078 kubelet[2588]: I0913 00:04:14.114820 2588 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 13 00:04:14.115078 kubelet[2588]: I0913 00:04:14.114841 2588 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 13 00:04:14.115078 kubelet[2588]: I0913 00:04:14.114864 2588 policy_none.go:49] "None policy: Start" Sep 13 00:04:14.115078 kubelet[2588]: I0913 00:04:14.114894 2588 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 13 00:04:14.115078 kubelet[2588]: I0913 00:04:14.114906 2588 state_mem.go:35] "Initializing new in-memory state store" Sep 13 00:04:14.115078 kubelet[2588]: I0913 00:04:14.115057 2588 state_mem.go:75] "Updated machine memory state" Sep 13 00:04:14.126781 kubelet[2588]: E0913 00:04:14.125139 2588 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 13 00:04:14.126781 kubelet[2588]: I0913 00:04:14.125349 2588 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 13 00:04:14.126781 kubelet[2588]: I0913 00:04:14.125360 2588 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 13 00:04:14.126781 kubelet[2588]: I0913 00:04:14.125632 2588 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 13 00:04:14.134373 kubelet[2588]: E0913 00:04:14.134337 2588 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 13 00:04:14.140905 kubelet[2588]: I0913 00:04:14.140851 2588 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-5-n-03d8b9aea3" Sep 13 00:04:14.142225 kubelet[2588]: I0913 00:04:14.141341 2588 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-5-n-03d8b9aea3" Sep 13 00:04:14.145154 kubelet[2588]: I0913 00:04:14.142656 2588 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-5-n-03d8b9aea3" Sep 13 00:04:14.156943 kubelet[2588]: E0913 00:04:14.156910 2588 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-5-n-03d8b9aea3\" already exists" pod="kube-system/kube-scheduler-ci-4081-3-5-n-03d8b9aea3" Sep 13 00:04:14.242905 kubelet[2588]: I0913 00:04:14.241301 2588 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-5-n-03d8b9aea3" Sep 13 00:04:14.254413 kubelet[2588]: I0913 00:04:14.254375 2588 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081-3-5-n-03d8b9aea3" Sep 13 00:04:14.254574 kubelet[2588]: I0913 00:04:14.254470 2588 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-5-n-03d8b9aea3" Sep 13 00:04:14.334304 sudo[2625]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 13 00:04:14.334807 sudo[2625]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Sep 13 00:04:14.341181 kubelet[2588]: I0913 00:04:14.341136 2588 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1b7de3a2d3a76ba4f8e0ed41950c57c0-ca-certs\") pod \"kube-apiserver-ci-4081-3-5-n-03d8b9aea3\" (UID: \"1b7de3a2d3a76ba4f8e0ed41950c57c0\") " pod="kube-system/kube-apiserver-ci-4081-3-5-n-03d8b9aea3" Sep 13 00:04:14.341181 kubelet[2588]: I0913 00:04:14.341179 2588 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1b7de3a2d3a76ba4f8e0ed41950c57c0-k8s-certs\") pod \"kube-apiserver-ci-4081-3-5-n-03d8b9aea3\" (UID: \"1b7de3a2d3a76ba4f8e0ed41950c57c0\") " pod="kube-system/kube-apiserver-ci-4081-3-5-n-03d8b9aea3" Sep 13 00:04:14.341350 kubelet[2588]: I0913 00:04:14.341198 2588 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3a70b6b03c3d6b569a82775d1f713534-ca-certs\") pod \"kube-controller-manager-ci-4081-3-5-n-03d8b9aea3\" (UID: \"3a70b6b03c3d6b569a82775d1f713534\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-03d8b9aea3" Sep 13 00:04:14.341350 kubelet[2588]: I0913 00:04:14.341219 2588 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3a70b6b03c3d6b569a82775d1f713534-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-5-n-03d8b9aea3\" (UID: \"3a70b6b03c3d6b569a82775d1f713534\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-03d8b9aea3" Sep 13 00:04:14.341350 kubelet[2588]: I0913 00:04:14.341241 2588 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d037bb68a70a4e1cf67c4afc8fdc14ed-kubeconfig\") pod 
\"kube-scheduler-ci-4081-3-5-n-03d8b9aea3\" (UID: \"d037bb68a70a4e1cf67c4afc8fdc14ed\") " pod="kube-system/kube-scheduler-ci-4081-3-5-n-03d8b9aea3" Sep 13 00:04:14.341350 kubelet[2588]: I0913 00:04:14.341258 2588 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1b7de3a2d3a76ba4f8e0ed41950c57c0-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-5-n-03d8b9aea3\" (UID: \"1b7de3a2d3a76ba4f8e0ed41950c57c0\") " pod="kube-system/kube-apiserver-ci-4081-3-5-n-03d8b9aea3" Sep 13 00:04:14.341350 kubelet[2588]: I0913 00:04:14.341295 2588 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3a70b6b03c3d6b569a82775d1f713534-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-5-n-03d8b9aea3\" (UID: \"3a70b6b03c3d6b569a82775d1f713534\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-03d8b9aea3" Sep 13 00:04:14.341485 kubelet[2588]: I0913 00:04:14.341332 2588 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3a70b6b03c3d6b569a82775d1f713534-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-5-n-03d8b9aea3\" (UID: \"3a70b6b03c3d6b569a82775d1f713534\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-03d8b9aea3" Sep 13 00:04:14.341485 kubelet[2588]: I0913 00:04:14.341350 2588 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3a70b6b03c3d6b569a82775d1f713534-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-5-n-03d8b9aea3\" (UID: \"3a70b6b03c3d6b569a82775d1f713534\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-03d8b9aea3" Sep 13 00:04:14.846905 sudo[2625]: pam_unix(sudo:session): session closed for user root Sep 13 00:04:14.982848 kubelet[2588]: I0913 00:04:14.982491 2588 apiserver.go:52] "Watching apiserver" Sep 13 00:04:15.039895 kubelet[2588]: I0913 00:04:15.039769 2588 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 13 00:04:15.080245 kubelet[2588]: I0913 00:04:15.080199 2588 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-5-n-03d8b9aea3" Sep 13 00:04:15.081437 kubelet[2588]: I0913 00:04:15.080854 2588 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-5-n-03d8b9aea3" Sep 13 00:04:15.095574 kubelet[2588]: E0913 00:04:15.094631 2588 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-5-n-03d8b9aea3\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-5-n-03d8b9aea3" Sep 13 00:04:15.098587 kubelet[2588]: E0913 00:04:15.098448 2588 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-5-n-03d8b9aea3\" already exists" pod="kube-system/kube-scheduler-ci-4081-3-5-n-03d8b9aea3" Sep 13 00:04:15.128087 kubelet[2588]: I0913 00:04:15.126254 2588 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-5-n-03d8b9aea3" podStartSLOduration=1.126233211 podStartE2EDuration="1.126233211s" podCreationTimestamp="2025-09-13 00:04:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:04:15.111840775 +0000 
UTC m=+1.211031169" watchObservedRunningTime="2025-09-13 00:04:15.126233211 +0000 UTC m=+1.225423605" Sep 13 00:04:15.142263 kubelet[2588]: I0913 00:04:15.142139 2588 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-5-n-03d8b9aea3" podStartSLOduration=3.142122167 podStartE2EDuration="3.142122167s" podCreationTimestamp="2025-09-13 00:04:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:04:15.126668211 +0000 UTC m=+1.225858605" watchObservedRunningTime="2025-09-13 00:04:15.142122167 +0000 UTC m=+1.241312561" Sep 13 00:04:17.277617 sudo[1738]: pam_unix(sudo:session): session closed for user root Sep 13 00:04:17.439932 sshd[1735]: pam_unix(sshd:session): session closed for user core Sep 13 00:04:17.446932 systemd-logind[1452]: Session 7 logged out. Waiting for processes to exit. Sep 13 00:04:17.447489 systemd[1]: sshd@6-49.13.17.32:22-147.75.109.163:36692.service: Deactivated successfully. Sep 13 00:04:17.450984 systemd[1]: session-7.scope: Deactivated successfully. Sep 13 00:04:17.451502 systemd[1]: session-7.scope: Consumed 7.270s CPU time, 150.9M memory peak, 0B memory swap peak. Sep 13 00:04:17.452467 systemd-logind[1452]: Removed session 7. Sep 13 00:04:19.007686 kubelet[2588]: I0913 00:04:19.007138 2588 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 13 00:04:19.008492 containerd[1473]: time="2025-09-13T00:04:19.007584055Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 13 00:04:19.010082 kubelet[2588]: I0913 00:04:19.009695 2588 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 13 00:04:20.040244 kubelet[2588]: I0913 00:04:20.040170 2588 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-5-n-03d8b9aea3" podStartSLOduration=6.040154491 podStartE2EDuration="6.040154491s" podCreationTimestamp="2025-09-13 00:04:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:04:15.143358567 +0000 UTC m=+1.242548961" watchObservedRunningTime="2025-09-13 00:04:20.040154491 +0000 UTC m=+6.139344885" Sep 13 00:04:20.059719 systemd[1]: Created slice kubepods-besteffort-pod10ed733d_11f8_447d_b0c8_5abb4c70bc76.slice - libcontainer container kubepods-besteffort-pod10ed733d_11f8_447d_b0c8_5abb4c70bc76.slice. Sep 13 00:04:20.081208 systemd[1]: Created slice kubepods-burstable-podbe6b56e1_8417_4e49_a527_425e075efff1.slice - libcontainer container kubepods-burstable-podbe6b56e1_8417_4e49_a527_425e075efff1.slice. 
Sep 13 00:04:20.083812 kubelet[2588]: I0913 00:04:20.083076 2588 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/be6b56e1-8417-4e49-a527-425e075efff1-host-proc-sys-net\") pod \"cilium-mlh7m\" (UID: \"be6b56e1-8417-4e49-a527-425e075efff1\") " pod="kube-system/cilium-mlh7m" Sep 13 00:04:20.083812 kubelet[2588]: I0913 00:04:20.083112 2588 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/be6b56e1-8417-4e49-a527-425e075efff1-host-proc-sys-kernel\") pod \"cilium-mlh7m\" (UID: \"be6b56e1-8417-4e49-a527-425e075efff1\") " pod="kube-system/cilium-mlh7m" Sep 13 00:04:20.083812 kubelet[2588]: I0913 00:04:20.083133 2588 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/10ed733d-11f8-447d-b0c8-5abb4c70bc76-lib-modules\") pod \"kube-proxy-5xckq\" (UID: \"10ed733d-11f8-447d-b0c8-5abb4c70bc76\") " pod="kube-system/kube-proxy-5xckq" Sep 13 00:04:20.083812 kubelet[2588]: I0913 00:04:20.083148 2588 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/be6b56e1-8417-4e49-a527-425e075efff1-etc-cni-netd\") pod \"cilium-mlh7m\" (UID: \"be6b56e1-8417-4e49-a527-425e075efff1\") " pod="kube-system/cilium-mlh7m" Sep 13 00:04:20.083812 kubelet[2588]: I0913 00:04:20.083163 2588 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/be6b56e1-8417-4e49-a527-425e075efff1-hubble-tls\") pod \"cilium-mlh7m\" (UID: \"be6b56e1-8417-4e49-a527-425e075efff1\") " pod="kube-system/cilium-mlh7m" Sep 13 00:04:20.083812 kubelet[2588]: I0913 00:04:20.083179 2588 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/10ed733d-11f8-447d-b0c8-5abb4c70bc76-kube-proxy\") pod \"kube-proxy-5xckq\" (UID: \"10ed733d-11f8-447d-b0c8-5abb4c70bc76\") " pod="kube-system/kube-proxy-5xckq" Sep 13 00:04:20.084078 kubelet[2588]: I0913 00:04:20.083195 2588 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/be6b56e1-8417-4e49-a527-425e075efff1-cilium-run\") pod \"cilium-mlh7m\" (UID: \"be6b56e1-8417-4e49-a527-425e075efff1\") " pod="kube-system/cilium-mlh7m" Sep 13 00:04:20.084078 kubelet[2588]: I0913 00:04:20.083212 2588 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/be6b56e1-8417-4e49-a527-425e075efff1-cilium-cgroup\") pod \"cilium-mlh7m\" (UID: \"be6b56e1-8417-4e49-a527-425e075efff1\") " pod="kube-system/cilium-mlh7m" Sep 13 00:04:20.084078 kubelet[2588]: I0913 00:04:20.083226 2588 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/be6b56e1-8417-4e49-a527-425e075efff1-cilium-config-path\") pod \"cilium-mlh7m\" (UID: \"be6b56e1-8417-4e49-a527-425e075efff1\") " pod="kube-system/cilium-mlh7m" Sep 13 00:04:20.084078 kubelet[2588]: I0913 00:04:20.083239 2588 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-mr4kr\" (UniqueName: \"kubernetes.io/projected/be6b56e1-8417-4e49-a527-425e075efff1-kube-api-access-mr4kr\") pod \"cilium-mlh7m\" (UID: \"be6b56e1-8417-4e49-a527-425e075efff1\") " pod="kube-system/cilium-mlh7m" Sep 13 00:04:20.084078 kubelet[2588]: I0913 00:04:20.083277 2588 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/10ed733d-11f8-447d-b0c8-5abb4c70bc76-xtables-lock\") pod \"kube-proxy-5xckq\" (UID: \"10ed733d-11f8-447d-b0c8-5abb4c70bc76\") " pod="kube-system/kube-proxy-5xckq" Sep 13 00:04:20.084189 kubelet[2588]: I0913 00:04:20.083292 2588 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5478\" (UniqueName: \"kubernetes.io/projected/10ed733d-11f8-447d-b0c8-5abb4c70bc76-kube-api-access-h5478\") pod \"kube-proxy-5xckq\" (UID: \"10ed733d-11f8-447d-b0c8-5abb4c70bc76\") " pod="kube-system/kube-proxy-5xckq" Sep 13 00:04:20.084189 kubelet[2588]: I0913 00:04:20.083307 2588 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/be6b56e1-8417-4e49-a527-425e075efff1-bpf-maps\") pod \"cilium-mlh7m\" (UID: \"be6b56e1-8417-4e49-a527-425e075efff1\") " pod="kube-system/cilium-mlh7m" Sep 13 00:04:20.084189 kubelet[2588]: I0913 00:04:20.083323 2588 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/be6b56e1-8417-4e49-a527-425e075efff1-cni-path\") pod \"cilium-mlh7m\" (UID: \"be6b56e1-8417-4e49-a527-425e075efff1\") " pod="kube-system/cilium-mlh7m" Sep 13 00:04:20.084189 kubelet[2588]: I0913 00:04:20.083338 2588 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/be6b56e1-8417-4e49-a527-425e075efff1-lib-modules\") pod \"cilium-mlh7m\" (UID: \"be6b56e1-8417-4e49-a527-425e075efff1\") " pod="kube-system/cilium-mlh7m" Sep 13 00:04:20.084189 kubelet[2588]: I0913 00:04:20.083400 2588 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/be6b56e1-8417-4e49-a527-425e075efff1-hostproc\") pod \"cilium-mlh7m\" (UID: \"be6b56e1-8417-4e49-a527-425e075efff1\") " pod="kube-system/cilium-mlh7m" Sep 13 00:04:20.084189 kubelet[2588]: I0913 00:04:20.083421 2588 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/be6b56e1-8417-4e49-a527-425e075efff1-xtables-lock\") pod \"cilium-mlh7m\" (UID: \"be6b56e1-8417-4e49-a527-425e075efff1\") " pod="kube-system/cilium-mlh7m" Sep 13 00:04:20.084308 kubelet[2588]: I0913 00:04:20.083436 2588 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/be6b56e1-8417-4e49-a527-425e075efff1-clustermesh-secrets\") pod \"cilium-mlh7m\" (UID: \"be6b56e1-8417-4e49-a527-425e075efff1\") " pod="kube-system/cilium-mlh7m" Sep 13 00:04:20.118831 systemd[1]: Created slice kubepods-besteffort-pod0f55381e_655e_45bb_869d_cf9249806e39.slice - libcontainer container kubepods-besteffort-pod0f55381e_655e_45bb_869d_cf9249806e39.slice. 
Sep 13 00:04:20.186578 kubelet[2588]: I0913 00:04:20.185815 2588 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpt4x\" (UniqueName: \"kubernetes.io/projected/0f55381e-655e-45bb-869d-cf9249806e39-kube-api-access-mpt4x\") pod \"cilium-operator-6c4d7847fc-v8cxd\" (UID: \"0f55381e-655e-45bb-869d-cf9249806e39\") " pod="kube-system/cilium-operator-6c4d7847fc-v8cxd" Sep 13 00:04:20.186578 kubelet[2588]: I0913 00:04:20.185995 2588 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0f55381e-655e-45bb-869d-cf9249806e39-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-v8cxd\" (UID: \"0f55381e-655e-45bb-869d-cf9249806e39\") " pod="kube-system/cilium-operator-6c4d7847fc-v8cxd" Sep 13 00:04:20.374012 containerd[1473]: time="2025-09-13T00:04:20.373788053Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5xckq,Uid:10ed733d-11f8-447d-b0c8-5abb4c70bc76,Namespace:kube-system,Attempt:0,}" Sep 13 00:04:20.387834 containerd[1473]: time="2025-09-13T00:04:20.387311450Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mlh7m,Uid:be6b56e1-8417-4e49-a527-425e075efff1,Namespace:kube-system,Attempt:0,}" Sep 13 00:04:20.408103 containerd[1473]: time="2025-09-13T00:04:20.407889565Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:04:20.408857 containerd[1473]: time="2025-09-13T00:04:20.408777525Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:04:20.408857 containerd[1473]: time="2025-09-13T00:04:20.408803045Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:04:20.409346 containerd[1473]: time="2025-09-13T00:04:20.409119645Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:04:20.423802 containerd[1473]: time="2025-09-13T00:04:20.423426561Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-v8cxd,Uid:0f55381e-655e-45bb-869d-cf9249806e39,Namespace:kube-system,Attempt:0,}" Sep 13 00:04:20.429595 containerd[1473]: time="2025-09-13T00:04:20.428333960Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:04:20.429595 containerd[1473]: time="2025-09-13T00:04:20.428461840Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:04:20.429595 containerd[1473]: time="2025-09-13T00:04:20.428535280Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:04:20.429595 containerd[1473]: time="2025-09-13T00:04:20.428752560Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:04:20.436774 systemd[1]: Started cri-containerd-053b5a6da1f48b15bcbf761db1cc16109d8048c8059b8f909e44ce5482bbc21d.scope - libcontainer container 053b5a6da1f48b15bcbf761db1cc16109d8048c8059b8f909e44ce5482bbc21d. 
Sep 13 00:04:20.456752 systemd[1]: Started cri-containerd-2e2d802619adb4151dc84d06117447fe5d16f0df37848ca11570dacfc8604d94.scope - libcontainer container 2e2d802619adb4151dc84d06117447fe5d16f0df37848ca11570dacfc8604d94. Sep 13 00:04:20.490060 containerd[1473]: time="2025-09-13T00:04:20.488833626Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:04:20.490060 containerd[1473]: time="2025-09-13T00:04:20.488897146Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:04:20.490060 containerd[1473]: time="2025-09-13T00:04:20.488909826Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:04:20.490060 containerd[1473]: time="2025-09-13T00:04:20.488992306Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:04:20.491934 containerd[1473]: time="2025-09-13T00:04:20.491704105Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5xckq,Uid:10ed733d-11f8-447d-b0c8-5abb4c70bc76,Namespace:kube-system,Attempt:0,} returns sandbox id \"053b5a6da1f48b15bcbf761db1cc16109d8048c8059b8f909e44ce5482bbc21d\"" Sep 13 00:04:20.501354 containerd[1473]: time="2025-09-13T00:04:20.501186823Z" level=info msg="CreateContainer within sandbox \"053b5a6da1f48b15bcbf761db1cc16109d8048c8059b8f909e44ce5482bbc21d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 13 00:04:20.514235 containerd[1473]: time="2025-09-13T00:04:20.513980460Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mlh7m,Uid:be6b56e1-8417-4e49-a527-425e075efff1,Namespace:kube-system,Attempt:0,} returns sandbox id \"2e2d802619adb4151dc84d06117447fe5d16f0df37848ca11570dacfc8604d94\"" Sep 13 00:04:20.518566 containerd[1473]: time="2025-09-13T00:04:20.518444939Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 13 00:04:20.529780 systemd[1]: Started cri-containerd-03d4b064a22af82bdad2e02c19332cfd5f94f808b006096a4b417d5cc03a29c5.scope - libcontainer container 03d4b064a22af82bdad2e02c19332cfd5f94f808b006096a4b417d5cc03a29c5. Sep 13 00:04:20.540235 containerd[1473]: time="2025-09-13T00:04:20.539390294Z" level=info msg="CreateContainer within sandbox \"053b5a6da1f48b15bcbf761db1cc16109d8048c8059b8f909e44ce5482bbc21d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"b92e14eff19f6078985fb1e4bbd5e2116fb0a3d45ebdb2c85e54499a7ed475e7\"" Sep 13 00:04:20.542039 containerd[1473]: time="2025-09-13T00:04:20.541991374Z" level=info msg="StartContainer for \"b92e14eff19f6078985fb1e4bbd5e2116fb0a3d45ebdb2c85e54499a7ed475e7\"" Sep 13 00:04:20.585734 containerd[1473]: time="2025-09-13T00:04:20.585604043Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-v8cxd,Uid:0f55381e-655e-45bb-869d-cf9249806e39,Namespace:kube-system,Attempt:0,} returns sandbox id \"03d4b064a22af82bdad2e02c19332cfd5f94f808b006096a4b417d5cc03a29c5\"" Sep 13 00:04:20.587818 systemd[1]: Started cri-containerd-b92e14eff19f6078985fb1e4bbd5e2116fb0a3d45ebdb2c85e54499a7ed475e7.scope - libcontainer container b92e14eff19f6078985fb1e4bbd5e2116fb0a3d45ebdb2c85e54499a7ed475e7. 
Sep 13 00:04:20.624733 containerd[1473]: time="2025-09-13T00:04:20.624318994Z" level=info msg="StartContainer for \"b92e14eff19f6078985fb1e4bbd5e2116fb0a3d45ebdb2c85e54499a7ed475e7\" returns successfully" Sep 13 00:04:21.600887 kubelet[2588]: I0913 00:04:21.600795 2588 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-5xckq" podStartSLOduration=1.600770488 podStartE2EDuration="1.600770488s" podCreationTimestamp="2025-09-13 00:04:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:04:21.119080999 +0000 UTC m=+7.218271393" watchObservedRunningTime="2025-09-13 00:04:21.600770488 +0000 UTC m=+7.699960882" Sep 13 00:04:24.493690 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount896059771.mount: Deactivated successfully. Sep 13 00:04:25.960338 containerd[1473]: time="2025-09-13T00:04:25.960226152Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:04:25.962230 containerd[1473]: time="2025-09-13T00:04:25.961918912Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Sep 13 00:04:25.963682 containerd[1473]: time="2025-09-13T00:04:25.963636431Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:04:25.966450 containerd[1473]: time="2025-09-13T00:04:25.966232511Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 5.447468052s" Sep 13 00:04:25.966450 containerd[1473]: time="2025-09-13T00:04:25.966296511Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Sep 13 00:04:25.968975 containerd[1473]: time="2025-09-13T00:04:25.967863150Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 13 00:04:25.974586 containerd[1473]: time="2025-09-13T00:04:25.974481229Z" level=info msg="CreateContainer within sandbox \"2e2d802619adb4151dc84d06117447fe5d16f0df37848ca11570dacfc8604d94\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 13 00:04:26.000580 containerd[1473]: time="2025-09-13T00:04:26.000307623Z" level=info msg="CreateContainer within sandbox \"2e2d802619adb4151dc84d06117447fe5d16f0df37848ca11570dacfc8604d94\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d827d82840a0d27043e69246b8b15c1420cbab2af81a24dee365bc44c2621f8e\"" Sep 13 00:04:26.001459 containerd[1473]: time="2025-09-13T00:04:26.001309703Z" level=info msg="StartContainer for \"d827d82840a0d27043e69246b8b15c1420cbab2af81a24dee365bc44c2621f8e\"" Sep 13 00:04:26.053933 systemd[1]: Started 
cri-containerd-d827d82840a0d27043e69246b8b15c1420cbab2af81a24dee365bc44c2621f8e.scope - libcontainer container d827d82840a0d27043e69246b8b15c1420cbab2af81a24dee365bc44c2621f8e. Sep 13 00:04:26.084825 containerd[1473]: time="2025-09-13T00:04:26.084754285Z" level=info msg="StartContainer for \"d827d82840a0d27043e69246b8b15c1420cbab2af81a24dee365bc44c2621f8e\" returns successfully" Sep 13 00:04:26.103685 systemd[1]: cri-containerd-d827d82840a0d27043e69246b8b15c1420cbab2af81a24dee365bc44c2621f8e.scope: Deactivated successfully. Sep 13 00:04:26.337521 containerd[1473]: time="2025-09-13T00:04:26.336266550Z" level=info msg="shim disconnected" id=d827d82840a0d27043e69246b8b15c1420cbab2af81a24dee365bc44c2621f8e namespace=k8s.io Sep 13 00:04:26.337521 containerd[1473]: time="2025-09-13T00:04:26.336365670Z" level=warning msg="cleaning up after shim disconnected" id=d827d82840a0d27043e69246b8b15c1420cbab2af81a24dee365bc44c2621f8e namespace=k8s.io Sep 13 00:04:26.337521 containerd[1473]: time="2025-09-13T00:04:26.336400230Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:04:26.985933 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d827d82840a0d27043e69246b8b15c1420cbab2af81a24dee365bc44c2621f8e-rootfs.mount: Deactivated successfully. Sep 13 00:04:27.129992 containerd[1473]: time="2025-09-13T00:04:27.129947298Z" level=info msg="CreateContainer within sandbox \"2e2d802619adb4151dc84d06117447fe5d16f0df37848ca11570dacfc8604d94\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 13 00:04:27.153093 containerd[1473]: time="2025-09-13T00:04:27.152715733Z" level=info msg="CreateContainer within sandbox \"2e2d802619adb4151dc84d06117447fe5d16f0df37848ca11570dacfc8604d94\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"8f4305ff5babd7784ab2cf41c896fa2007f4dfe3ad05337c385263972b104918\"" Sep 13 00:04:27.153874 containerd[1473]: time="2025-09-13T00:04:27.153839533Z" level=info msg="StartContainer for \"8f4305ff5babd7784ab2cf41c896fa2007f4dfe3ad05337c385263972b104918\"" Sep 13 00:04:27.219761 systemd[1]: Started cri-containerd-8f4305ff5babd7784ab2cf41c896fa2007f4dfe3ad05337c385263972b104918.scope - libcontainer container 8f4305ff5babd7784ab2cf41c896fa2007f4dfe3ad05337c385263972b104918. Sep 13 00:04:27.248674 containerd[1473]: time="2025-09-13T00:04:27.248243993Z" level=info msg="StartContainer for \"8f4305ff5babd7784ab2cf41c896fa2007f4dfe3ad05337c385263972b104918\" returns successfully" Sep 13 00:04:27.262514 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 13 00:04:27.262768 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 13 00:04:27.262844 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Sep 13 00:04:27.272902 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 13 00:04:27.273963 systemd[1]: cri-containerd-8f4305ff5babd7784ab2cf41c896fa2007f4dfe3ad05337c385263972b104918.scope: Deactivated successfully. Sep 13 00:04:27.295922 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 13 00:04:27.299344 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8f4305ff5babd7784ab2cf41c896fa2007f4dfe3ad05337c385263972b104918-rootfs.mount: Deactivated successfully. 
Sep 13 00:04:27.305079 containerd[1473]: time="2025-09-13T00:04:27.305020541Z" level=info msg="shim disconnected" id=8f4305ff5babd7784ab2cf41c896fa2007f4dfe3ad05337c385263972b104918 namespace=k8s.io Sep 13 00:04:27.305079 containerd[1473]: time="2025-09-13T00:04:27.305072061Z" level=warning msg="cleaning up after shim disconnected" id=8f4305ff5babd7784ab2cf41c896fa2007f4dfe3ad05337c385263972b104918 namespace=k8s.io Sep 13 00:04:27.305079 containerd[1473]: time="2025-09-13T00:04:27.305080461Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:04:27.960031 containerd[1473]: time="2025-09-13T00:04:27.959052760Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:04:27.961812 containerd[1473]: time="2025-09-13T00:04:27.961752039Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Sep 13 00:04:27.963658 containerd[1473]: time="2025-09-13T00:04:27.963047719Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:04:27.965126 containerd[1473]: time="2025-09-13T00:04:27.965059679Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.997147849s" Sep 13 00:04:27.965322 containerd[1473]: time="2025-09-13T00:04:27.965293759Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Sep 13 00:04:27.969922 containerd[1473]: time="2025-09-13T00:04:27.969867158Z" level=info msg="CreateContainer within sandbox \"03d4b064a22af82bdad2e02c19332cfd5f94f808b006096a4b417d5cc03a29c5\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 13 00:04:27.991702 containerd[1473]: time="2025-09-13T00:04:27.991493033Z" level=info msg="CreateContainer within sandbox \"03d4b064a22af82bdad2e02c19332cfd5f94f808b006096a4b417d5cc03a29c5\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"1404941a4c7ac2e75cae9893966d6e619384c8bbd5c76f2b018459cf372f0a88\"" Sep 13 00:04:27.994288 containerd[1473]: time="2025-09-13T00:04:27.993235113Z" level=info msg="StartContainer for \"1404941a4c7ac2e75cae9893966d6e619384c8bbd5c76f2b018459cf372f0a88\"" Sep 13 00:04:28.020991 systemd[1]: run-containerd-runc-k8s.io-1404941a4c7ac2e75cae9893966d6e619384c8bbd5c76f2b018459cf372f0a88-runc.bKUcHf.mount: Deactivated successfully. Sep 13 00:04:28.033005 systemd[1]: Started cri-containerd-1404941a4c7ac2e75cae9893966d6e619384c8bbd5c76f2b018459cf372f0a88.scope - libcontainer container 1404941a4c7ac2e75cae9893966d6e619384c8bbd5c76f2b018459cf372f0a88. 
Sep 13 00:04:28.071229 containerd[1473]: time="2025-09-13T00:04:28.071177616Z" level=info msg="StartContainer for \"1404941a4c7ac2e75cae9893966d6e619384c8bbd5c76f2b018459cf372f0a88\" returns successfully" Sep 13 00:04:28.143680 containerd[1473]: time="2025-09-13T00:04:28.143622561Z" level=info msg="CreateContainer within sandbox \"2e2d802619adb4151dc84d06117447fe5d16f0df37848ca11570dacfc8604d94\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 13 00:04:28.177070 containerd[1473]: time="2025-09-13T00:04:28.176926754Z" level=info msg="CreateContainer within sandbox \"2e2d802619adb4151dc84d06117447fe5d16f0df37848ca11570dacfc8604d94\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"6f9376a49ae5070e4a3c93d19c9fffbf42f0adaf12afaa26edb190b09ba45bdb\"" Sep 13 00:04:28.178242 containerd[1473]: time="2025-09-13T00:04:28.178201193Z" level=info msg="StartContainer for \"6f9376a49ae5070e4a3c93d19c9fffbf42f0adaf12afaa26edb190b09ba45bdb\"" Sep 13 00:04:28.211424 kubelet[2588]: I0913 00:04:28.210953 2588 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-v8cxd" podStartSLOduration=0.836885149 podStartE2EDuration="8.210932306s" podCreationTimestamp="2025-09-13 00:04:20 +0000 UTC" firstStartedPulling="2025-09-13 00:04:20.592035922 +0000 UTC m=+6.691226276" lastFinishedPulling="2025-09-13 00:04:27.966083039 +0000 UTC m=+14.065273433" observedRunningTime="2025-09-13 00:04:28.164576596 +0000 UTC m=+14.263767030" watchObservedRunningTime="2025-09-13 00:04:28.210932306 +0000 UTC m=+14.310122700" Sep 13 00:04:28.226915 systemd[1]: Started cri-containerd-6f9376a49ae5070e4a3c93d19c9fffbf42f0adaf12afaa26edb190b09ba45bdb.scope - libcontainer container 6f9376a49ae5070e4a3c93d19c9fffbf42f0adaf12afaa26edb190b09ba45bdb. Sep 13 00:04:28.266527 containerd[1473]: time="2025-09-13T00:04:28.266478175Z" level=info msg="StartContainer for \"6f9376a49ae5070e4a3c93d19c9fffbf42f0adaf12afaa26edb190b09ba45bdb\" returns successfully" Sep 13 00:04:28.274702 systemd[1]: cri-containerd-6f9376a49ae5070e4a3c93d19c9fffbf42f0adaf12afaa26edb190b09ba45bdb.scope: Deactivated successfully. Sep 13 00:04:28.359390 containerd[1473]: time="2025-09-13T00:04:28.359314875Z" level=info msg="shim disconnected" id=6f9376a49ae5070e4a3c93d19c9fffbf42f0adaf12afaa26edb190b09ba45bdb namespace=k8s.io Sep 13 00:04:28.359390 containerd[1473]: time="2025-09-13T00:04:28.359380995Z" level=warning msg="cleaning up after shim disconnected" id=6f9376a49ae5070e4a3c93d19c9fffbf42f0adaf12afaa26edb190b09ba45bdb namespace=k8s.io Sep 13 00:04:28.359390 containerd[1473]: time="2025-09-13T00:04:28.359389715Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:04:28.380870 containerd[1473]: time="2025-09-13T00:04:28.380819070Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:04:28Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Sep 13 00:04:28.986493 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3757692448.mount: Deactivated successfully. 
Sep 13 00:04:29.150740 containerd[1473]: time="2025-09-13T00:04:29.150659107Z" level=info msg="CreateContainer within sandbox \"2e2d802619adb4151dc84d06117447fe5d16f0df37848ca11570dacfc8604d94\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 13 00:04:29.178655 containerd[1473]: time="2025-09-13T00:04:29.178566741Z" level=info msg="CreateContainer within sandbox \"2e2d802619adb4151dc84d06117447fe5d16f0df37848ca11570dacfc8604d94\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"d359cbcb5f7a56bf5d32e3eb3582ace030ec12529cd2d2bac71a15aead83f9a5\"" Sep 13 00:04:29.180696 containerd[1473]: time="2025-09-13T00:04:29.179829940Z" level=info msg="StartContainer for \"d359cbcb5f7a56bf5d32e3eb3582ace030ec12529cd2d2bac71a15aead83f9a5\"" Sep 13 00:04:29.218830 systemd[1]: Started cri-containerd-d359cbcb5f7a56bf5d32e3eb3582ace030ec12529cd2d2bac71a15aead83f9a5.scope - libcontainer container d359cbcb5f7a56bf5d32e3eb3582ace030ec12529cd2d2bac71a15aead83f9a5. Sep 13 00:04:29.246910 systemd[1]: cri-containerd-d359cbcb5f7a56bf5d32e3eb3582ace030ec12529cd2d2bac71a15aead83f9a5.scope: Deactivated successfully. Sep 13 00:04:29.251123 containerd[1473]: time="2025-09-13T00:04:29.250881165Z" level=info msg="StartContainer for \"d359cbcb5f7a56bf5d32e3eb3582ace030ec12529cd2d2bac71a15aead83f9a5\" returns successfully" Sep 13 00:04:29.276880 containerd[1473]: time="2025-09-13T00:04:29.276793120Z" level=info msg="shim disconnected" id=d359cbcb5f7a56bf5d32e3eb3582ace030ec12529cd2d2bac71a15aead83f9a5 namespace=k8s.io Sep 13 00:04:29.278048 containerd[1473]: time="2025-09-13T00:04:29.277593760Z" level=warning msg="cleaning up after shim disconnected" id=d359cbcb5f7a56bf5d32e3eb3582ace030ec12529cd2d2bac71a15aead83f9a5 namespace=k8s.io Sep 13 00:04:29.278048 containerd[1473]: time="2025-09-13T00:04:29.277635800Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:04:29.292991 containerd[1473]: time="2025-09-13T00:04:29.292923717Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:04:29Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Sep 13 00:04:29.991187 systemd[1]: run-containerd-runc-k8s.io-d359cbcb5f7a56bf5d32e3eb3582ace030ec12529cd2d2bac71a15aead83f9a5-runc.wmR5zS.mount: Deactivated successfully. Sep 13 00:04:29.991423 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d359cbcb5f7a56bf5d32e3eb3582ace030ec12529cd2d2bac71a15aead83f9a5-rootfs.mount: Deactivated successfully. 
Sep 13 00:04:30.156000 containerd[1473]: time="2025-09-13T00:04:30.155945775Z" level=info msg="CreateContainer within sandbox \"2e2d802619adb4151dc84d06117447fe5d16f0df37848ca11570dacfc8604d94\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 13 00:04:30.180034 containerd[1473]: time="2025-09-13T00:04:30.179915210Z" level=info msg="CreateContainer within sandbox \"2e2d802619adb4151dc84d06117447fe5d16f0df37848ca11570dacfc8604d94\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"c8141b4335b36c0ff8dcf0293afad8c4c68b3b8609120ae53c617c1cb5c3d55b\"" Sep 13 00:04:30.182552 containerd[1473]: time="2025-09-13T00:04:30.180794889Z" level=info msg="StartContainer for \"c8141b4335b36c0ff8dcf0293afad8c4c68b3b8609120ae53c617c1cb5c3d55b\"" Sep 13 00:04:30.217749 systemd[1]: Started cri-containerd-c8141b4335b36c0ff8dcf0293afad8c4c68b3b8609120ae53c617c1cb5c3d55b.scope - libcontainer container c8141b4335b36c0ff8dcf0293afad8c4c68b3b8609120ae53c617c1cb5c3d55b. Sep 13 00:04:30.250187 containerd[1473]: time="2025-09-13T00:04:30.250056435Z" level=info msg="StartContainer for \"c8141b4335b36c0ff8dcf0293afad8c4c68b3b8609120ae53c617c1cb5c3d55b\" returns successfully" Sep 13 00:04:30.430423 kubelet[2588]: I0913 00:04:30.429627 2588 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 13 00:04:30.531473 systemd[1]: Created slice kubepods-burstable-poda1f565f5_c3c4_4d81_a21d_2503029dfcff.slice - libcontainer container kubepods-burstable-poda1f565f5_c3c4_4d81_a21d_2503029dfcff.slice. Sep 13 00:04:30.555637 systemd[1]: Created slice kubepods-burstable-pod03015ca1_744b_49cd_a0a2_eeb2b614597e.slice - libcontainer container kubepods-burstable-pod03015ca1_744b_49cd_a0a2_eeb2b614597e.slice. Sep 13 00:04:30.565347 kubelet[2588]: I0913 00:04:30.565160 2588 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a1f565f5-c3c4-4d81-a21d-2503029dfcff-config-volume\") pod \"coredns-674b8bbfcf-t7qcj\" (UID: \"a1f565f5-c3c4-4d81-a21d-2503029dfcff\") " pod="kube-system/coredns-674b8bbfcf-t7qcj" Sep 13 00:04:30.565347 kubelet[2588]: I0913 00:04:30.565215 2588 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cqk9\" (UniqueName: \"kubernetes.io/projected/a1f565f5-c3c4-4d81-a21d-2503029dfcff-kube-api-access-5cqk9\") pod \"coredns-674b8bbfcf-t7qcj\" (UID: \"a1f565f5-c3c4-4d81-a21d-2503029dfcff\") " pod="kube-system/coredns-674b8bbfcf-t7qcj" Sep 13 00:04:30.565347 kubelet[2588]: I0913 00:04:30.565238 2588 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6x62\" (UniqueName: \"kubernetes.io/projected/03015ca1-744b-49cd-a0a2-eeb2b614597e-kube-api-access-q6x62\") pod \"coredns-674b8bbfcf-2tm2t\" (UID: \"03015ca1-744b-49cd-a0a2-eeb2b614597e\") " pod="kube-system/coredns-674b8bbfcf-2tm2t" Sep 13 00:04:30.565347 kubelet[2588]: I0913 00:04:30.565257 2588 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/03015ca1-744b-49cd-a0a2-eeb2b614597e-config-volume\") pod \"coredns-674b8bbfcf-2tm2t\" (UID: \"03015ca1-744b-49cd-a0a2-eeb2b614597e\") " pod="kube-system/coredns-674b8bbfcf-2tm2t" Sep 13 00:04:30.836836 containerd[1473]: time="2025-09-13T00:04:30.836374672Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-t7qcj,Uid:a1f565f5-c3c4-4d81-a21d-2503029dfcff,Namespace:kube-system,Attempt:0,}" Sep 13 00:04:30.866646 containerd[1473]: time="2025-09-13T00:04:30.866582066Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-2tm2t,Uid:03015ca1-744b-49cd-a0a2-eeb2b614597e,Namespace:kube-system,Attempt:0,}" Sep 13 00:04:31.180822 kubelet[2588]: I0913 00:04:31.180433 2588 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-mlh7m" podStartSLOduration=5.73032603 podStartE2EDuration="11.180410481s" podCreationTimestamp="2025-09-13 00:04:20 +0000 UTC" firstStartedPulling="2025-09-13 00:04:20.517501979 +0000 UTC m=+6.616692373" lastFinishedPulling="2025-09-13 00:04:25.96758643 +0000 UTC m=+12.066776824" observedRunningTime="2025-09-13 00:04:31.178364881 +0000 UTC m=+17.277555275" watchObservedRunningTime="2025-09-13 00:04:31.180410481 +0000 UTC m=+17.279600875" Sep 13 00:04:32.622097 systemd-networkd[1361]: cilium_host: Link UP Sep 13 00:04:32.623402 systemd-networkd[1361]: cilium_net: Link UP Sep 13 00:04:32.624813 systemd-networkd[1361]: cilium_net: Gained carrier Sep 13 00:04:32.625018 systemd-networkd[1361]: cilium_host: Gained carrier Sep 13 00:04:32.763245 systemd-networkd[1361]: cilium_vxlan: Link UP Sep 13 00:04:32.763255 systemd-networkd[1361]: cilium_vxlan: Gained carrier Sep 13 00:04:33.091665 kernel: NET: Registered PF_ALG protocol family Sep 13 00:04:33.343907 systemd-networkd[1361]: cilium_host: Gained IPv6LL Sep 13 00:04:33.405859 systemd-networkd[1361]: cilium_net: Gained IPv6LL Sep 13 00:04:33.847775 systemd-networkd[1361]: lxc_health: Link UP Sep 13 00:04:33.869429 systemd-networkd[1361]: lxc_health: Gained carrier Sep 13 00:04:34.405933 systemd-networkd[1361]: lxc03bc8afa015e: Link UP Sep 13 00:04:34.413608 kernel: eth0: renamed from tmpf2764 Sep 13 00:04:34.417059 systemd-networkd[1361]: lxc03bc8afa015e: Gained carrier Sep 13 00:04:34.446937 systemd-networkd[1361]: lxc437971ccc3cf: Link UP Sep 13 00:04:34.459894 kernel: eth0: renamed from tmpa137a Sep 13 00:04:34.464706 systemd-networkd[1361]: lxc437971ccc3cf: Gained carrier Sep 13 00:04:34.622202 systemd-networkd[1361]: cilium_vxlan: Gained IPv6LL Sep 13 00:04:35.710768 systemd-networkd[1361]: lxc_health: Gained IPv6LL Sep 13 00:04:35.773822 systemd-networkd[1361]: lxc437971ccc3cf: Gained IPv6LL Sep 13 00:04:35.901962 systemd-networkd[1361]: lxc03bc8afa015e: Gained IPv6LL Sep 13 00:04:38.632364 containerd[1473]: time="2025-09-13T00:04:38.632226203Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:04:38.632364 containerd[1473]: time="2025-09-13T00:04:38.632293726Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:04:38.632364 containerd[1473]: time="2025-09-13T00:04:38.632306527Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:04:38.634679 containerd[1473]: time="2025-09-13T00:04:38.632406371Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:04:38.670781 systemd[1]: Started cri-containerd-f2764a480f78970ad6313bfcb0a9a881aa8daa12f5e10e604fdaba55da06def7.scope - libcontainer container f2764a480f78970ad6313bfcb0a9a881aa8daa12f5e10e604fdaba55da06def7. 
Sep 13 00:04:38.708404 containerd[1473]: time="2025-09-13T00:04:38.708201501Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:04:38.708404 containerd[1473]: time="2025-09-13T00:04:38.708291145Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:04:38.708404 containerd[1473]: time="2025-09-13T00:04:38.708304225Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:04:38.709434 containerd[1473]: time="2025-09-13T00:04:38.708605198Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:04:38.730830 systemd[1]: run-containerd-runc-k8s.io-a137ae6eb8aa0f0f297edddee2d26a2ce891fe92b964e20a9592dde3f10b0e80-runc.ps3T3n.mount: Deactivated successfully. Sep 13 00:04:38.742795 systemd[1]: Started cri-containerd-a137ae6eb8aa0f0f297edddee2d26a2ce891fe92b964e20a9592dde3f10b0e80.scope - libcontainer container a137ae6eb8aa0f0f297edddee2d26a2ce891fe92b964e20a9592dde3f10b0e80. Sep 13 00:04:38.763268 containerd[1473]: time="2025-09-13T00:04:38.763213944Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-t7qcj,Uid:a1f565f5-c3c4-4d81-a21d-2503029dfcff,Namespace:kube-system,Attempt:0,} returns sandbox id \"f2764a480f78970ad6313bfcb0a9a881aa8daa12f5e10e604fdaba55da06def7\"" Sep 13 00:04:38.776797 containerd[1473]: time="2025-09-13T00:04:38.776748056Z" level=info msg="CreateContainer within sandbox \"f2764a480f78970ad6313bfcb0a9a881aa8daa12f5e10e604fdaba55da06def7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 13 00:04:38.804742 containerd[1473]: time="2025-09-13T00:04:38.804659234Z" level=info msg="CreateContainer within sandbox \"f2764a480f78970ad6313bfcb0a9a881aa8daa12f5e10e604fdaba55da06def7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"dd7f981e45d6e06122b6049f9d4297af1033a6ecf23387bf3aeb99e27cd04238\"" Sep 13 00:04:38.807639 containerd[1473]: time="2025-09-13T00:04:38.807123494Z" level=info msg="StartContainer for \"dd7f981e45d6e06122b6049f9d4297af1033a6ecf23387bf3aeb99e27cd04238\"" Sep 13 00:04:38.822074 containerd[1473]: time="2025-09-13T00:04:38.822024702Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-2tm2t,Uid:03015ca1-744b-49cd-a0a2-eeb2b614597e,Namespace:kube-system,Attempt:0,} returns sandbox id \"a137ae6eb8aa0f0f297edddee2d26a2ce891fe92b964e20a9592dde3f10b0e80\"" Sep 13 00:04:38.830707 containerd[1473]: time="2025-09-13T00:04:38.830601052Z" level=info msg="CreateContainer within sandbox \"a137ae6eb8aa0f0f297edddee2d26a2ce891fe92b964e20a9592dde3f10b0e80\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 13 00:04:38.863776 systemd[1]: Started cri-containerd-dd7f981e45d6e06122b6049f9d4297af1033a6ecf23387bf3aeb99e27cd04238.scope - libcontainer container dd7f981e45d6e06122b6049f9d4297af1033a6ecf23387bf3aeb99e27cd04238. 
Sep 13 00:04:38.867160 containerd[1473]: time="2025-09-13T00:04:38.867007816Z" level=info msg="CreateContainer within sandbox \"a137ae6eb8aa0f0f297edddee2d26a2ce891fe92b964e20a9592dde3f10b0e80\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"15f36e27f89aa2bb0a540b077d6ca5be741c363333c562ab9554990f9b62b2b1\"" Sep 13 00:04:38.869059 containerd[1473]: time="2025-09-13T00:04:38.868792569Z" level=info msg="StartContainer for \"15f36e27f89aa2bb0a540b077d6ca5be741c363333c562ab9554990f9b62b2b1\"" Sep 13 00:04:38.906796 systemd[1]: Started cri-containerd-15f36e27f89aa2bb0a540b077d6ca5be741c363333c562ab9554990f9b62b2b1.scope - libcontainer container 15f36e27f89aa2bb0a540b077d6ca5be741c363333c562ab9554990f9b62b2b1. Sep 13 00:04:38.915750 containerd[1473]: time="2025-09-13T00:04:38.915071176Z" level=info msg="StartContainer for \"dd7f981e45d6e06122b6049f9d4297af1033a6ecf23387bf3aeb99e27cd04238\" returns successfully" Sep 13 00:04:38.949049 containerd[1473]: time="2025-09-13T00:04:38.948991959Z" level=info msg="StartContainer for \"15f36e27f89aa2bb0a540b077d6ca5be741c363333c562ab9554990f9b62b2b1\" returns successfully" Sep 13 00:04:39.211660 kubelet[2588]: I0913 00:04:39.210699 2588 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-t7qcj" podStartSLOduration=19.210674835 podStartE2EDuration="19.210674835s" podCreationTimestamp="2025-09-13 00:04:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:04:39.194367588 +0000 UTC m=+25.293558022" watchObservedRunningTime="2025-09-13 00:04:39.210674835 +0000 UTC m=+25.309865229" Sep 13 00:04:39.239772 kubelet[2588]: I0913 00:04:39.239220 2588 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-2tm2t" podStartSLOduration=19.239200366 podStartE2EDuration="19.239200366s" podCreationTimestamp="2025-09-13 00:04:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:04:39.239125483 +0000 UTC m=+25.338315917" watchObservedRunningTime="2025-09-13 00:04:39.239200366 +0000 UTC m=+25.338390760" Sep 13 00:06:39.682999 systemd[1]: Started sshd@7-49.13.17.32:22-147.75.109.163:35608.service - OpenSSH per-connection server daemon (147.75.109.163:35608). Sep 13 00:06:40.658711 sshd[3984]: Accepted publickey for core from 147.75.109.163 port 35608 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM Sep 13 00:06:40.660430 sshd[3984]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:06:40.666430 systemd-logind[1452]: New session 8 of user core. Sep 13 00:06:40.673851 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 13 00:06:41.432334 sshd[3984]: pam_unix(sshd:session): session closed for user core Sep 13 00:06:41.437429 systemd-logind[1452]: Session 8 logged out. Waiting for processes to exit. Sep 13 00:06:41.438410 systemd[1]: sshd@7-49.13.17.32:22-147.75.109.163:35608.service: Deactivated successfully. Sep 13 00:06:41.442109 systemd[1]: session-8.scope: Deactivated successfully. Sep 13 00:06:41.443616 systemd-logind[1452]: Removed session 8. Sep 13 00:06:41.469994 systemd[1]: Started sshd@8-49.13.17.32:22-80.94.95.115:37572.service - OpenSSH per-connection server daemon (80.94.95.115:37572). 
Sep 13 00:06:45.831508 sshd[3998]: Connection closed by authenticating user root 80.94.95.115 port 37572 [preauth] Sep 13 00:06:45.833337 systemd[1]: sshd@8-49.13.17.32:22-80.94.95.115:37572.service: Deactivated successfully. Sep 13 00:06:46.615081 systemd[1]: Started sshd@9-49.13.17.32:22-147.75.109.163:36996.service - OpenSSH per-connection server daemon (147.75.109.163:36996). Sep 13 00:06:47.603887 sshd[4003]: Accepted publickey for core from 147.75.109.163 port 36996 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM Sep 13 00:06:47.606163 sshd[4003]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:06:47.612755 systemd-logind[1452]: New session 9 of user core. Sep 13 00:06:47.617908 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 13 00:06:48.383879 sshd[4003]: pam_unix(sshd:session): session closed for user core Sep 13 00:06:48.389676 systemd[1]: sshd@9-49.13.17.32:22-147.75.109.163:36996.service: Deactivated successfully. Sep 13 00:06:48.393852 systemd[1]: session-9.scope: Deactivated successfully. Sep 13 00:06:48.396889 systemd-logind[1452]: Session 9 logged out. Waiting for processes to exit. Sep 13 00:06:48.398233 systemd-logind[1452]: Removed session 9. Sep 13 00:06:53.563093 systemd[1]: Started sshd@10-49.13.17.32:22-147.75.109.163:34104.service - OpenSSH per-connection server daemon (147.75.109.163:34104). Sep 13 00:06:54.545201 sshd[4019]: Accepted publickey for core from 147.75.109.163 port 34104 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM Sep 13 00:06:54.547854 sshd[4019]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:06:54.553724 systemd-logind[1452]: New session 10 of user core. Sep 13 00:06:54.560923 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 13 00:06:55.299497 sshd[4019]: pam_unix(sshd:session): session closed for user core Sep 13 00:06:55.304411 systemd[1]: sshd@10-49.13.17.32:22-147.75.109.163:34104.service: Deactivated successfully. Sep 13 00:06:55.306875 systemd[1]: session-10.scope: Deactivated successfully. Sep 13 00:06:55.308116 systemd-logind[1452]: Session 10 logged out. Waiting for processes to exit. Sep 13 00:06:55.309204 systemd-logind[1452]: Removed session 10. Sep 13 00:06:55.478186 systemd[1]: Started sshd@11-49.13.17.32:22-147.75.109.163:34112.service - OpenSSH per-connection server daemon (147.75.109.163:34112). Sep 13 00:06:56.465994 sshd[4033]: Accepted publickey for core from 147.75.109.163 port 34112 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM Sep 13 00:06:56.468504 sshd[4033]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:06:56.475515 systemd-logind[1452]: New session 11 of user core. Sep 13 00:06:56.484222 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 13 00:06:57.273311 sshd[4033]: pam_unix(sshd:session): session closed for user core Sep 13 00:06:57.278463 systemd[1]: sshd@11-49.13.17.32:22-147.75.109.163:34112.service: Deactivated successfully. Sep 13 00:06:57.281431 systemd[1]: session-11.scope: Deactivated successfully. Sep 13 00:06:57.283769 systemd-logind[1452]: Session 11 logged out. Waiting for processes to exit. Sep 13 00:06:57.285500 systemd-logind[1452]: Removed session 11. Sep 13 00:06:57.445949 systemd[1]: Started sshd@12-49.13.17.32:22-147.75.109.163:34116.service - OpenSSH per-connection server daemon (147.75.109.163:34116). 
Sep 13 00:06:58.441160 sshd[4044]: Accepted publickey for core from 147.75.109.163 port 34116 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM Sep 13 00:06:58.442988 sshd[4044]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:06:58.447932 systemd-logind[1452]: New session 12 of user core. Sep 13 00:06:58.461299 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 13 00:06:59.201456 sshd[4044]: pam_unix(sshd:session): session closed for user core Sep 13 00:06:59.206935 systemd[1]: sshd@12-49.13.17.32:22-147.75.109.163:34116.service: Deactivated successfully. Sep 13 00:06:59.209743 systemd[1]: session-12.scope: Deactivated successfully. Sep 13 00:06:59.211084 systemd-logind[1452]: Session 12 logged out. Waiting for processes to exit. Sep 13 00:06:59.212436 systemd-logind[1452]: Removed session 12. Sep 13 00:06:59.880451 update_engine[1453]: I20250913 00:06:59.879699 1453 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Sep 13 00:06:59.880451 update_engine[1453]: I20250913 00:06:59.879784 1453 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Sep 13 00:06:59.880451 update_engine[1453]: I20250913 00:06:59.880085 1453 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Sep 13 00:06:59.881187 update_engine[1453]: I20250913 00:06:59.880910 1453 omaha_request_params.cc:62] Current group set to lts Sep 13 00:06:59.881187 update_engine[1453]: I20250913 00:06:59.881084 1453 update_attempter.cc:499] Already updated boot flags. Skipping. Sep 13 00:06:59.881187 update_engine[1453]: I20250913 00:06:59.881103 1453 update_attempter.cc:643] Scheduling an action processor start. Sep 13 00:06:59.881187 update_engine[1453]: I20250913 00:06:59.881129 1453 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Sep 13 00:06:59.881312 update_engine[1453]: I20250913 00:06:59.881203 1453 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Sep 13 00:06:59.882120 update_engine[1453]: I20250913 00:06:59.881347 1453 omaha_request_action.cc:271] Posting an Omaha request to disabled Sep 13 00:06:59.882120 update_engine[1453]: I20250913 00:06:59.881376 1453 omaha_request_action.cc:272] Request: Sep 13 00:06:59.882120 update_engine[1453]: Sep 13 00:06:59.882120 update_engine[1453]: Sep 13 00:06:59.882120 update_engine[1453]: Sep 13 00:06:59.882120 update_engine[1453]: Sep 13 00:06:59.882120 update_engine[1453]: Sep 13 00:06:59.882120 update_engine[1453]: Sep 13 00:06:59.882120 update_engine[1453]: Sep 13 00:06:59.882120 update_engine[1453]: Sep 13 00:06:59.882120 update_engine[1453]: I20250913 00:06:59.881407 1453 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 13 00:06:59.882486 locksmithd[1494]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Sep 13 00:06:59.884070 update_engine[1453]: I20250913 00:06:59.884012 1453 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 13 00:06:59.884509 update_engine[1453]: I20250913 00:06:59.884418 1453 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Sep 13 00:06:59.887512 update_engine[1453]: E20250913 00:06:59.887425 1453 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 13 00:06:59.887677 update_engine[1453]: I20250913 00:06:59.887555 1453 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Sep 13 00:07:04.380890 systemd[1]: Started sshd@13-49.13.17.32:22-147.75.109.163:38834.service - OpenSSH per-connection server daemon (147.75.109.163:38834). Sep 13 00:07:05.365080 sshd[4057]: Accepted publickey for core from 147.75.109.163 port 38834 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM Sep 13 00:07:05.367731 sshd[4057]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:07:05.374856 systemd-logind[1452]: New session 13 of user core. Sep 13 00:07:05.380827 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 13 00:07:06.129315 sshd[4057]: pam_unix(sshd:session): session closed for user core Sep 13 00:07:06.135780 systemd-logind[1452]: Session 13 logged out. Waiting for processes to exit. Sep 13 00:07:06.135999 systemd[1]: sshd@13-49.13.17.32:22-147.75.109.163:38834.service: Deactivated successfully. Sep 13 00:07:06.139746 systemd[1]: session-13.scope: Deactivated successfully. Sep 13 00:07:06.141928 systemd-logind[1452]: Removed session 13. Sep 13 00:07:06.306273 systemd[1]: Started sshd@14-49.13.17.32:22-147.75.109.163:38840.service - OpenSSH per-connection server daemon (147.75.109.163:38840). Sep 13 00:07:07.282715 sshd[4069]: Accepted publickey for core from 147.75.109.163 port 38840 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM Sep 13 00:07:07.285119 sshd[4069]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:07:07.291520 systemd-logind[1452]: New session 14 of user core. Sep 13 00:07:07.300030 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 13 00:07:08.088029 sshd[4069]: pam_unix(sshd:session): session closed for user core Sep 13 00:07:08.093508 systemd[1]: sshd@14-49.13.17.32:22-147.75.109.163:38840.service: Deactivated successfully. Sep 13 00:07:08.096166 systemd[1]: session-14.scope: Deactivated successfully. Sep 13 00:07:08.097313 systemd-logind[1452]: Session 14 logged out. Waiting for processes to exit. Sep 13 00:07:08.099361 systemd-logind[1452]: Removed session 14. Sep 13 00:07:08.276272 systemd[1]: Started sshd@15-49.13.17.32:22-147.75.109.163:38854.service - OpenSSH per-connection server daemon (147.75.109.163:38854). Sep 13 00:07:09.320901 sshd[4080]: Accepted publickey for core from 147.75.109.163 port 38854 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM Sep 13 00:07:09.324406 sshd[4080]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:07:09.331301 systemd-logind[1452]: New session 15 of user core. Sep 13 00:07:09.338802 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 13 00:07:09.879285 update_engine[1453]: I20250913 00:07:09.879213 1453 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 13 00:07:09.879680 update_engine[1453]: I20250913 00:07:09.879474 1453 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 13 00:07:09.879745 update_engine[1453]: I20250913 00:07:09.879712 1453 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Sep 13 00:07:09.881036 update_engine[1453]: E20250913 00:07:09.880999 1453 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 13 00:07:09.881099 update_engine[1453]: I20250913 00:07:09.881074 1453 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Sep 13 00:07:10.524495 sshd[4080]: pam_unix(sshd:session): session closed for user core Sep 13 00:07:10.529415 systemd-logind[1452]: Session 15 logged out. Waiting for processes to exit. Sep 13 00:07:10.529495 systemd[1]: sshd@15-49.13.17.32:22-147.75.109.163:38854.service: Deactivated successfully. Sep 13 00:07:10.533657 systemd[1]: session-15.scope: Deactivated successfully. Sep 13 00:07:10.536443 systemd-logind[1452]: Removed session 15. Sep 13 00:07:10.711201 systemd[1]: Started sshd@16-49.13.17.32:22-147.75.109.163:56498.service - OpenSSH per-connection server daemon (147.75.109.163:56498). Sep 13 00:07:11.704708 sshd[4098]: Accepted publickey for core from 147.75.109.163 port 56498 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM Sep 13 00:07:11.707184 sshd[4098]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:07:11.713432 systemd-logind[1452]: New session 16 of user core. Sep 13 00:07:11.716770 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 13 00:07:12.601488 sshd[4098]: pam_unix(sshd:session): session closed for user core Sep 13 00:07:12.606812 systemd[1]: sshd@16-49.13.17.32:22-147.75.109.163:56498.service: Deactivated successfully. Sep 13 00:07:12.610455 systemd[1]: session-16.scope: Deactivated successfully. Sep 13 00:07:12.611376 systemd-logind[1452]: Session 16 logged out. Waiting for processes to exit. Sep 13 00:07:12.612938 systemd-logind[1452]: Removed session 16. Sep 13 00:07:12.771940 systemd[1]: Started sshd@17-49.13.17.32:22-147.75.109.163:56500.service - OpenSSH per-connection server daemon (147.75.109.163:56500). Sep 13 00:07:13.744367 sshd[4109]: Accepted publickey for core from 147.75.109.163 port 56500 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM Sep 13 00:07:13.746751 sshd[4109]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:07:13.752902 systemd-logind[1452]: New session 17 of user core. Sep 13 00:07:13.756953 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 13 00:07:14.504596 sshd[4109]: pam_unix(sshd:session): session closed for user core Sep 13 00:07:14.509843 systemd[1]: sshd@17-49.13.17.32:22-147.75.109.163:56500.service: Deactivated successfully. Sep 13 00:07:14.512666 systemd[1]: session-17.scope: Deactivated successfully. Sep 13 00:07:14.514199 systemd-logind[1452]: Session 17 logged out. Waiting for processes to exit. Sep 13 00:07:14.516100 systemd-logind[1452]: Removed session 17. Sep 13 00:07:19.685041 systemd[1]: Started sshd@18-49.13.17.32:22-147.75.109.163:56508.service - OpenSSH per-connection server daemon (147.75.109.163:56508). Sep 13 00:07:19.884688 update_engine[1453]: I20250913 00:07:19.884023 1453 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 13 00:07:19.884688 update_engine[1453]: I20250913 00:07:19.884398 1453 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 13 00:07:19.885368 update_engine[1453]: I20250913 00:07:19.884758 1453 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Sep 13 00:07:19.885705 update_engine[1453]: E20250913 00:07:19.885648 1453 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 13 00:07:19.885762 update_engine[1453]: I20250913 00:07:19.885740 1453 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Sep 13 00:07:20.669056 sshd[4126]: Accepted publickey for core from 147.75.109.163 port 56508 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM Sep 13 00:07:20.671716 sshd[4126]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:07:20.676868 systemd-logind[1452]: New session 18 of user core. Sep 13 00:07:20.684861 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 13 00:07:21.423931 sshd[4126]: pam_unix(sshd:session): session closed for user core Sep 13 00:07:21.430505 systemd[1]: sshd@18-49.13.17.32:22-147.75.109.163:56508.service: Deactivated successfully. Sep 13 00:07:21.434939 systemd[1]: session-18.scope: Deactivated successfully. Sep 13 00:07:21.436979 systemd-logind[1452]: Session 18 logged out. Waiting for processes to exit. Sep 13 00:07:21.438222 systemd-logind[1452]: Removed session 18. Sep 13 00:07:26.600701 systemd[1]: Started sshd@19-49.13.17.32:22-147.75.109.163:37328.service - OpenSSH per-connection server daemon (147.75.109.163:37328). Sep 13 00:07:27.590387 sshd[4142]: Accepted publickey for core from 147.75.109.163 port 37328 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM Sep 13 00:07:27.592570 sshd[4142]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:07:27.599821 systemd-logind[1452]: New session 19 of user core. Sep 13 00:07:27.604904 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 13 00:07:28.354162 sshd[4142]: pam_unix(sshd:session): session closed for user core Sep 13 00:07:28.358397 systemd[1]: sshd@19-49.13.17.32:22-147.75.109.163:37328.service: Deactivated successfully. Sep 13 00:07:28.365319 systemd[1]: session-19.scope: Deactivated successfully. Sep 13 00:07:28.368458 systemd-logind[1452]: Session 19 logged out. Waiting for processes to exit. Sep 13 00:07:28.369966 systemd-logind[1452]: Removed session 19. Sep 13 00:07:28.536327 systemd[1]: Started sshd@20-49.13.17.32:22-147.75.109.163:37342.service - OpenSSH per-connection server daemon (147.75.109.163:37342). Sep 13 00:07:29.531384 sshd[4154]: Accepted publickey for core from 147.75.109.163 port 37342 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM Sep 13 00:07:29.532623 sshd[4154]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:07:29.537284 systemd-logind[1452]: New session 20 of user core. Sep 13 00:07:29.541767 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 13 00:07:29.879789 update_engine[1453]: I20250913 00:07:29.879370 1453 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 13 00:07:29.879789 update_engine[1453]: I20250913 00:07:29.879789 1453 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 13 00:07:29.880367 update_engine[1453]: I20250913 00:07:29.880072 1453 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Sep 13 00:07:29.881139 update_engine[1453]: E20250913 00:07:29.880952 1453 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 13 00:07:29.881139 update_engine[1453]: I20250913 00:07:29.881064 1453 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Sep 13 00:07:29.881139 update_engine[1453]: I20250913 00:07:29.881081 1453 omaha_request_action.cc:617] Omaha request response: Sep 13 00:07:29.881426 update_engine[1453]: E20250913 00:07:29.881211 1453 omaha_request_action.cc:636] Omaha request network transfer failed. Sep 13 00:07:29.881426 update_engine[1453]: I20250913 00:07:29.881240 1453 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Sep 13 00:07:29.881426 update_engine[1453]: I20250913 00:07:29.881251 1453 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Sep 13 00:07:29.881426 update_engine[1453]: I20250913 00:07:29.881260 1453 update_attempter.cc:306] Processing Done. Sep 13 00:07:29.881426 update_engine[1453]: E20250913 00:07:29.881281 1453 update_attempter.cc:619] Update failed. Sep 13 00:07:29.881426 update_engine[1453]: I20250913 00:07:29.881291 1453 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Sep 13 00:07:29.881426 update_engine[1453]: I20250913 00:07:29.881300 1453 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Sep 13 00:07:29.881426 update_engine[1453]: I20250913 00:07:29.881309 1453 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Sep 13 00:07:29.881426 update_engine[1453]: I20250913 00:07:29.881412 1453 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Sep 13 00:07:29.882528 update_engine[1453]: I20250913 00:07:29.881445 1453 omaha_request_action.cc:271] Posting an Omaha request to disabled Sep 13 00:07:29.882528 update_engine[1453]: I20250913 00:07:29.881456 1453 omaha_request_action.cc:272] Request: Sep 13 00:07:29.882528 update_engine[1453]: Sep 13 00:07:29.882528 update_engine[1453]: Sep 13 00:07:29.882528 update_engine[1453]: Sep 13 00:07:29.882528 update_engine[1453]: Sep 13 00:07:29.882528 update_engine[1453]: Sep 13 00:07:29.882528 update_engine[1453]: Sep 13 00:07:29.882528 update_engine[1453]: I20250913 00:07:29.881466 1453 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 13 00:07:29.882528 update_engine[1453]: I20250913 00:07:29.881721 1453 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 13 00:07:29.882528 update_engine[1453]: I20250913 00:07:29.881968 1453 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Sep 13 00:07:29.882891 locksmithd[1494]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Sep 13 00:07:29.883331 locksmithd[1494]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Sep 13 00:07:29.883378 update_engine[1453]: E20250913 00:07:29.882894 1453 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 13 00:07:29.883378 update_engine[1453]: I20250913 00:07:29.882943 1453 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Sep 13 00:07:29.883378 update_engine[1453]: I20250913 00:07:29.882952 1453 omaha_request_action.cc:617] Omaha request response: Sep 13 00:07:29.883378 update_engine[1453]: I20250913 00:07:29.882959 1453 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Sep 13 00:07:29.883378 update_engine[1453]: I20250913 00:07:29.882963 1453 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Sep 13 00:07:29.883378 update_engine[1453]: I20250913 00:07:29.882967 1453 update_attempter.cc:306] Processing Done. Sep 13 00:07:29.883378 update_engine[1453]: I20250913 00:07:29.882973 1453 update_attempter.cc:310] Error event sent. Sep 13 00:07:29.883378 update_engine[1453]: I20250913 00:07:29.882982 1453 update_check_scheduler.cc:74] Next update check in 49m5s Sep 13 00:07:32.269260 containerd[1473]: time="2025-09-13T00:07:32.269192172Z" level=info msg="StopContainer for \"1404941a4c7ac2e75cae9893966d6e619384c8bbd5c76f2b018459cf372f0a88\" with timeout 30 (s)" Sep 13 00:07:32.271143 containerd[1473]: time="2025-09-13T00:07:32.271086464Z" level=info msg="Stop container \"1404941a4c7ac2e75cae9893966d6e619384c8bbd5c76f2b018459cf372f0a88\" with signal terminated" Sep 13 00:07:32.293882 systemd[1]: cri-containerd-1404941a4c7ac2e75cae9893966d6e619384c8bbd5c76f2b018459cf372f0a88.scope: Deactivated successfully. Sep 13 00:07:32.306709 containerd[1473]: time="2025-09-13T00:07:32.306657401Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 13 00:07:32.319621 containerd[1473]: time="2025-09-13T00:07:32.319505233Z" level=info msg="StopContainer for \"c8141b4335b36c0ff8dcf0293afad8c4c68b3b8609120ae53c617c1cb5c3d55b\" with timeout 2 (s)" Sep 13 00:07:32.320119 containerd[1473]: time="2025-09-13T00:07:32.320092010Z" level=info msg="Stop container \"c8141b4335b36c0ff8dcf0293afad8c4c68b3b8609120ae53c617c1cb5c3d55b\" with signal terminated" Sep 13 00:07:32.334910 systemd-networkd[1361]: lxc_health: Link DOWN Sep 13 00:07:32.334923 systemd-networkd[1361]: lxc_health: Lost carrier Sep 13 00:07:32.335991 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1404941a4c7ac2e75cae9893966d6e619384c8bbd5c76f2b018459cf372f0a88-rootfs.mount: Deactivated successfully. 
Sep 13 00:07:32.356027 containerd[1473]: time="2025-09-13T00:07:32.355610825Z" level=info msg="shim disconnected" id=1404941a4c7ac2e75cae9893966d6e619384c8bbd5c76f2b018459cf372f0a88 namespace=k8s.io Sep 13 00:07:32.356027 containerd[1473]: time="2025-09-13T00:07:32.355791670Z" level=warning msg="cleaning up after shim disconnected" id=1404941a4c7ac2e75cae9893966d6e619384c8bbd5c76f2b018459cf372f0a88 namespace=k8s.io Sep 13 00:07:32.356027 containerd[1473]: time="2025-09-13T00:07:32.355822191Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:07:32.358053 systemd[1]: cri-containerd-c8141b4335b36c0ff8dcf0293afad8c4c68b3b8609120ae53c617c1cb5c3d55b.scope: Deactivated successfully. Sep 13 00:07:32.358729 systemd[1]: cri-containerd-c8141b4335b36c0ff8dcf0293afad8c4c68b3b8609120ae53c617c1cb5c3d55b.scope: Consumed 7.911s CPU time. Sep 13 00:07:32.384170 containerd[1473]: time="2025-09-13T00:07:32.384098767Z" level=info msg="StopContainer for \"1404941a4c7ac2e75cae9893966d6e619384c8bbd5c76f2b018459cf372f0a88\" returns successfully" Sep 13 00:07:32.385276 containerd[1473]: time="2025-09-13T00:07:32.385195917Z" level=info msg="StopPodSandbox for \"03d4b064a22af82bdad2e02c19332cfd5f94f808b006096a4b417d5cc03a29c5\"" Sep 13 00:07:32.385506 containerd[1473]: time="2025-09-13T00:07:32.385250838Z" level=info msg="Container to stop \"1404941a4c7ac2e75cae9893966d6e619384c8bbd5c76f2b018459cf372f0a88\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 00:07:32.387325 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-03d4b064a22af82bdad2e02c19332cfd5f94f808b006096a4b417d5cc03a29c5-shm.mount: Deactivated successfully. Sep 13 00:07:32.398858 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c8141b4335b36c0ff8dcf0293afad8c4c68b3b8609120ae53c617c1cb5c3d55b-rootfs.mount: Deactivated successfully. Sep 13 00:07:32.401855 systemd[1]: cri-containerd-03d4b064a22af82bdad2e02c19332cfd5f94f808b006096a4b417d5cc03a29c5.scope: Deactivated successfully. 
Sep 13 00:07:32.405691 containerd[1473]: time="2025-09-13T00:07:32.405621638Z" level=info msg="shim disconnected" id=c8141b4335b36c0ff8dcf0293afad8c4c68b3b8609120ae53c617c1cb5c3d55b namespace=k8s.io Sep 13 00:07:32.405691 containerd[1473]: time="2025-09-13T00:07:32.405687999Z" level=warning msg="cleaning up after shim disconnected" id=c8141b4335b36c0ff8dcf0293afad8c4c68b3b8609120ae53c617c1cb5c3d55b namespace=k8s.io Sep 13 00:07:32.405691 containerd[1473]: time="2025-09-13T00:07:32.405697040Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:07:32.421147 containerd[1473]: time="2025-09-13T00:07:32.421062742Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:07:32Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Sep 13 00:07:32.424878 containerd[1473]: time="2025-09-13T00:07:32.424488636Z" level=info msg="StopContainer for \"c8141b4335b36c0ff8dcf0293afad8c4c68b3b8609120ae53c617c1cb5c3d55b\" returns successfully" Sep 13 00:07:32.425018 containerd[1473]: time="2025-09-13T00:07:32.424985809Z" level=info msg="StopPodSandbox for \"2e2d802619adb4151dc84d06117447fe5d16f0df37848ca11570dacfc8604d94\"" Sep 13 00:07:32.425058 containerd[1473]: time="2025-09-13T00:07:32.425024810Z" level=info msg="Container to stop \"d827d82840a0d27043e69246b8b15c1420cbab2af81a24dee365bc44c2621f8e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 00:07:32.425058 containerd[1473]: time="2025-09-13T00:07:32.425036851Z" level=info msg="Container to stop \"6f9376a49ae5070e4a3c93d19c9fffbf42f0adaf12afaa26edb190b09ba45bdb\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 00:07:32.425058 containerd[1473]: time="2025-09-13T00:07:32.425046451Z" level=info msg="Container to stop \"d359cbcb5f7a56bf5d32e3eb3582ace030ec12529cd2d2bac71a15aead83f9a5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 00:07:32.425123 containerd[1473]: time="2025-09-13T00:07:32.425059611Z" level=info msg="Container to stop \"8f4305ff5babd7784ab2cf41c896fa2007f4dfe3ad05337c385263972b104918\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 00:07:32.425123 containerd[1473]: time="2025-09-13T00:07:32.425071332Z" level=info msg="Container to stop \"c8141b4335b36c0ff8dcf0293afad8c4c68b3b8609120ae53c617c1cb5c3d55b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 00:07:32.428220 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2e2d802619adb4151dc84d06117447fe5d16f0df37848ca11570dacfc8604d94-shm.mount: Deactivated successfully. Sep 13 00:07:32.437257 systemd[1]: cri-containerd-2e2d802619adb4151dc84d06117447fe5d16f0df37848ca11570dacfc8604d94.scope: Deactivated successfully. 
Sep 13 00:07:32.448316 containerd[1473]: time="2025-09-13T00:07:32.447847837Z" level=info msg="shim disconnected" id=03d4b064a22af82bdad2e02c19332cfd5f94f808b006096a4b417d5cc03a29c5 namespace=k8s.io Sep 13 00:07:32.448316 containerd[1473]: time="2025-09-13T00:07:32.448286009Z" level=warning msg="cleaning up after shim disconnected" id=03d4b064a22af82bdad2e02c19332cfd5f94f808b006096a4b417d5cc03a29c5 namespace=k8s.io Sep 13 00:07:32.448974 containerd[1473]: time="2025-09-13T00:07:32.448758462Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:07:32.468592 containerd[1473]: time="2025-09-13T00:07:32.468284038Z" level=info msg="TearDown network for sandbox \"03d4b064a22af82bdad2e02c19332cfd5f94f808b006096a4b417d5cc03a29c5\" successfully" Sep 13 00:07:32.468592 containerd[1473]: time="2025-09-13T00:07:32.468341760Z" level=info msg="StopPodSandbox for \"03d4b064a22af82bdad2e02c19332cfd5f94f808b006096a4b417d5cc03a29c5\" returns successfully" Sep 13 00:07:32.471353 containerd[1473]: time="2025-09-13T00:07:32.471273400Z" level=info msg="shim disconnected" id=2e2d802619adb4151dc84d06117447fe5d16f0df37848ca11570dacfc8604d94 namespace=k8s.io Sep 13 00:07:32.471353 containerd[1473]: time="2025-09-13T00:07:32.471347122Z" level=warning msg="cleaning up after shim disconnected" id=2e2d802619adb4151dc84d06117447fe5d16f0df37848ca11570dacfc8604d94 namespace=k8s.io Sep 13 00:07:32.471597 containerd[1473]: time="2025-09-13T00:07:32.471360842Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:07:32.492927 containerd[1473]: time="2025-09-13T00:07:32.492609946Z" level=info msg="TearDown network for sandbox \"2e2d802619adb4151dc84d06117447fe5d16f0df37848ca11570dacfc8604d94\" successfully" Sep 13 00:07:32.492927 containerd[1473]: time="2025-09-13T00:07:32.492656987Z" level=info msg="StopPodSandbox for \"2e2d802619adb4151dc84d06117447fe5d16f0df37848ca11570dacfc8604d94\" returns successfully" Sep 13 00:07:32.618585 kubelet[2588]: I0913 00:07:32.618374 2588 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/be6b56e1-8417-4e49-a527-425e075efff1-etc-cni-netd\") pod \"be6b56e1-8417-4e49-a527-425e075efff1\" (UID: \"be6b56e1-8417-4e49-a527-425e075efff1\") " Sep 13 00:07:32.618585 kubelet[2588]: I0913 00:07:32.618452 2588 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/be6b56e1-8417-4e49-a527-425e075efff1-host-proc-sys-net\") pod \"be6b56e1-8417-4e49-a527-425e075efff1\" (UID: \"be6b56e1-8417-4e49-a527-425e075efff1\") " Sep 13 00:07:32.618585 kubelet[2588]: I0913 00:07:32.618482 2588 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/be6b56e1-8417-4e49-a527-425e075efff1-xtables-lock\") pod \"be6b56e1-8417-4e49-a527-425e075efff1\" (UID: \"be6b56e1-8417-4e49-a527-425e075efff1\") " Sep 13 00:07:32.618585 kubelet[2588]: I0913 00:07:32.618512 2588 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/be6b56e1-8417-4e49-a527-425e075efff1-host-proc-sys-kernel\") pod \"be6b56e1-8417-4e49-a527-425e075efff1\" (UID: \"be6b56e1-8417-4e49-a527-425e075efff1\") " Sep 13 00:07:32.618585 kubelet[2588]: I0913 00:07:32.618588 2588 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/be6b56e1-8417-4e49-a527-425e075efff1-cilium-run\") pod \"be6b56e1-8417-4e49-a527-425e075efff1\" (UID: \"be6b56e1-8417-4e49-a527-425e075efff1\") " Sep 13 00:07:32.619329 kubelet[2588]: I0913 00:07:32.618619 2588 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/be6b56e1-8417-4e49-a527-425e075efff1-cilium-cgroup\") pod \"be6b56e1-8417-4e49-a527-425e075efff1\" (UID: \"be6b56e1-8417-4e49-a527-425e075efff1\") " Sep 13 00:07:32.619329 kubelet[2588]: I0913 00:07:32.618670 2588 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/be6b56e1-8417-4e49-a527-425e075efff1-hubble-tls\") pod \"be6b56e1-8417-4e49-a527-425e075efff1\" (UID: \"be6b56e1-8417-4e49-a527-425e075efff1\") " Sep 13 00:07:32.619329 kubelet[2588]: I0913 00:07:32.618702 2588 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/be6b56e1-8417-4e49-a527-425e075efff1-lib-modules\") pod \"be6b56e1-8417-4e49-a527-425e075efff1\" (UID: \"be6b56e1-8417-4e49-a527-425e075efff1\") " Sep 13 00:07:32.619329 kubelet[2588]: I0913 00:07:32.618736 2588 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0f55381e-655e-45bb-869d-cf9249806e39-cilium-config-path\") pod \"0f55381e-655e-45bb-869d-cf9249806e39\" (UID: \"0f55381e-655e-45bb-869d-cf9249806e39\") " Sep 13 00:07:32.619329 kubelet[2588]: I0913 00:07:32.618765 2588 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/be6b56e1-8417-4e49-a527-425e075efff1-bpf-maps\") pod \"be6b56e1-8417-4e49-a527-425e075efff1\" (UID: \"be6b56e1-8417-4e49-a527-425e075efff1\") " Sep 13 00:07:32.619329 kubelet[2588]: I0913 00:07:32.618799 2588 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/be6b56e1-8417-4e49-a527-425e075efff1-cilium-config-path\") pod \"be6b56e1-8417-4e49-a527-425e075efff1\" (UID: \"be6b56e1-8417-4e49-a527-425e075efff1\") " Sep 13 00:07:32.619628 kubelet[2588]: I0913 00:07:32.618831 2588 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mr4kr\" (UniqueName: \"kubernetes.io/projected/be6b56e1-8417-4e49-a527-425e075efff1-kube-api-access-mr4kr\") pod \"be6b56e1-8417-4e49-a527-425e075efff1\" (UID: \"be6b56e1-8417-4e49-a527-425e075efff1\") " Sep 13 00:07:32.619628 kubelet[2588]: I0913 00:07:32.618857 2588 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/be6b56e1-8417-4e49-a527-425e075efff1-cni-path\") pod \"be6b56e1-8417-4e49-a527-425e075efff1\" (UID: \"be6b56e1-8417-4e49-a527-425e075efff1\") " Sep 13 00:07:32.619628 kubelet[2588]: I0913 00:07:32.618887 2588 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/be6b56e1-8417-4e49-a527-425e075efff1-clustermesh-secrets\") pod \"be6b56e1-8417-4e49-a527-425e075efff1\" (UID: \"be6b56e1-8417-4e49-a527-425e075efff1\") " Sep 13 00:07:32.619628 kubelet[2588]: I0913 00:07:32.618915 2588 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: 
\"kubernetes.io/host-path/be6b56e1-8417-4e49-a527-425e075efff1-hostproc\") pod \"be6b56e1-8417-4e49-a527-425e075efff1\" (UID: \"be6b56e1-8417-4e49-a527-425e075efff1\") " Sep 13 00:07:32.619628 kubelet[2588]: I0913 00:07:32.618947 2588 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mpt4x\" (UniqueName: \"kubernetes.io/projected/0f55381e-655e-45bb-869d-cf9249806e39-kube-api-access-mpt4x\") pod \"0f55381e-655e-45bb-869d-cf9249806e39\" (UID: \"0f55381e-655e-45bb-869d-cf9249806e39\") " Sep 13 00:07:32.621644 kubelet[2588]: I0913 00:07:32.621283 2588 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be6b56e1-8417-4e49-a527-425e075efff1-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "be6b56e1-8417-4e49-a527-425e075efff1" (UID: "be6b56e1-8417-4e49-a527-425e075efff1"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:07:32.621644 kubelet[2588]: I0913 00:07:32.621343 2588 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be6b56e1-8417-4e49-a527-425e075efff1-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "be6b56e1-8417-4e49-a527-425e075efff1" (UID: "be6b56e1-8417-4e49-a527-425e075efff1"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:07:32.621644 kubelet[2588]: I0913 00:07:32.621366 2588 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be6b56e1-8417-4e49-a527-425e075efff1-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "be6b56e1-8417-4e49-a527-425e075efff1" (UID: "be6b56e1-8417-4e49-a527-425e075efff1"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:07:32.621644 kubelet[2588]: I0913 00:07:32.621383 2588 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be6b56e1-8417-4e49-a527-425e075efff1-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "be6b56e1-8417-4e49-a527-425e075efff1" (UID: "be6b56e1-8417-4e49-a527-425e075efff1"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:07:32.621644 kubelet[2588]: I0913 00:07:32.621402 2588 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be6b56e1-8417-4e49-a527-425e075efff1-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "be6b56e1-8417-4e49-a527-425e075efff1" (UID: "be6b56e1-8417-4e49-a527-425e075efff1"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:07:32.622343 kubelet[2588]: I0913 00:07:32.621422 2588 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be6b56e1-8417-4e49-a527-425e075efff1-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "be6b56e1-8417-4e49-a527-425e075efff1" (UID: "be6b56e1-8417-4e49-a527-425e075efff1"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:07:32.622343 kubelet[2588]: I0913 00:07:32.621439 2588 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be6b56e1-8417-4e49-a527-425e075efff1-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "be6b56e1-8417-4e49-a527-425e075efff1" (UID: "be6b56e1-8417-4e49-a527-425e075efff1"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:07:32.624151 kubelet[2588]: I0913 00:07:32.623946 2588 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be6b56e1-8417-4e49-a527-425e075efff1-cni-path" (OuterVolumeSpecName: "cni-path") pod "be6b56e1-8417-4e49-a527-425e075efff1" (UID: "be6b56e1-8417-4e49-a527-425e075efff1"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:07:32.624277 kubelet[2588]: I0913 00:07:32.624170 2588 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be6b56e1-8417-4e49-a527-425e075efff1-hostproc" (OuterVolumeSpecName: "hostproc") pod "be6b56e1-8417-4e49-a527-425e075efff1" (UID: "be6b56e1-8417-4e49-a527-425e075efff1"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:07:32.627674 kubelet[2588]: I0913 00:07:32.627634 2588 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be6b56e1-8417-4e49-a527-425e075efff1-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "be6b56e1-8417-4e49-a527-425e075efff1" (UID: "be6b56e1-8417-4e49-a527-425e075efff1"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:07:32.628286 kubelet[2588]: I0913 00:07:32.627670 2588 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f55381e-655e-45bb-869d-cf9249806e39-kube-api-access-mpt4x" (OuterVolumeSpecName: "kube-api-access-mpt4x") pod "0f55381e-655e-45bb-869d-cf9249806e39" (UID: "0f55381e-655e-45bb-869d-cf9249806e39"). InnerVolumeSpecName "kube-api-access-mpt4x". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 13 00:07:32.631093 kubelet[2588]: I0913 00:07:32.631046 2588 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be6b56e1-8417-4e49-a527-425e075efff1-kube-api-access-mr4kr" (OuterVolumeSpecName: "kube-api-access-mr4kr") pod "be6b56e1-8417-4e49-a527-425e075efff1" (UID: "be6b56e1-8417-4e49-a527-425e075efff1"). InnerVolumeSpecName "kube-api-access-mr4kr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 13 00:07:32.632809 kubelet[2588]: I0913 00:07:32.632417 2588 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0f55381e-655e-45bb-869d-cf9249806e39-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "0f55381e-655e-45bb-869d-cf9249806e39" (UID: "0f55381e-655e-45bb-869d-cf9249806e39"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 13 00:07:32.632809 kubelet[2588]: I0913 00:07:32.632426 2588 scope.go:117] "RemoveContainer" containerID="1404941a4c7ac2e75cae9893966d6e619384c8bbd5c76f2b018459cf372f0a88" Sep 13 00:07:32.633273 kubelet[2588]: I0913 00:07:32.633205 2588 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be6b56e1-8417-4e49-a527-425e075efff1-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "be6b56e1-8417-4e49-a527-425e075efff1" (UID: "be6b56e1-8417-4e49-a527-425e075efff1"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 13 00:07:32.638520 kubelet[2588]: I0913 00:07:32.637930 2588 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be6b56e1-8417-4e49-a527-425e075efff1-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "be6b56e1-8417-4e49-a527-425e075efff1" (UID: "be6b56e1-8417-4e49-a527-425e075efff1"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 13 00:07:32.638873 kubelet[2588]: I0913 00:07:32.638849 2588 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be6b56e1-8417-4e49-a527-425e075efff1-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "be6b56e1-8417-4e49-a527-425e075efff1" (UID: "be6b56e1-8417-4e49-a527-425e075efff1"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 13 00:07:32.639527 containerd[1473]: time="2025-09-13T00:07:32.639419856Z" level=info msg="RemoveContainer for \"1404941a4c7ac2e75cae9893966d6e619384c8bbd5c76f2b018459cf372f0a88\"" Sep 13 00:07:32.640971 systemd[1]: Removed slice kubepods-besteffort-pod0f55381e_655e_45bb_869d_cf9249806e39.slice - libcontainer container kubepods-besteffort-pod0f55381e_655e_45bb_869d_cf9249806e39.slice. Sep 13 00:07:32.650068 containerd[1473]: time="2025-09-13T00:07:32.650022947Z" level=info msg="RemoveContainer for \"1404941a4c7ac2e75cae9893966d6e619384c8bbd5c76f2b018459cf372f0a88\" returns successfully" Sep 13 00:07:32.650628 kubelet[2588]: I0913 00:07:32.650600 2588 scope.go:117] "RemoveContainer" containerID="1404941a4c7ac2e75cae9893966d6e619384c8bbd5c76f2b018459cf372f0a88" Sep 13 00:07:32.653632 containerd[1473]: time="2025-09-13T00:07:32.652365532Z" level=error msg="ContainerStatus for \"1404941a4c7ac2e75cae9893966d6e619384c8bbd5c76f2b018459cf372f0a88\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1404941a4c7ac2e75cae9893966d6e619384c8bbd5c76f2b018459cf372f0a88\": not found" Sep 13 00:07:32.654008 systemd[1]: Removed slice kubepods-burstable-podbe6b56e1_8417_4e49_a527_425e075efff1.slice - libcontainer container kubepods-burstable-podbe6b56e1_8417_4e49_a527_425e075efff1.slice. Sep 13 00:07:32.654273 systemd[1]: kubepods-burstable-podbe6b56e1_8417_4e49_a527_425e075efff1.slice: Consumed 8.000s CPU time. 
Sep 13 00:07:32.655458 kubelet[2588]: E0913 00:07:32.654911 2588 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1404941a4c7ac2e75cae9893966d6e619384c8bbd5c76f2b018459cf372f0a88\": not found" containerID="1404941a4c7ac2e75cae9893966d6e619384c8bbd5c76f2b018459cf372f0a88" Sep 13 00:07:32.655458 kubelet[2588]: I0913 00:07:32.654954 2588 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1404941a4c7ac2e75cae9893966d6e619384c8bbd5c76f2b018459cf372f0a88"} err="failed to get container status \"1404941a4c7ac2e75cae9893966d6e619384c8bbd5c76f2b018459cf372f0a88\": rpc error: code = NotFound desc = an error occurred when try to find container \"1404941a4c7ac2e75cae9893966d6e619384c8bbd5c76f2b018459cf372f0a88\": not found" Sep 13 00:07:32.655458 kubelet[2588]: I0913 00:07:32.654995 2588 scope.go:117] "RemoveContainer" containerID="c8141b4335b36c0ff8dcf0293afad8c4c68b3b8609120ae53c617c1cb5c3d55b" Sep 13 00:07:32.658835 containerd[1473]: time="2025-09-13T00:07:32.658776468Z" level=info msg="RemoveContainer for \"c8141b4335b36c0ff8dcf0293afad8c4c68b3b8609120ae53c617c1cb5c3d55b\"" Sep 13 00:07:32.667687 containerd[1473]: time="2025-09-13T00:07:32.667615510Z" level=info msg="RemoveContainer for \"c8141b4335b36c0ff8dcf0293afad8c4c68b3b8609120ae53c617c1cb5c3d55b\" returns successfully" Sep 13 00:07:32.668432 kubelet[2588]: I0913 00:07:32.668067 2588 scope.go:117] "RemoveContainer" containerID="d359cbcb5f7a56bf5d32e3eb3582ace030ec12529cd2d2bac71a15aead83f9a5" Sep 13 00:07:32.671318 containerd[1473]: time="2025-09-13T00:07:32.670858879Z" level=info msg="RemoveContainer for \"d359cbcb5f7a56bf5d32e3eb3582ace030ec12529cd2d2bac71a15aead83f9a5\"" Sep 13 00:07:32.675410 containerd[1473]: time="2025-09-13T00:07:32.675316242Z" level=info msg="RemoveContainer for \"d359cbcb5f7a56bf5d32e3eb3582ace030ec12529cd2d2bac71a15aead83f9a5\" returns successfully" Sep 13 00:07:32.675812 kubelet[2588]: I0913 00:07:32.675734 2588 scope.go:117] "RemoveContainer" containerID="6f9376a49ae5070e4a3c93d19c9fffbf42f0adaf12afaa26edb190b09ba45bdb" Sep 13 00:07:32.677751 containerd[1473]: time="2025-09-13T00:07:32.677712587Z" level=info msg="RemoveContainer for \"6f9376a49ae5070e4a3c93d19c9fffbf42f0adaf12afaa26edb190b09ba45bdb\"" Sep 13 00:07:32.684219 containerd[1473]: time="2025-09-13T00:07:32.684059722Z" level=info msg="RemoveContainer for \"6f9376a49ae5070e4a3c93d19c9fffbf42f0adaf12afaa26edb190b09ba45bdb\" returns successfully" Sep 13 00:07:32.684622 kubelet[2588]: I0913 00:07:32.684367 2588 scope.go:117] "RemoveContainer" containerID="8f4305ff5babd7784ab2cf41c896fa2007f4dfe3ad05337c385263972b104918" Sep 13 00:07:32.686605 containerd[1473]: time="2025-09-13T00:07:32.686280663Z" level=info msg="RemoveContainer for \"8f4305ff5babd7784ab2cf41c896fa2007f4dfe3ad05337c385263972b104918\"" Sep 13 00:07:32.691832 containerd[1473]: time="2025-09-13T00:07:32.691786134Z" level=info msg="RemoveContainer for \"8f4305ff5babd7784ab2cf41c896fa2007f4dfe3ad05337c385263972b104918\" returns successfully" Sep 13 00:07:32.692355 kubelet[2588]: I0913 00:07:32.692327 2588 scope.go:117] "RemoveContainer" containerID="d827d82840a0d27043e69246b8b15c1420cbab2af81a24dee365bc44c2621f8e" Sep 13 00:07:32.694239 containerd[1473]: time="2025-09-13T00:07:32.694045796Z" level=info msg="RemoveContainer for \"d827d82840a0d27043e69246b8b15c1420cbab2af81a24dee365bc44c2621f8e\"" Sep 13 00:07:32.699109 containerd[1473]: 
time="2025-09-13T00:07:32.698862088Z" level=info msg="RemoveContainer for \"d827d82840a0d27043e69246b8b15c1420cbab2af81a24dee365bc44c2621f8e\" returns successfully" Sep 13 00:07:32.699275 kubelet[2588]: I0913 00:07:32.699183 2588 scope.go:117] "RemoveContainer" containerID="c8141b4335b36c0ff8dcf0293afad8c4c68b3b8609120ae53c617c1cb5c3d55b" Sep 13 00:07:32.699642 containerd[1473]: time="2025-09-13T00:07:32.699597508Z" level=error msg="ContainerStatus for \"c8141b4335b36c0ff8dcf0293afad8c4c68b3b8609120ae53c617c1cb5c3d55b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c8141b4335b36c0ff8dcf0293afad8c4c68b3b8609120ae53c617c1cb5c3d55b\": not found" Sep 13 00:07:32.699812 kubelet[2588]: E0913 00:07:32.699772 2588 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c8141b4335b36c0ff8dcf0293afad8c4c68b3b8609120ae53c617c1cb5c3d55b\": not found" containerID="c8141b4335b36c0ff8dcf0293afad8c4c68b3b8609120ae53c617c1cb5c3d55b" Sep 13 00:07:32.699857 kubelet[2588]: I0913 00:07:32.699826 2588 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c8141b4335b36c0ff8dcf0293afad8c4c68b3b8609120ae53c617c1cb5c3d55b"} err="failed to get container status \"c8141b4335b36c0ff8dcf0293afad8c4c68b3b8609120ae53c617c1cb5c3d55b\": rpc error: code = NotFound desc = an error occurred when try to find container \"c8141b4335b36c0ff8dcf0293afad8c4c68b3b8609120ae53c617c1cb5c3d55b\": not found" Sep 13 00:07:32.699857 kubelet[2588]: I0913 00:07:32.699850 2588 scope.go:117] "RemoveContainer" containerID="d359cbcb5f7a56bf5d32e3eb3582ace030ec12529cd2d2bac71a15aead83f9a5" Sep 13 00:07:32.700150 containerd[1473]: time="2025-09-13T00:07:32.700048881Z" level=error msg="ContainerStatus for \"d359cbcb5f7a56bf5d32e3eb3582ace030ec12529cd2d2bac71a15aead83f9a5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d359cbcb5f7a56bf5d32e3eb3582ace030ec12529cd2d2bac71a15aead83f9a5\": not found" Sep 13 00:07:32.700203 kubelet[2588]: E0913 00:07:32.700175 2588 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d359cbcb5f7a56bf5d32e3eb3582ace030ec12529cd2d2bac71a15aead83f9a5\": not found" containerID="d359cbcb5f7a56bf5d32e3eb3582ace030ec12529cd2d2bac71a15aead83f9a5" Sep 13 00:07:32.700229 kubelet[2588]: I0913 00:07:32.700199 2588 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d359cbcb5f7a56bf5d32e3eb3582ace030ec12529cd2d2bac71a15aead83f9a5"} err="failed to get container status \"d359cbcb5f7a56bf5d32e3eb3582ace030ec12529cd2d2bac71a15aead83f9a5\": rpc error: code = NotFound desc = an error occurred when try to find container \"d359cbcb5f7a56bf5d32e3eb3582ace030ec12529cd2d2bac71a15aead83f9a5\": not found" Sep 13 00:07:32.700229 kubelet[2588]: I0913 00:07:32.700213 2588 scope.go:117] "RemoveContainer" containerID="6f9376a49ae5070e4a3c93d19c9fffbf42f0adaf12afaa26edb190b09ba45bdb" Sep 13 00:07:32.700714 containerd[1473]: time="2025-09-13T00:07:32.700592095Z" level=error msg="ContainerStatus for \"6f9376a49ae5070e4a3c93d19c9fffbf42f0adaf12afaa26edb190b09ba45bdb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6f9376a49ae5070e4a3c93d19c9fffbf42f0adaf12afaa26edb190b09ba45bdb\": not found" Sep 13 00:07:32.700803 kubelet[2588]: E0913 
00:07:32.700745 2588 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6f9376a49ae5070e4a3c93d19c9fffbf42f0adaf12afaa26edb190b09ba45bdb\": not found" containerID="6f9376a49ae5070e4a3c93d19c9fffbf42f0adaf12afaa26edb190b09ba45bdb" Sep 13 00:07:32.700803 kubelet[2588]: I0913 00:07:32.700768 2588 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6f9376a49ae5070e4a3c93d19c9fffbf42f0adaf12afaa26edb190b09ba45bdb"} err="failed to get container status \"6f9376a49ae5070e4a3c93d19c9fffbf42f0adaf12afaa26edb190b09ba45bdb\": rpc error: code = NotFound desc = an error occurred when try to find container \"6f9376a49ae5070e4a3c93d19c9fffbf42f0adaf12afaa26edb190b09ba45bdb\": not found" Sep 13 00:07:32.700803 kubelet[2588]: I0913 00:07:32.700785 2588 scope.go:117] "RemoveContainer" containerID="8f4305ff5babd7784ab2cf41c896fa2007f4dfe3ad05337c385263972b104918" Sep 13 00:07:32.701161 kubelet[2588]: E0913 00:07:32.701114 2588 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8f4305ff5babd7784ab2cf41c896fa2007f4dfe3ad05337c385263972b104918\": not found" containerID="8f4305ff5babd7784ab2cf41c896fa2007f4dfe3ad05337c385263972b104918" Sep 13 00:07:32.701203 containerd[1473]: time="2025-09-13T00:07:32.700983826Z" level=error msg="ContainerStatus for \"8f4305ff5babd7784ab2cf41c896fa2007f4dfe3ad05337c385263972b104918\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8f4305ff5babd7784ab2cf41c896fa2007f4dfe3ad05337c385263972b104918\": not found" Sep 13 00:07:32.701231 kubelet[2588]: I0913 00:07:32.701214 2588 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8f4305ff5babd7784ab2cf41c896fa2007f4dfe3ad05337c385263972b104918"} err="failed to get container status \"8f4305ff5babd7784ab2cf41c896fa2007f4dfe3ad05337c385263972b104918\": rpc error: code = NotFound desc = an error occurred when try to find container \"8f4305ff5babd7784ab2cf41c896fa2007f4dfe3ad05337c385263972b104918\": not found" Sep 13 00:07:32.701256 kubelet[2588]: I0913 00:07:32.701235 2588 scope.go:117] "RemoveContainer" containerID="d827d82840a0d27043e69246b8b15c1420cbab2af81a24dee365bc44c2621f8e" Sep 13 00:07:32.701499 containerd[1473]: time="2025-09-13T00:07:32.701459919Z" level=error msg="ContainerStatus for \"d827d82840a0d27043e69246b8b15c1420cbab2af81a24dee365bc44c2621f8e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d827d82840a0d27043e69246b8b15c1420cbab2af81a24dee365bc44c2621f8e\": not found" Sep 13 00:07:32.701646 kubelet[2588]: E0913 00:07:32.701624 2588 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d827d82840a0d27043e69246b8b15c1420cbab2af81a24dee365bc44c2621f8e\": not found" containerID="d827d82840a0d27043e69246b8b15c1420cbab2af81a24dee365bc44c2621f8e" Sep 13 00:07:32.701750 kubelet[2588]: I0913 00:07:32.701652 2588 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d827d82840a0d27043e69246b8b15c1420cbab2af81a24dee365bc44c2621f8e"} err="failed to get container status \"d827d82840a0d27043e69246b8b15c1420cbab2af81a24dee365bc44c2621f8e\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"d827d82840a0d27043e69246b8b15c1420cbab2af81a24dee365bc44c2621f8e\": not found" Sep 13 00:07:32.720091 kubelet[2588]: I0913 00:07:32.720025 2588 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/be6b56e1-8417-4e49-a527-425e075efff1-host-proc-sys-kernel\") on node \"ci-4081-3-5-n-03d8b9aea3\" DevicePath \"\"" Sep 13 00:07:32.720311 kubelet[2588]: I0913 00:07:32.720125 2588 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/be6b56e1-8417-4e49-a527-425e075efff1-cilium-run\") on node \"ci-4081-3-5-n-03d8b9aea3\" DevicePath \"\"" Sep 13 00:07:32.720311 kubelet[2588]: I0913 00:07:32.720173 2588 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/be6b56e1-8417-4e49-a527-425e075efff1-cilium-cgroup\") on node \"ci-4081-3-5-n-03d8b9aea3\" DevicePath \"\"" Sep 13 00:07:32.720311 kubelet[2588]: I0913 00:07:32.720194 2588 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/be6b56e1-8417-4e49-a527-425e075efff1-hubble-tls\") on node \"ci-4081-3-5-n-03d8b9aea3\" DevicePath \"\"" Sep 13 00:07:32.720311 kubelet[2588]: I0913 00:07:32.720212 2588 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/be6b56e1-8417-4e49-a527-425e075efff1-lib-modules\") on node \"ci-4081-3-5-n-03d8b9aea3\" DevicePath \"\"" Sep 13 00:07:32.720311 kubelet[2588]: I0913 00:07:32.720232 2588 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0f55381e-655e-45bb-869d-cf9249806e39-cilium-config-path\") on node \"ci-4081-3-5-n-03d8b9aea3\" DevicePath \"\"" Sep 13 00:07:32.720311 kubelet[2588]: I0913 00:07:32.720253 2588 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/be6b56e1-8417-4e49-a527-425e075efff1-bpf-maps\") on node \"ci-4081-3-5-n-03d8b9aea3\" DevicePath \"\"" Sep 13 00:07:32.720311 kubelet[2588]: I0913 00:07:32.720273 2588 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/be6b56e1-8417-4e49-a527-425e075efff1-cilium-config-path\") on node \"ci-4081-3-5-n-03d8b9aea3\" DevicePath \"\"" Sep 13 00:07:32.720311 kubelet[2588]: I0913 00:07:32.720295 2588 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mr4kr\" (UniqueName: \"kubernetes.io/projected/be6b56e1-8417-4e49-a527-425e075efff1-kube-api-access-mr4kr\") on node \"ci-4081-3-5-n-03d8b9aea3\" DevicePath \"\"" Sep 13 00:07:32.720894 kubelet[2588]: I0913 00:07:32.720315 2588 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/be6b56e1-8417-4e49-a527-425e075efff1-cni-path\") on node \"ci-4081-3-5-n-03d8b9aea3\" DevicePath \"\"" Sep 13 00:07:32.720894 kubelet[2588]: I0913 00:07:32.720345 2588 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/be6b56e1-8417-4e49-a527-425e075efff1-clustermesh-secrets\") on node \"ci-4081-3-5-n-03d8b9aea3\" DevicePath \"\"" Sep 13 00:07:32.720894 kubelet[2588]: I0913 00:07:32.720364 2588 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/be6b56e1-8417-4e49-a527-425e075efff1-hostproc\") on node \"ci-4081-3-5-n-03d8b9aea3\" DevicePath \"\"" Sep 13 00:07:32.720894 kubelet[2588]: I0913 
00:07:32.720383 2588 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mpt4x\" (UniqueName: \"kubernetes.io/projected/0f55381e-655e-45bb-869d-cf9249806e39-kube-api-access-mpt4x\") on node \"ci-4081-3-5-n-03d8b9aea3\" DevicePath \"\"" Sep 13 00:07:32.720894 kubelet[2588]: I0913 00:07:32.720403 2588 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/be6b56e1-8417-4e49-a527-425e075efff1-etc-cni-netd\") on node \"ci-4081-3-5-n-03d8b9aea3\" DevicePath \"\"" Sep 13 00:07:32.720894 kubelet[2588]: I0913 00:07:32.720423 2588 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/be6b56e1-8417-4e49-a527-425e075efff1-host-proc-sys-net\") on node \"ci-4081-3-5-n-03d8b9aea3\" DevicePath \"\"" Sep 13 00:07:32.720894 kubelet[2588]: I0913 00:07:32.720442 2588 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/be6b56e1-8417-4e49-a527-425e075efff1-xtables-lock\") on node \"ci-4081-3-5-n-03d8b9aea3\" DevicePath \"\"" Sep 13 00:07:33.284756 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-03d4b064a22af82bdad2e02c19332cfd5f94f808b006096a4b417d5cc03a29c5-rootfs.mount: Deactivated successfully. Sep 13 00:07:33.284855 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2e2d802619adb4151dc84d06117447fe5d16f0df37848ca11570dacfc8604d94-rootfs.mount: Deactivated successfully. Sep 13 00:07:33.284909 systemd[1]: var-lib-kubelet-pods-0f55381e\x2d655e\x2d45bb\x2d869d\x2dcf9249806e39-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmpt4x.mount: Deactivated successfully. Sep 13 00:07:33.284962 systemd[1]: var-lib-kubelet-pods-be6b56e1\x2d8417\x2d4e49\x2da527\x2d425e075efff1-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmr4kr.mount: Deactivated successfully. Sep 13 00:07:33.285025 systemd[1]: var-lib-kubelet-pods-be6b56e1\x2d8417\x2d4e49\x2da527\x2d425e075efff1-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 13 00:07:33.285077 systemd[1]: var-lib-kubelet-pods-be6b56e1\x2d8417\x2d4e49\x2da527\x2d425e075efff1-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 13 00:07:34.046898 kubelet[2588]: I0913 00:07:34.045713 2588 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f55381e-655e-45bb-869d-cf9249806e39" path="/var/lib/kubelet/pods/0f55381e-655e-45bb-869d-cf9249806e39/volumes" Sep 13 00:07:34.046898 kubelet[2588]: I0913 00:07:34.046180 2588 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be6b56e1-8417-4e49-a527-425e075efff1" path="/var/lib/kubelet/pods/be6b56e1-8417-4e49-a527-425e075efff1/volumes" Sep 13 00:07:34.208775 kubelet[2588]: E0913 00:07:34.208718 2588 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 13 00:07:34.335944 sshd[4154]: pam_unix(sshd:session): session closed for user core Sep 13 00:07:34.341144 systemd[1]: sshd@20-49.13.17.32:22-147.75.109.163:37342.service: Deactivated successfully. Sep 13 00:07:34.343676 systemd[1]: session-20.scope: Deactivated successfully. Sep 13 00:07:34.343969 systemd[1]: session-20.scope: Consumed 1.536s CPU time. Sep 13 00:07:34.345907 systemd-logind[1452]: Session 20 logged out. Waiting for processes to exit. Sep 13 00:07:34.347503 systemd-logind[1452]: Removed session 20. 
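The repeated "ContainerStatus from runtime service failed ... code = NotFound" entries above are CRI gRPC errors: kubelet asks the runtime for the status of a container it has just removed, the runtime answers NotFound, and kubelet treats that as "already gone" rather than as a failure. A minimal Go sketch of that kind of check, assuming only the standard grpc-go status and codes packages (the helper name is illustrative, not kubelet's):

package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// isCRINotFound reports whether err is a gRPC status error with code NotFound,
// which is what the runtime returns once a container ID no longer exists.
func isCRINotFound(err error) bool {
	s, ok := status.FromError(err)
	return ok && s.Code() == codes.NotFound
}

func main() {
	// Reproduce the shape of the error in the log with a synthetic status error.
	err := status.Error(codes.NotFound, "an error occurred when try to find container: not found")
	fmt.Println(isCRINotFound(err)) // true: safe to treat the container as already removed
}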
Sep 13 00:07:34.515179 systemd[1]: Started sshd@21-49.13.17.32:22-147.75.109.163:60520.service - OpenSSH per-connection server daemon (147.75.109.163:60520). Sep 13 00:07:35.506650 sshd[4314]: Accepted publickey for core from 147.75.109.163 port 60520 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM Sep 13 00:07:35.510854 sshd[4314]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:07:35.526123 systemd-logind[1452]: New session 21 of user core. Sep 13 00:07:35.533679 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 13 00:07:36.940618 systemd[1]: Created slice kubepods-burstable-podac50c4d9_24f2_4d44_a2c0_ca29a77623ff.slice - libcontainer container kubepods-burstable-podac50c4d9_24f2_4d44_a2c0_ca29a77623ff.slice. Sep 13 00:07:37.051781 kubelet[2588]: I0913 00:07:37.051610 2588 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ac50c4d9-24f2-4d44-a2c0-ca29a77623ff-lib-modules\") pod \"cilium-x655x\" (UID: \"ac50c4d9-24f2-4d44-a2c0-ca29a77623ff\") " pod="kube-system/cilium-x655x" Sep 13 00:07:37.051781 kubelet[2588]: I0913 00:07:37.051703 2588 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ac50c4d9-24f2-4d44-a2c0-ca29a77623ff-bpf-maps\") pod \"cilium-x655x\" (UID: \"ac50c4d9-24f2-4d44-a2c0-ca29a77623ff\") " pod="kube-system/cilium-x655x" Sep 13 00:07:37.051781 kubelet[2588]: I0913 00:07:37.051752 2588 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ac50c4d9-24f2-4d44-a2c0-ca29a77623ff-cni-path\") pod \"cilium-x655x\" (UID: \"ac50c4d9-24f2-4d44-a2c0-ca29a77623ff\") " pod="kube-system/cilium-x655x" Sep 13 00:07:37.051781 kubelet[2588]: I0913 00:07:37.051796 2588 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ac50c4d9-24f2-4d44-a2c0-ca29a77623ff-hostproc\") pod \"cilium-x655x\" (UID: \"ac50c4d9-24f2-4d44-a2c0-ca29a77623ff\") " pod="kube-system/cilium-x655x" Sep 13 00:07:37.051781 kubelet[2588]: I0913 00:07:37.051836 2588 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzgg6\" (UniqueName: \"kubernetes.io/projected/ac50c4d9-24f2-4d44-a2c0-ca29a77623ff-kube-api-access-hzgg6\") pod \"cilium-x655x\" (UID: \"ac50c4d9-24f2-4d44-a2c0-ca29a77623ff\") " pod="kube-system/cilium-x655x" Sep 13 00:07:37.051781 kubelet[2588]: I0913 00:07:37.051950 2588 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ac50c4d9-24f2-4d44-a2c0-ca29a77623ff-host-proc-sys-net\") pod \"cilium-x655x\" (UID: \"ac50c4d9-24f2-4d44-a2c0-ca29a77623ff\") " pod="kube-system/cilium-x655x" Sep 13 00:07:37.053492 kubelet[2588]: I0913 00:07:37.052503 2588 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ac50c4d9-24f2-4d44-a2c0-ca29a77623ff-cilium-run\") pod \"cilium-x655x\" (UID: \"ac50c4d9-24f2-4d44-a2c0-ca29a77623ff\") " pod="kube-system/cilium-x655x" Sep 13 00:07:37.053492 kubelet[2588]: I0913 00:07:37.052650 2588 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
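The sshd entries above record a public key accepted for user core, identified by its SHA256 fingerprint. A self-contained Go sketch of how that fingerprint string is derived, assuming golang.org/x/crypto/ssh; a throwaway ed25519 key is generated here instead of the RSA key from this log:

package main

import (
	"crypto/ed25519"
	"crypto/rand"
	"fmt"
	"log"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Generate a throwaway key purely to demonstrate the fingerprint format;
	// sshd logs the same "SHA256:..." form for the client's key on accept.
	pub, _, err := ed25519.GenerateKey(rand.Reader)
	if err != nil {
		log.Fatal(err)
	}
	sshPub, err := ssh.NewPublicKey(pub)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(ssh.FingerprintSHA256(sshPub))
}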
\"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ac50c4d9-24f2-4d44-a2c0-ca29a77623ff-xtables-lock\") pod \"cilium-x655x\" (UID: \"ac50c4d9-24f2-4d44-a2c0-ca29a77623ff\") " pod="kube-system/cilium-x655x" Sep 13 00:07:37.053912 kubelet[2588]: I0913 00:07:37.052914 2588 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/ac50c4d9-24f2-4d44-a2c0-ca29a77623ff-cilium-ipsec-secrets\") pod \"cilium-x655x\" (UID: \"ac50c4d9-24f2-4d44-a2c0-ca29a77623ff\") " pod="kube-system/cilium-x655x" Sep 13 00:07:37.053912 kubelet[2588]: I0913 00:07:37.053678 2588 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ac50c4d9-24f2-4d44-a2c0-ca29a77623ff-etc-cni-netd\") pod \"cilium-x655x\" (UID: \"ac50c4d9-24f2-4d44-a2c0-ca29a77623ff\") " pod="kube-system/cilium-x655x" Sep 13 00:07:37.053912 kubelet[2588]: I0913 00:07:37.053703 2588 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ac50c4d9-24f2-4d44-a2c0-ca29a77623ff-cilium-config-path\") pod \"cilium-x655x\" (UID: \"ac50c4d9-24f2-4d44-a2c0-ca29a77623ff\") " pod="kube-system/cilium-x655x" Sep 13 00:07:37.053912 kubelet[2588]: I0913 00:07:37.053720 2588 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ac50c4d9-24f2-4d44-a2c0-ca29a77623ff-host-proc-sys-kernel\") pod \"cilium-x655x\" (UID: \"ac50c4d9-24f2-4d44-a2c0-ca29a77623ff\") " pod="kube-system/cilium-x655x" Sep 13 00:07:37.053912 kubelet[2588]: I0913 00:07:37.053740 2588 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ac50c4d9-24f2-4d44-a2c0-ca29a77623ff-hubble-tls\") pod \"cilium-x655x\" (UID: \"ac50c4d9-24f2-4d44-a2c0-ca29a77623ff\") " pod="kube-system/cilium-x655x" Sep 13 00:07:37.053912 kubelet[2588]: I0913 00:07:37.053778 2588 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ac50c4d9-24f2-4d44-a2c0-ca29a77623ff-cilium-cgroup\") pod \"cilium-x655x\" (UID: \"ac50c4d9-24f2-4d44-a2c0-ca29a77623ff\") " pod="kube-system/cilium-x655x" Sep 13 00:07:37.054125 kubelet[2588]: I0913 00:07:37.053808 2588 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ac50c4d9-24f2-4d44-a2c0-ca29a77623ff-clustermesh-secrets\") pod \"cilium-x655x\" (UID: \"ac50c4d9-24f2-4d44-a2c0-ca29a77623ff\") " pod="kube-system/cilium-x655x" Sep 13 00:07:37.072090 sshd[4314]: pam_unix(sshd:session): session closed for user core Sep 13 00:07:37.078606 systemd[1]: sshd@21-49.13.17.32:22-147.75.109.163:60520.service: Deactivated successfully. Sep 13 00:07:37.082067 systemd[1]: session-21.scope: Deactivated successfully. Sep 13 00:07:37.083457 systemd-logind[1452]: Session 21 logged out. Waiting for processes to exit. Sep 13 00:07:37.085232 systemd-logind[1452]: Removed session 21. 
Sep 13 00:07:37.246514 containerd[1473]: time="2025-09-13T00:07:37.246305141Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-x655x,Uid:ac50c4d9-24f2-4d44-a2c0-ca29a77623ff,Namespace:kube-system,Attempt:0,}" Sep 13 00:07:37.255395 systemd[1]: Started sshd@22-49.13.17.32:22-147.75.109.163:60528.service - OpenSSH per-connection server daemon (147.75.109.163:60528). Sep 13 00:07:37.283616 containerd[1473]: time="2025-09-13T00:07:37.282972740Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:07:37.283616 containerd[1473]: time="2025-09-13T00:07:37.283061342Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:07:37.283616 containerd[1473]: time="2025-09-13T00:07:37.283073342Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:07:37.283616 containerd[1473]: time="2025-09-13T00:07:37.283334869Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:07:37.304818 systemd[1]: Started cri-containerd-76f2b0131b65c1465edf19f099310881b88067c1db7d8571a7ba17a13506ecaf.scope - libcontainer container 76f2b0131b65c1465edf19f099310881b88067c1db7d8571a7ba17a13506ecaf. Sep 13 00:07:37.339579 containerd[1473]: time="2025-09-13T00:07:37.339459656Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-x655x,Uid:ac50c4d9-24f2-4d44-a2c0-ca29a77623ff,Namespace:kube-system,Attempt:0,} returns sandbox id \"76f2b0131b65c1465edf19f099310881b88067c1db7d8571a7ba17a13506ecaf\"" Sep 13 00:07:37.346102 containerd[1473]: time="2025-09-13T00:07:37.346054269Z" level=info msg="CreateContainer within sandbox \"76f2b0131b65c1465edf19f099310881b88067c1db7d8571a7ba17a13506ecaf\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 13 00:07:37.361133 containerd[1473]: time="2025-09-13T00:07:37.360981939Z" level=info msg="CreateContainer within sandbox \"76f2b0131b65c1465edf19f099310881b88067c1db7d8571a7ba17a13506ecaf\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"9e4ef3c24449be3fd99b62fe1d6f43a24c683eaa510d3c99eff2b36e850ced1d\"" Sep 13 00:07:37.361945 containerd[1473]: time="2025-09-13T00:07:37.361889603Z" level=info msg="StartContainer for \"9e4ef3c24449be3fd99b62fe1d6f43a24c683eaa510d3c99eff2b36e850ced1d\"" Sep 13 00:07:37.398866 systemd[1]: Started cri-containerd-9e4ef3c24449be3fd99b62fe1d6f43a24c683eaa510d3c99eff2b36e850ced1d.scope - libcontainer container 9e4ef3c24449be3fd99b62fe1d6f43a24c683eaa510d3c99eff2b36e850ced1d. Sep 13 00:07:37.430570 containerd[1473]: time="2025-09-13T00:07:37.430505316Z" level=info msg="StartContainer for \"9e4ef3c24449be3fd99b62fe1d6f43a24c683eaa510d3c99eff2b36e850ced1d\" returns successfully" Sep 13 00:07:37.442261 systemd[1]: cri-containerd-9e4ef3c24449be3fd99b62fe1d6f43a24c683eaa510d3c99eff2b36e850ced1d.scope: Deactivated successfully. 
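The RunPodSandbox entries above are containerd's CRI service creating the sandbox for cilium-x655x; the metadata in the log line (name, UID, namespace, attempt 0) maps directly onto the CRI request. A rough sketch of issuing that call over the CRI gRPC API, assuming k8s.io/cri-api and the default containerd socket; treat it as an approximation, not kubelet's actual code path:

package main

import (
	"context"
	"fmt"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	resp, err := rt.RunPodSandbox(context.Background(), &runtimeapi.RunPodSandboxRequest{
		Config: &runtimeapi.PodSandboxConfig{
			Metadata: &runtimeapi.PodSandboxMetadata{
				Name:      "cilium-x655x",
				Uid:       "ac50c4d9-24f2-4d44-a2c0-ca29a77623ff",
				Namespace: "kube-system",
				Attempt:   0,
			},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	// The returned ID is the long sandbox ID seen in the log (76f2b013...).
	fmt.Println(resp.PodSandboxId)
}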
Sep 13 00:07:37.484971 containerd[1473]: time="2025-09-13T00:07:37.484698573Z" level=info msg="shim disconnected" id=9e4ef3c24449be3fd99b62fe1d6f43a24c683eaa510d3c99eff2b36e850ced1d namespace=k8s.io Sep 13 00:07:37.484971 containerd[1473]: time="2025-09-13T00:07:37.484779575Z" level=warning msg="cleaning up after shim disconnected" id=9e4ef3c24449be3fd99b62fe1d6f43a24c683eaa510d3c99eff2b36e850ced1d namespace=k8s.io Sep 13 00:07:37.484971 containerd[1473]: time="2025-09-13T00:07:37.484790575Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:07:37.673975 containerd[1473]: time="2025-09-13T00:07:37.673916439Z" level=info msg="CreateContainer within sandbox \"76f2b0131b65c1465edf19f099310881b88067c1db7d8571a7ba17a13506ecaf\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 13 00:07:37.688223 containerd[1473]: time="2025-09-13T00:07:37.688109850Z" level=info msg="CreateContainer within sandbox \"76f2b0131b65c1465edf19f099310881b88067c1db7d8571a7ba17a13506ecaf\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"2a73d0a2ffae8198a4315c79c437fe6925bc560c5796b6746cdf5b9802a9025e\"" Sep 13 00:07:37.694591 containerd[1473]: time="2025-09-13T00:07:37.690063221Z" level=info msg="StartContainer for \"2a73d0a2ffae8198a4315c79c437fe6925bc560c5796b6746cdf5b9802a9025e\"" Sep 13 00:07:37.727912 systemd[1]: Started cri-containerd-2a73d0a2ffae8198a4315c79c437fe6925bc560c5796b6746cdf5b9802a9025e.scope - libcontainer container 2a73d0a2ffae8198a4315c79c437fe6925bc560c5796b6746cdf5b9802a9025e. Sep 13 00:07:37.762128 containerd[1473]: time="2025-09-13T00:07:37.761997422Z" level=info msg="StartContainer for \"2a73d0a2ffae8198a4315c79c437fe6925bc560c5796b6746cdf5b9802a9025e\" returns successfully" Sep 13 00:07:37.773783 systemd[1]: cri-containerd-2a73d0a2ffae8198a4315c79c437fe6925bc560c5796b6746cdf5b9802a9025e.scope: Deactivated successfully. Sep 13 00:07:37.801487 containerd[1473]: time="2025-09-13T00:07:37.801238807Z" level=info msg="shim disconnected" id=2a73d0a2ffae8198a4315c79c437fe6925bc560c5796b6746cdf5b9802a9025e namespace=k8s.io Sep 13 00:07:37.801487 containerd[1473]: time="2025-09-13T00:07:37.801306329Z" level=warning msg="cleaning up after shim disconnected" id=2a73d0a2ffae8198a4315c79c437fe6925bc560c5796b6746cdf5b9802a9025e namespace=k8s.io Sep 13 00:07:37.801487 containerd[1473]: time="2025-09-13T00:07:37.801316729Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:07:37.917517 kubelet[2588]: I0913 00:07:37.917135 2588 setters.go:618] "Node became not ready" node="ci-4081-3-5-n-03d8b9aea3" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-13T00:07:37Z","lastTransitionTime":"2025-09-13T00:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Sep 13 00:07:38.256971 sshd[4329]: Accepted publickey for core from 147.75.109.163 port 60528 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM Sep 13 00:07:38.259203 sshd[4329]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:07:38.267056 systemd-logind[1452]: New session 22 of user core. Sep 13 00:07:38.269779 systemd[1]: Started session-22.scope - Session 22 of User core. 
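The setters.go entry above records the node condition flipping to Ready=False with reason KubeletNotReady because the CNI plugin is not yet initialized (Cilium is still running its init containers). A Go sketch of that condition as a k8s.io/api/core/v1 NodeCondition, with the timestamp abbreviated:

package main

import (
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	ts := metav1.NewTime(time.Date(2025, 9, 13, 0, 7, 37, 0, time.UTC))
	cond := corev1.NodeCondition{
		Type:               corev1.NodeReady,
		Status:             corev1.ConditionFalse,
		LastHeartbeatTime:  ts,
		LastTransitionTime: ts,
		Reason:             "KubeletNotReady",
		Message:            "container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized",
	}
	fmt.Printf("%s=%s (%s)\n", cond.Type, cond.Status, cond.Reason)
}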
Sep 13 00:07:38.675939 containerd[1473]: time="2025-09-13T00:07:38.675768099Z" level=info msg="CreateContainer within sandbox \"76f2b0131b65c1465edf19f099310881b88067c1db7d8571a7ba17a13506ecaf\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 13 00:07:38.698170 containerd[1473]: time="2025-09-13T00:07:38.698024835Z" level=info msg="CreateContainer within sandbox \"76f2b0131b65c1465edf19f099310881b88067c1db7d8571a7ba17a13506ecaf\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"39bea666f1663aa8ddc086802e566fa0a31910d241e61bac7dfbb69c618cf46d\"" Sep 13 00:07:38.698774 containerd[1473]: time="2025-09-13T00:07:38.698748774Z" level=info msg="StartContainer for \"39bea666f1663aa8ddc086802e566fa0a31910d241e61bac7dfbb69c618cf46d\"" Sep 13 00:07:38.754827 systemd[1]: Started cri-containerd-39bea666f1663aa8ddc086802e566fa0a31910d241e61bac7dfbb69c618cf46d.scope - libcontainer container 39bea666f1663aa8ddc086802e566fa0a31910d241e61bac7dfbb69c618cf46d. Sep 13 00:07:38.792464 containerd[1473]: time="2025-09-13T00:07:38.792332397Z" level=info msg="StartContainer for \"39bea666f1663aa8ddc086802e566fa0a31910d241e61bac7dfbb69c618cf46d\" returns successfully" Sep 13 00:07:38.796874 systemd[1]: cri-containerd-39bea666f1663aa8ddc086802e566fa0a31910d241e61bac7dfbb69c618cf46d.scope: Deactivated successfully. Sep 13 00:07:38.829953 containerd[1473]: time="2025-09-13T00:07:38.829559321Z" level=info msg="shim disconnected" id=39bea666f1663aa8ddc086802e566fa0a31910d241e61bac7dfbb69c618cf46d namespace=k8s.io Sep 13 00:07:38.829953 containerd[1473]: time="2025-09-13T00:07:38.829759526Z" level=warning msg="cleaning up after shim disconnected" id=39bea666f1663aa8ddc086802e566fa0a31910d241e61bac7dfbb69c618cf46d namespace=k8s.io Sep 13 00:07:38.829953 containerd[1473]: time="2025-09-13T00:07:38.829791047Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:07:38.947977 sshd[4329]: pam_unix(sshd:session): session closed for user core Sep 13 00:07:38.953902 systemd[1]: sshd@22-49.13.17.32:22-147.75.109.163:60528.service: Deactivated successfully. Sep 13 00:07:38.956481 systemd[1]: session-22.scope: Deactivated successfully. Sep 13 00:07:38.957708 systemd-logind[1452]: Session 22 logged out. Waiting for processes to exit. Sep 13 00:07:38.959806 systemd-logind[1452]: Removed session 22. Sep 13 00:07:39.135048 systemd[1]: Started sshd@23-49.13.17.32:22-147.75.109.163:60530.service - OpenSSH per-connection server daemon (147.75.109.163:60530). Sep 13 00:07:39.162092 systemd[1]: run-containerd-runc-k8s.io-39bea666f1663aa8ddc086802e566fa0a31910d241e61bac7dfbb69c618cf46d-runc.Xwgx0r.mount: Deactivated successfully. Sep 13 00:07:39.162574 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-39bea666f1663aa8ddc086802e566fa0a31910d241e61bac7dfbb69c618cf46d-rootfs.mount: Deactivated successfully. 
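The mount-bpf-fs init container above exists to make sure the BPF filesystem is mounted at /sys/fs/bpf on the host before the agent starts. A minimal Go sketch of that mount using golang.org/x/sys/unix; Cilium's own implementation does more checking, so treat this as an approximation:

//go:build linux

package main

import (
	"log"
	"os"

	"golang.org/x/sys/unix"
)

func main() {
	const target = "/sys/fs/bpf"
	if err := os.MkdirAll(target, 0o755); err != nil {
		log.Fatal(err)
	}
	// Equivalent to: mount -t bpf bpffs /sys/fs/bpf
	// EBUSY is ignored here on the assumption that it means the filesystem
	// is already mounted at the target.
	if err := unix.Mount("bpffs", target, "bpf", 0, ""); err != nil && err != unix.EBUSY {
		log.Fatal(err)
	}
	log.Printf("bpf filesystem available at %s", target)
}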
Sep 13 00:07:39.210722 kubelet[2588]: E0913 00:07:39.210411 2588 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 13 00:07:39.689764 containerd[1473]: time="2025-09-13T00:07:39.688079738Z" level=info msg="CreateContainer within sandbox \"76f2b0131b65c1465edf19f099310881b88067c1db7d8571a7ba17a13506ecaf\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 13 00:07:39.710031 containerd[1473]: time="2025-09-13T00:07:39.709980100Z" level=info msg="CreateContainer within sandbox \"76f2b0131b65c1465edf19f099310881b88067c1db7d8571a7ba17a13506ecaf\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"6720eaa68bfd3f16df5f3670f1d2c06a840d7deee9ebb0afdbe8c5772f72a5d8\"" Sep 13 00:07:39.713566 containerd[1473]: time="2025-09-13T00:07:39.712947096Z" level=info msg="StartContainer for \"6720eaa68bfd3f16df5f3670f1d2c06a840d7deee9ebb0afdbe8c5772f72a5d8\"" Sep 13 00:07:39.746913 systemd[1]: Started cri-containerd-6720eaa68bfd3f16df5f3670f1d2c06a840d7deee9ebb0afdbe8c5772f72a5d8.scope - libcontainer container 6720eaa68bfd3f16df5f3670f1d2c06a840d7deee9ebb0afdbe8c5772f72a5d8. Sep 13 00:07:39.782142 systemd[1]: cri-containerd-6720eaa68bfd3f16df5f3670f1d2c06a840d7deee9ebb0afdbe8c5772f72a5d8.scope: Deactivated successfully. Sep 13 00:07:39.783770 containerd[1473]: time="2025-09-13T00:07:39.783656709Z" level=info msg="StartContainer for \"6720eaa68bfd3f16df5f3670f1d2c06a840d7deee9ebb0afdbe8c5772f72a5d8\" returns successfully" Sep 13 00:07:39.822060 containerd[1473]: time="2025-09-13T00:07:39.821747486Z" level=info msg="shim disconnected" id=6720eaa68bfd3f16df5f3670f1d2c06a840d7deee9ebb0afdbe8c5772f72a5d8 namespace=k8s.io Sep 13 00:07:39.822060 containerd[1473]: time="2025-09-13T00:07:39.821826328Z" level=warning msg="cleaning up after shim disconnected" id=6720eaa68bfd3f16df5f3670f1d2c06a840d7deee9ebb0afdbe8c5772f72a5d8 namespace=k8s.io Sep 13 00:07:39.822060 containerd[1473]: time="2025-09-13T00:07:39.821839768Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:07:40.137113 sshd[4552]: Accepted publickey for core from 147.75.109.163 port 60530 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM Sep 13 00:07:40.139172 sshd[4552]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:07:40.146952 systemd-logind[1452]: New session 23 of user core. Sep 13 00:07:40.154825 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 13 00:07:40.160925 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6720eaa68bfd3f16df5f3670f1d2c06a840d7deee9ebb0afdbe8c5772f72a5d8-rootfs.mount: Deactivated successfully. Sep 13 00:07:40.700394 containerd[1473]: time="2025-09-13T00:07:40.700336526Z" level=info msg="CreateContainer within sandbox \"76f2b0131b65c1465edf19f099310881b88067c1db7d8571a7ba17a13506ecaf\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 13 00:07:40.733533 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2263327141.mount: Deactivated successfully. 
Sep 13 00:07:40.739065 containerd[1473]: time="2025-09-13T00:07:40.739018628Z" level=info msg="CreateContainer within sandbox \"76f2b0131b65c1465edf19f099310881b88067c1db7d8571a7ba17a13506ecaf\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"4d46703b349ce70122f704592da11bbc92867e320100d481f9076fbde0010c10\"" Sep 13 00:07:40.741318 containerd[1473]: time="2025-09-13T00:07:40.741283046Z" level=info msg="StartContainer for \"4d46703b349ce70122f704592da11bbc92867e320100d481f9076fbde0010c10\"" Sep 13 00:07:40.801815 systemd[1]: Started cri-containerd-4d46703b349ce70122f704592da11bbc92867e320100d481f9076fbde0010c10.scope - libcontainer container 4d46703b349ce70122f704592da11bbc92867e320100d481f9076fbde0010c10. Sep 13 00:07:40.839752 containerd[1473]: time="2025-09-13T00:07:40.839696065Z" level=info msg="StartContainer for \"4d46703b349ce70122f704592da11bbc92867e320100d481f9076fbde0010c10\" returns successfully" Sep 13 00:07:41.175654 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Sep 13 00:07:41.721473 kubelet[2588]: I0913 00:07:41.721345 2588 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-x655x" podStartSLOduration=5.721323207 podStartE2EDuration="5.721323207s" podCreationTimestamp="2025-09-13 00:07:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:07:41.718975268 +0000 UTC m=+207.818165662" watchObservedRunningTime="2025-09-13 00:07:41.721323207 +0000 UTC m=+207.820513601" Sep 13 00:07:44.257107 systemd-networkd[1361]: lxc_health: Link UP Sep 13 00:07:44.290258 systemd-networkd[1361]: lxc_health: Gained carrier Sep 13 00:07:45.789718 systemd-networkd[1361]: lxc_health: Gained IPv6LL Sep 13 00:07:47.213441 systemd[1]: run-containerd-runc-k8s.io-4d46703b349ce70122f704592da11bbc92867e320100d481f9076fbde0010c10-runc.oamkp8.mount: Deactivated successfully. Sep 13 00:07:51.573145 systemd[1]: run-containerd-runc-k8s.io-4d46703b349ce70122f704592da11bbc92867e320100d481f9076fbde0010c10-runc.XqbyhN.mount: Deactivated successfully. Sep 13 00:07:51.805795 sshd[4552]: pam_unix(sshd:session): session closed for user core Sep 13 00:07:51.811900 systemd[1]: sshd@23-49.13.17.32:22-147.75.109.163:60530.service: Deactivated successfully. Sep 13 00:07:51.815053 systemd[1]: session-23.scope: Deactivated successfully. Sep 13 00:07:51.817616 systemd-logind[1452]: Session 23 logged out. Waiting for processes to exit. Sep 13 00:07:51.819173 systemd-logind[1452]: Removed session 23. Sep 13 00:08:06.465983 kubelet[2588]: E0913 00:08:06.465477 2588 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:58110->10.0.0.2:2379: read: connection timed out" Sep 13 00:08:06.471920 systemd[1]: cri-containerd-f8b89545adc273339ee9e6f63f5f9b4cc3b41a6582691ff32c700dfc05f67427.scope: Deactivated successfully. Sep 13 00:08:06.472216 systemd[1]: cri-containerd-f8b89545adc273339ee9e6f63f5f9b4cc3b41a6582691ff32c700dfc05f67427.scope: Consumed 3.790s CPU time, 13.6M memory peak, 0B memory swap peak. Sep 13 00:08:06.501464 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f8b89545adc273339ee9e6f63f5f9b4cc3b41a6582691ff32c700dfc05f67427-rootfs.mount: Deactivated successfully. 
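The pod_startup_latency_tracker entry above reports podStartSLOduration=5.721323207s for cilium-x655x, i.e. the observed running time minus the pod creation timestamp (the zero-valued pulling timestamps mean no image pull contributed). A small Go check of that arithmetic using the timestamps from the log:

package main

import (
	"fmt"
	"log"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

	created, err := time.Parse(layout, "2025-09-13 00:07:36 +0000 UTC")
	if err != nil {
		log.Fatal(err)
	}
	running, err := time.Parse(layout, "2025-09-13 00:07:41.721323207 +0000 UTC")
	if err != nil {
		log.Fatal(err)
	}
	// Prints 5.721323207s, matching the logged podStartSLOduration.
	fmt.Println(running.Sub(created))
}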
Sep 13 00:08:06.510427 containerd[1473]: time="2025-09-13T00:08:06.510276315Z" level=info msg="shim disconnected" id=f8b89545adc273339ee9e6f63f5f9b4cc3b41a6582691ff32c700dfc05f67427 namespace=k8s.io Sep 13 00:08:06.510912 containerd[1473]: time="2025-09-13T00:08:06.510430998Z" level=warning msg="cleaning up after shim disconnected" id=f8b89545adc273339ee9e6f63f5f9b4cc3b41a6582691ff32c700dfc05f67427 namespace=k8s.io Sep 13 00:08:06.510912 containerd[1473]: time="2025-09-13T00:08:06.510453198Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:08:06.772091 kubelet[2588]: I0913 00:08:06.771972 2588 scope.go:117] "RemoveContainer" containerID="f8b89545adc273339ee9e6f63f5f9b4cc3b41a6582691ff32c700dfc05f67427" Sep 13 00:08:06.776086 containerd[1473]: time="2025-09-13T00:08:06.775828256Z" level=info msg="CreateContainer within sandbox \"6e602a6a3dba45e6ffc9fa651593d23e574e07658bf57f10392047a9ead2fa6e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Sep 13 00:08:06.795025 containerd[1473]: time="2025-09-13T00:08:06.794966404Z" level=info msg="CreateContainer within sandbox \"6e602a6a3dba45e6ffc9fa651593d23e574e07658bf57f10392047a9ead2fa6e\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"3a05b68303ec43abb1b7f47d9e2d52346f1e33b99e3b088566533d90988d7971\"" Sep 13 00:08:06.797158 containerd[1473]: time="2025-09-13T00:08:06.795627217Z" level=info msg="StartContainer for \"3a05b68303ec43abb1b7f47d9e2d52346f1e33b99e3b088566533d90988d7971\"" Sep 13 00:08:06.827793 systemd[1]: Started cri-containerd-3a05b68303ec43abb1b7f47d9e2d52346f1e33b99e3b088566533d90988d7971.scope - libcontainer container 3a05b68303ec43abb1b7f47d9e2d52346f1e33b99e3b088566533d90988d7971. Sep 13 00:08:06.863972 containerd[1473]: time="2025-09-13T00:08:06.863913481Z" level=info msg="StartContainer for \"3a05b68303ec43abb1b7f47d9e2d52346f1e33b99e3b088566533d90988d7971\" returns successfully" Sep 13 00:08:06.904932 systemd[1]: cri-containerd-1ed07ead55cc9431b17b83a4437e68cb373a9885225f970421a8c6c4d43fd95d.scope: Deactivated successfully. Sep 13 00:08:06.905918 systemd[1]: cri-containerd-1ed07ead55cc9431b17b83a4437e68cb373a9885225f970421a8c6c4d43fd95d.scope: Consumed 4.941s CPU time, 16.7M memory peak, 0B memory swap peak. Sep 13 00:08:06.938948 containerd[1473]: time="2025-09-13T00:08:06.938885520Z" level=info msg="shim disconnected" id=1ed07ead55cc9431b17b83a4437e68cb373a9885225f970421a8c6c4d43fd95d namespace=k8s.io Sep 13 00:08:06.939182 containerd[1473]: time="2025-09-13T00:08:06.939166926Z" level=warning msg="cleaning up after shim disconnected" id=1ed07ead55cc9431b17b83a4437e68cb373a9885225f970421a8c6c4d43fd95d namespace=k8s.io Sep 13 00:08:06.939242 containerd[1473]: time="2025-09-13T00:08:06.939230567Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:08:07.502868 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1ed07ead55cc9431b17b83a4437e68cb373a9885225f970421a8c6c4d43fd95d-rootfs.mount: Deactivated successfully. 
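The entries above show the kube-scheduler container (f8b89545...) being removed and recreated in its existing sandbox as Attempt:1, followed by the kube-controller-manager container's shim shutting down. A sketch of listing what containerd currently holds in the CRI namespace, assuming the github.com/containerd/containerd (1.x) Go client and the default socket path:

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// CRI-managed containers (the ones in this log) live in the k8s.io namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	containers, err := client.Containers(ctx)
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range containers {
		fmt.Println(c.ID())
	}
}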
Sep 13 00:08:07.781966 kubelet[2588]: I0913 00:08:07.780618 2588 scope.go:117] "RemoveContainer" containerID="1ed07ead55cc9431b17b83a4437e68cb373a9885225f970421a8c6c4d43fd95d" Sep 13 00:08:07.784563 containerd[1473]: time="2025-09-13T00:08:07.784143765Z" level=info msg="CreateContainer within sandbox \"5fa9badd06105b9070b32e017db173a571e9050b04c18f2ed869bbbfe3cad1f2\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Sep 13 00:08:07.803985 containerd[1473]: time="2025-09-13T00:08:07.803104826Z" level=info msg="CreateContainer within sandbox \"5fa9badd06105b9070b32e017db173a571e9050b04c18f2ed869bbbfe3cad1f2\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"6a45838f7d8d3e069118d95b369a836f7f2e93fd234961adbb09b64b4165b439\"" Sep 13 00:08:07.803985 containerd[1473]: time="2025-09-13T00:08:07.803826480Z" level=info msg="StartContainer for \"6a45838f7d8d3e069118d95b369a836f7f2e93fd234961adbb09b64b4165b439\"" Sep 13 00:08:07.850768 systemd[1]: Started cri-containerd-6a45838f7d8d3e069118d95b369a836f7f2e93fd234961adbb09b64b4165b439.scope - libcontainer container 6a45838f7d8d3e069118d95b369a836f7f2e93fd234961adbb09b64b4165b439. Sep 13 00:08:07.900617 containerd[1473]: time="2025-09-13T00:08:07.900343261Z" level=info msg="StartContainer for \"6a45838f7d8d3e069118d95b369a836f7f2e93fd234961adbb09b64b4165b439\" returns successfully" Sep 13 00:08:11.905166 kubelet[2588]: E0913 00:08:11.904588 2588 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:57908->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4081-3-5-n-03d8b9aea3.1864aeeb96a5b18e kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4081-3-5-n-03d8b9aea3,UID:1b7de3a2d3a76ba4f8e0ed41950c57c0,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Liveness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4081-3-5-n-03d8b9aea3,},FirstTimestamp:2025-09-13 00:08:01.449324942 +0000 UTC m=+227.548515336,LastTimestamp:2025-09-13 00:08:01.449324942 +0000 UTC m=+227.548515336,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-5-n-03d8b9aea3,}"
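The final entry is an Unhealthy event for kube-apiserver (its liveness probe returned HTTP 500) that the API server rejected because its own read from etcd timed out. A minimal Go sketch of an HTTP liveness-style check with an explicit client timeout; the URL and TLS settings are illustrative assumptions, not taken from this log:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second, // fail fast instead of hanging like the etcd read in the log
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // illustration only; use the real CA in practice
		},
	}

	resp, err := client.Get("https://127.0.0.1:6443/livez")
	if err != nil {
		fmt.Println("probe error:", err)
		return
	}
	defer resp.Body.Close()
	// kubelet records an Unhealthy event (as above) for any non-2xx answer, e.g. 500.
	fmt.Println("statuscode:", resp.StatusCode)
}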