Sep 13 00:01:19.971530 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Sep 13 00:01:19.971575 kernel: Linux version 6.6.106-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Sep 12 22:36:20 -00 2025
Sep 13 00:01:19.971594 kernel: KASLR enabled
Sep 13 00:01:19.971606 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II
Sep 13 00:01:19.971618 kernel: efi: SMBIOS 3.0=0x139ed0000 MEMATTR=0x1390c1018 ACPI 2.0=0x136760018 RNG=0x13676e918 MEMRESERVE=0x136b43d18
Sep 13 00:01:19.971631 kernel: random: crng init done
Sep 13 00:01:19.971646 kernel: ACPI: Early table checksum verification disabled
Sep 13 00:01:19.971658 kernel: ACPI: RSDP 0x0000000136760018 000024 (v02 BOCHS )
Sep 13 00:01:19.971672 kernel: ACPI: XSDT 0x000000013676FE98 00006C (v01 BOCHS BXPC 00000001 01000013)
Sep 13 00:01:19.971687 kernel: ACPI: FACP 0x000000013676FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:01:19.971700 kernel: ACPI: DSDT 0x0000000136767518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:01:19.971712 kernel: ACPI: APIC 0x000000013676FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:01:19.971724 kernel: ACPI: PPTT 0x000000013676FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:01:19.971736 kernel: ACPI: GTDT 0x000000013676D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:01:19.971751 kernel: ACPI: MCFG 0x000000013676FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:01:19.971767 kernel: ACPI: SPCR 0x000000013676E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:01:19.971780 kernel: ACPI: DBG2 0x000000013676E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:01:19.971793 kernel: ACPI: IORT 0x000000013676E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:01:19.971806 kernel: ACPI: BGRT 0x000000013676E798 000038 (v01 INTEL EDK2 00000002 01000013)
Sep 13 00:01:19.971818 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600
Sep 13 00:01:19.971831 kernel: NUMA: Failed to initialise from firmware
Sep 13 00:01:19.971843 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff]
Sep 13 00:01:19.971856 kernel: NUMA: NODE_DATA [mem 0x13966f800-0x139674fff]
Sep 13 00:01:19.971868 kernel: Zone ranges:
Sep 13 00:01:19.971881 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Sep 13 00:01:19.971909 kernel: DMA32 empty
Sep 13 00:01:19.971929 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff]
Sep 13 00:01:19.971942 kernel: Movable zone start for each node
Sep 13 00:01:19.971958 kernel: Early memory node ranges
Sep 13 00:01:19.971971 kernel: node 0: [mem 0x0000000040000000-0x000000013676ffff]
Sep 13 00:01:19.971983 kernel: node 0: [mem 0x0000000136770000-0x0000000136b3ffff]
Sep 13 00:01:19.971997 kernel: node 0: [mem 0x0000000136b40000-0x0000000139e1ffff]
Sep 13 00:01:19.972009 kernel: node 0: [mem 0x0000000139e20000-0x0000000139eaffff]
Sep 13 00:01:19.972022 kernel: node 0: [mem 0x0000000139eb0000-0x0000000139ebffff]
Sep 13 00:01:19.972034 kernel: node 0: [mem 0x0000000139ec0000-0x0000000139fdffff]
Sep 13 00:01:19.972047 kernel: node 0: [mem 0x0000000139fe0000-0x0000000139ffffff]
Sep 13 00:01:19.972098 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff]
Sep 13 00:01:19.972119 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
Sep 13 00:01:19.972132 kernel: psci: probing for conduit method from ACPI.
Sep 13 00:01:19.972145 kernel: psci: PSCIv1.1 detected in firmware.
Sep 13 00:01:19.972163 kernel: psci: Using standard PSCI v0.2 function IDs
Sep 13 00:01:19.972176 kernel: psci: Trusted OS migration not required
Sep 13 00:01:19.972190 kernel: psci: SMC Calling Convention v1.1
Sep 13 00:01:19.972206 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Sep 13 00:01:19.972219 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
Sep 13 00:01:19.972263 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
Sep 13 00:01:19.972278 kernel: pcpu-alloc: [0] 0 [0] 1
Sep 13 00:01:19.972292 kernel: Detected PIPT I-cache on CPU0
Sep 13 00:01:19.972299 kernel: CPU features: detected: GIC system register CPU interface
Sep 13 00:01:19.972306 kernel: CPU features: detected: Hardware dirty bit management
Sep 13 00:01:19.972364 kernel: CPU features: detected: Spectre-v4
Sep 13 00:01:19.972372 kernel: CPU features: detected: Spectre-BHB
Sep 13 00:01:19.972380 kernel: CPU features: kernel page table isolation forced ON by KASLR
Sep 13 00:01:19.972390 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Sep 13 00:01:19.972397 kernel: CPU features: detected: ARM erratum 1418040
Sep 13 00:01:19.972404 kernel: CPU features: detected: SSBS not fully self-synchronizing
Sep 13 00:01:19.972411 kernel: alternatives: applying boot alternatives
Sep 13 00:01:19.972420 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=e1b46f3c9e154636c32f6cde6e746a00a6b37ca7432cb4e16d172c05f584a8c9
Sep 13 00:01:19.972427 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 13 00:01:19.972434 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 13 00:01:19.972441 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 13 00:01:19.972448 kernel: Fallback order for Node 0: 0
Sep 13 00:01:19.972455 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1008000
Sep 13 00:01:19.972462 kernel: Policy zone: Normal
Sep 13 00:01:19.972471 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 13 00:01:19.972478 kernel: software IO TLB: area num 2.
Sep 13 00:01:19.972485 kernel: software IO TLB: mapped [mem 0x00000000fbfff000-0x00000000fffff000] (64MB)
Sep 13 00:01:19.972493 kernel: Memory: 3882744K/4096000K available (10304K kernel code, 2186K rwdata, 8108K rodata, 39488K init, 897K bss, 213256K reserved, 0K cma-reserved)
Sep 13 00:01:19.972500 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Sep 13 00:01:19.972507 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 13 00:01:19.972515 kernel: rcu: RCU event tracing is enabled.
Sep 13 00:01:19.972523 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Sep 13 00:01:19.972530 kernel: Trampoline variant of Tasks RCU enabled.
Sep 13 00:01:19.972537 kernel: Tracing variant of Tasks RCU enabled.
Sep 13 00:01:19.972544 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 13 00:01:19.972553 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Sep 13 00:01:19.972560 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Sep 13 00:01:19.972567 kernel: GICv3: 256 SPIs implemented
Sep 13 00:01:19.972574 kernel: GICv3: 0 Extended SPIs implemented
Sep 13 00:01:19.972581 kernel: Root IRQ handler: gic_handle_irq
Sep 13 00:01:19.972588 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Sep 13 00:01:19.972595 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Sep 13 00:01:19.972602 kernel: ITS [mem 0x08080000-0x0809ffff]
Sep 13 00:01:19.972609 kernel: ITS@0x0000000008080000: allocated 8192 Devices @1000c0000 (indirect, esz 8, psz 64K, shr 1)
Sep 13 00:01:19.972617 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @1000d0000 (flat, esz 8, psz 64K, shr 1)
Sep 13 00:01:19.972624 kernel: GICv3: using LPI property table @0x00000001000e0000
Sep 13 00:01:19.972631 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000001000f0000
Sep 13 00:01:19.972640 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 13 00:01:19.972647 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 13 00:01:19.972654 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Sep 13 00:01:19.972661 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Sep 13 00:01:19.972668 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Sep 13 00:01:19.972676 kernel: Console: colour dummy device 80x25
Sep 13 00:01:19.972683 kernel: ACPI: Core revision 20230628
Sep 13 00:01:19.972691 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Sep 13 00:01:19.972698 kernel: pid_max: default: 32768 minimum: 301
Sep 13 00:01:19.972706 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Sep 13 00:01:19.972715 kernel: landlock: Up and running.
Sep 13 00:01:19.972722 kernel: SELinux: Initializing.
Sep 13 00:01:19.972729 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 13 00:01:19.972736 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 13 00:01:19.972744 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 13 00:01:19.972752 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 13 00:01:19.972759 kernel: rcu: Hierarchical SRCU implementation.
Sep 13 00:01:19.972766 kernel: rcu: Max phase no-delay instances is 400.
Sep 13 00:01:19.972775 kernel: Platform MSI: ITS@0x8080000 domain created
Sep 13 00:01:19.972784 kernel: PCI/MSI: ITS@0x8080000 domain created
Sep 13 00:01:19.972792 kernel: Remapping and enabling EFI services.
Sep 13 00:01:19.972799 kernel: smp: Bringing up secondary CPUs ...
Sep 13 00:01:19.972806 kernel: Detected PIPT I-cache on CPU1
Sep 13 00:01:19.972814 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Sep 13 00:01:19.972821 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100100000
Sep 13 00:01:19.972829 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 13 00:01:19.972842 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Sep 13 00:01:19.972850 kernel: smp: Brought up 1 node, 2 CPUs
Sep 13 00:01:19.972857 kernel: SMP: Total of 2 processors activated.
Sep 13 00:01:19.972867 kernel: CPU features: detected: 32-bit EL0 Support
Sep 13 00:01:19.972875 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Sep 13 00:01:19.972888 kernel: CPU features: detected: Common not Private translations
Sep 13 00:01:19.972897 kernel: CPU features: detected: CRC32 instructions
Sep 13 00:01:19.972905 kernel: CPU features: detected: Enhanced Virtualization Traps
Sep 13 00:01:19.972914 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Sep 13 00:01:19.972921 kernel: CPU features: detected: LSE atomic instructions
Sep 13 00:01:19.972929 kernel: CPU features: detected: Privileged Access Never
Sep 13 00:01:19.972937 kernel: CPU features: detected: RAS Extension Support
Sep 13 00:01:19.972947 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Sep 13 00:01:19.972955 kernel: CPU: All CPU(s) started at EL1
Sep 13 00:01:19.972962 kernel: alternatives: applying system-wide alternatives
Sep 13 00:01:19.972970 kernel: devtmpfs: initialized
Sep 13 00:01:19.972978 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 13 00:01:19.972986 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Sep 13 00:01:19.972994 kernel: pinctrl core: initialized pinctrl subsystem
Sep 13 00:01:19.973004 kernel: SMBIOS 3.0.0 present.
Sep 13 00:01:19.973012 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017
Sep 13 00:01:19.973020 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 13 00:01:19.973028 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Sep 13 00:01:19.973036 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Sep 13 00:01:19.973044 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Sep 13 00:01:19.973052 kernel: audit: initializing netlink subsys (disabled)
Sep 13 00:01:19.973068 kernel: audit: type=2000 audit(0.018:1): state=initialized audit_enabled=0 res=1
Sep 13 00:01:19.973076 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 13 00:01:19.976262 kernel: cpuidle: using governor menu
Sep 13 00:01:19.976278 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Sep 13 00:01:19.976286 kernel: ASID allocator initialised with 32768 entries
Sep 13 00:01:19.976294 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 13 00:01:19.976302 kernel: Serial: AMBA PL011 UART driver
Sep 13 00:01:19.976310 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Sep 13 00:01:19.976318 kernel: Modules: 0 pages in range for non-PLT usage
Sep 13 00:01:19.976326 kernel: Modules: 508992 pages in range for PLT usage
Sep 13 00:01:19.976333 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 13 00:01:19.976350 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Sep 13 00:01:19.976358 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Sep 13 00:01:19.976366 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Sep 13 00:01:19.976373 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 13 00:01:19.976381 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Sep 13 00:01:19.976389 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Sep 13 00:01:19.976397 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Sep 13 00:01:19.976404 kernel: ACPI: Added _OSI(Module Device)
Sep 13 00:01:19.976412 kernel: ACPI: Added _OSI(Processor Device)
Sep 13 00:01:19.976422 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 13 00:01:19.976429 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 13 00:01:19.976437 kernel: ACPI: Interpreter enabled
Sep 13 00:01:19.976444 kernel: ACPI: Using GIC for interrupt routing
Sep 13 00:01:19.976452 kernel: ACPI: MCFG table detected, 1 entries
Sep 13 00:01:19.976459 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Sep 13 00:01:19.976467 kernel: printk: console [ttyAMA0] enabled
Sep 13 00:01:19.976474 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 13 00:01:19.976706 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 13 00:01:19.976789 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Sep 13 00:01:19.976861 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Sep 13 00:01:19.976930 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Sep 13 00:01:19.977009 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Sep 13 00:01:19.977021 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Sep 13 00:01:19.977029 kernel: PCI host bridge to bus 0000:00
Sep 13 00:01:19.978337 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Sep 13 00:01:19.978471 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Sep 13 00:01:19.978571 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Sep 13 00:01:19.978637 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 13 00:01:19.978740 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Sep 13 00:01:19.978824 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000
Sep 13 00:01:19.978899 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x11289000-0x11289fff]
Sep 13 00:01:19.978977 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000600000-0x8000603fff 64bit pref]
Sep 13 00:01:19.979121 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Sep 13 00:01:19.979211 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x11288000-0x11288fff]
Sep 13 00:01:19.979292 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Sep 13 00:01:19.979362 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x11287000-0x11287fff]
Sep 13 00:01:19.979440 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Sep 13 00:01:19.979516 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x11286000-0x11286fff]
Sep 13 00:01:19.981312 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Sep 13 00:01:19.981443 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x11285000-0x11285fff]
Sep 13 00:01:19.981529 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Sep 13 00:01:19.981617 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x11284000-0x11284fff]
Sep 13 00:01:19.981709 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Sep 13 00:01:19.981809 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x11283000-0x11283fff]
Sep 13 00:01:19.981889 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Sep 13 00:01:19.981958 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x11282000-0x11282fff]
Sep 13 00:01:19.982035 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Sep 13 00:01:19.983594 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x11281000-0x11281fff]
Sep 13 00:01:19.983719 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Sep 13 00:01:19.983810 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x11280000-0x11280fff]
Sep 13 00:01:19.983895 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002
Sep 13 00:01:19.983976 kernel: pci 0000:00:04.0: reg 0x10: [io 0x0000-0x0007]
Sep 13 00:01:19.986212 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Sep 13 00:01:19.986365 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x11000000-0x11000fff]
Sep 13 00:01:19.986442 kernel: pci 0000:01:00.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Sep 13 00:01:19.986513 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Sep 13 00:01:19.986606 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Sep 13 00:01:19.986677 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x10e00000-0x10e03fff 64bit]
Sep 13 00:01:19.986758 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Sep 13 00:01:19.986829 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x10c00000-0x10c00fff]
Sep 13 00:01:19.986899 kernel: pci 0000:03:00.0: reg 0x20: [mem 0x8000100000-0x8000103fff 64bit pref]
Sep 13 00:01:19.986980 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Sep 13 00:01:19.987052 kernel: pci 0000:04:00.0: reg 0x20: [mem 0x8000200000-0x8000203fff 64bit pref]
Sep 13 00:01:19.987277 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Sep 13 00:01:19.987352 kernel: pci 0000:05:00.0: reg 0x20: [mem 0x8000300000-0x8000303fff 64bit pref]
Sep 13 00:01:19.987438 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Sep 13 00:01:19.987511 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x10600000-0x10600fff]
Sep 13 00:01:19.987581 kernel: pci 0000:06:00.0: reg 0x20: [mem 0x8000400000-0x8000403fff 64bit pref]
Sep 13 00:01:19.987661 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Sep 13 00:01:19.987738 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x10400000-0x10400fff]
Sep 13 00:01:19.987807 kernel: pci 0000:07:00.0: reg 0x20: [mem 0x8000500000-0x8000503fff 64bit pref]
Sep 13 00:01:19.987879 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Sep 13 00:01:19.987956 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
Sep 13 00:01:19.988024 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000
Sep 13 00:01:19.990246 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000
Sep 13 00:01:19.990376 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
Sep 13 00:01:19.990446 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
Sep 13 00:01:19.990514 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000
Sep 13 00:01:19.990587 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Sep 13 00:01:19.990655 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000
Sep 13 00:01:19.990724 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000
Sep 13 00:01:19.990794 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Sep 13 00:01:19.990862 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000
Sep 13 00:01:19.990934 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
Sep 13 00:01:19.991008 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Sep 13 00:01:19.991117 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000
Sep 13 00:01:19.991194 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x000fffff] to [bus 05] add_size 200000 add_align 100000
Sep 13 00:01:19.991273 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Sep 13 00:01:19.991341 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000
Sep 13 00:01:19.991409 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000
Sep 13 00:01:19.991490 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Sep 13 00:01:19.991557 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000
Sep 13 00:01:19.991625 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000
Sep 13 00:01:19.991697 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Sep 13 00:01:19.991767 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000
Sep 13 00:01:19.991834 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000
Sep 13 00:01:19.991912 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Sep 13 00:01:19.991981 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000
Sep 13 00:01:19.992051 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000
Sep 13 00:01:19.992224 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x10000000-0x101fffff]
Sep 13 00:01:19.992301 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x8000000000-0x80001fffff 64bit pref]
Sep 13 00:01:19.992372 kernel: pci 0000:00:02.1: BAR 14: assigned [mem 0x10200000-0x103fffff]
Sep 13 00:01:19.992440 kernel: pci 0000:00:02.1: BAR 15: assigned [mem 0x8000200000-0x80003fffff 64bit pref]
Sep 13 00:01:19.992513 kernel: pci 0000:00:02.2: BAR 14: assigned [mem 0x10400000-0x105fffff]
Sep 13 00:01:19.992588 kernel: pci 0000:00:02.2: BAR 15: assigned [mem 0x8000400000-0x80005fffff 64bit pref]
Sep 13 00:01:19.992659 kernel: pci 0000:00:02.3: BAR 14: assigned [mem 0x10600000-0x107fffff]
Sep 13 00:01:19.992734 kernel: pci 0000:00:02.3: BAR 15: assigned [mem 0x8000600000-0x80007fffff 64bit pref]
Sep 13 00:01:19.992815 kernel: pci 0000:00:02.4: BAR 14: assigned [mem 0x10800000-0x109fffff]
Sep 13 00:01:19.992885 kernel: pci 0000:00:02.4: BAR 15: assigned [mem 0x8000800000-0x80009fffff 64bit pref]
Sep 13 00:01:19.992956 kernel: pci 0000:00:02.5: BAR 14: assigned [mem 0x10a00000-0x10bfffff]
Sep 13 00:01:19.993025 kernel: pci 0000:00:02.5: BAR 15: assigned [mem 0x8000a00000-0x8000bfffff 64bit pref]
Sep 13 00:01:19.994491 kernel: pci 0000:00:02.6: BAR 14: assigned [mem 0x10c00000-0x10dfffff]
Sep 13 00:01:19.994609 kernel: pci 0000:00:02.6: BAR 15: assigned [mem 0x8000c00000-0x8000dfffff 64bit pref]
Sep 13 00:01:19.994712 kernel: pci 0000:00:02.7: BAR 14: assigned [mem 0x10e00000-0x10ffffff]
Sep 13 00:01:19.994781 kernel: pci 0000:00:02.7: BAR 15: assigned [mem 0x8000e00000-0x8000ffffff 64bit pref]
Sep 13 00:01:19.994855 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x11000000-0x111fffff]
Sep 13 00:01:19.994922 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x8001000000-0x80011fffff 64bit pref]
Sep 13 00:01:19.994996 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8001200000-0x8001203fff 64bit pref]
Sep 13 00:01:19.995117 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x11200000-0x11200fff]
Sep 13 00:01:19.995198 kernel: pci 0000:00:02.0: BAR 0: assigned [mem 0x11201000-0x11201fff]
Sep 13 00:01:19.995269 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Sep 13 00:01:19.995343 kernel: pci 0000:00:02.1: BAR 0: assigned [mem 0x11202000-0x11202fff]
Sep 13 00:01:19.995411 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Sep 13 00:01:19.995483 kernel: pci 0000:00:02.2: BAR 0: assigned [mem 0x11203000-0x11203fff]
Sep 13 00:01:19.995564 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Sep 13 00:01:19.995648 kernel: pci 0000:00:02.3: BAR 0: assigned [mem 0x11204000-0x11204fff]
Sep 13 00:01:19.995739 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Sep 13 00:01:19.995832 kernel: pci 0000:00:02.4: BAR 0: assigned [mem 0x11205000-0x11205fff]
Sep 13 00:01:19.995900 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Sep 13 00:01:19.995988 kernel: pci 0000:00:02.5: BAR 0: assigned [mem 0x11206000-0x11206fff]
Sep 13 00:01:19.996077 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Sep 13 00:01:19.996164 kernel: pci 0000:00:02.6: BAR 0: assigned [mem 0x11207000-0x11207fff]
Sep 13 00:01:19.996270 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Sep 13 00:01:19.996355 kernel: pci 0000:00:02.7: BAR 0: assigned [mem 0x11208000-0x11208fff]
Sep 13 00:01:19.996429 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Sep 13 00:01:19.996514 kernel: pci 0000:00:03.0: BAR 0: assigned [mem 0x11209000-0x11209fff]
Sep 13 00:01:19.996592 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x9000-0x9fff]
Sep 13 00:01:19.996744 kernel: pci 0000:00:04.0: BAR 0: assigned [io 0xa000-0xa007]
Sep 13 00:01:19.996844 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x10000000-0x1007ffff pref]
Sep 13 00:01:19.996924 kernel: pci 0000:01:00.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Sep 13 00:01:19.997009 kernel: pci 0000:01:00.0: BAR 1: assigned [mem 0x10080000-0x10080fff]
Sep 13 00:01:19.997192 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Sep 13 00:01:19.997312 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Sep 13 00:01:19.997387 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff]
Sep 13 00:01:19.997463 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]
Sep 13 00:01:19.997548 kernel: pci 0000:02:00.0: BAR 0: assigned [mem 0x10200000-0x10203fff 64bit]
Sep 13 00:01:19.997622 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Sep 13 00:01:19.997705 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Sep 13 00:01:19.997795 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff]
Sep 13 00:01:19.997862 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]
Sep 13 00:01:19.997940 kernel: pci 0000:03:00.0: BAR 4: assigned [mem 0x8000400000-0x8000403fff 64bit pref]
Sep 13 00:01:19.998010 kernel: pci 0000:03:00.0: BAR 1: assigned [mem 0x10400000-0x10400fff]
Sep 13 00:01:19.998111 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Sep 13 00:01:19.998209 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Sep 13 00:01:19.998284 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff]
Sep 13 00:01:19.998351 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]
Sep 13 00:01:19.998431 kernel: pci 0000:04:00.0: BAR 4: assigned [mem 0x8000600000-0x8000603fff 64bit pref]
Sep 13 00:01:19.998504 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Sep 13 00:01:19.998573 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Sep 13 00:01:19.998654 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff]
Sep 13 00:01:19.998725 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]
Sep 13 00:01:19.998815 kernel: pci 0000:05:00.0: BAR 4: assigned [mem 0x8000800000-0x8000803fff 64bit pref]
Sep 13 00:01:19.998894 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Sep 13 00:01:19.998979 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Sep 13 00:01:19.999258 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]
Sep 13 00:01:19.999380 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]
Sep 13 00:01:19.999463 kernel: pci 0000:06:00.0: BAR 4: assigned [mem 0x8000a00000-0x8000a03fff 64bit pref]
Sep 13 00:01:19.999534 kernel: pci 0000:06:00.0: BAR 1: assigned [mem 0x10a00000-0x10a00fff]
Sep 13 00:01:19.999644 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Sep 13 00:01:19.999718 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Sep 13 00:01:19.999822 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff]
Sep 13 00:01:19.999899 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]
Sep 13 00:01:19.999977 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x10c00000-0x10c7ffff pref]
Sep 13 00:01:20.000050 kernel: pci 0000:07:00.0: BAR 4: assigned [mem 0x8000c00000-0x8000c03fff 64bit pref]
Sep 13 00:01:20.000229 kernel: pci 0000:07:00.0: BAR 1: assigned [mem 0x10c80000-0x10c80fff]
Sep 13 00:01:20.000305 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Sep 13 00:01:20.000372 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Sep 13 00:01:20.000439 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff]
Sep 13 00:01:20.000511 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]
Sep 13 00:01:20.000582 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Sep 13 00:01:20.000651 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Sep 13 00:01:20.000717 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff]
Sep 13 00:01:20.000782 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]
Sep 13 00:01:20.000853 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Sep 13 00:01:20.000920 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff]
Sep 13 00:01:20.000992 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff]
Sep 13 00:01:20.001093 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref]
Sep 13 00:01:20.001227 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Sep 13 00:01:20.001304 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Sep 13 00:01:20.001372 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Sep 13 00:01:20.001463 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Sep 13 00:01:20.001529 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff]
Sep 13 00:01:20.001602 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref]
Sep 13 00:01:20.001691 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff]
Sep 13 00:01:20.001754 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff]
Sep 13 00:01:20.001817 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref]
Sep 13 00:01:20.001902 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff]
Sep 13 00:01:20.001980 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff]
Sep 13 00:01:20.002042 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref]
Sep 13 00:01:20.002262 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff]
Sep 13 00:01:20.002347 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff]
Sep 13 00:01:20.002432 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref]
Sep 13 00:01:20.002537 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff]
Sep 13 00:01:20.002611 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff]
Sep 13 00:01:20.002675 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref]
Sep 13 00:01:20.002771 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff]
Sep 13 00:01:20.002851 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff]
Sep 13 00:01:20.002916 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref]
Sep 13 00:01:20.003002 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff]
Sep 13 00:01:20.003114 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff]
Sep 13 00:01:20.003201 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref]
Sep 13 00:01:20.003293 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff]
Sep 13 00:01:20.003357 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff]
Sep 13 00:01:20.003432 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref]
Sep 13 00:01:20.003517 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff]
Sep 13 00:01:20.003587 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff]
Sep 13 00:01:20.003666 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref]
Sep 13 00:01:20.003682 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Sep 13 00:01:20.003691 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Sep 13 00:01:20.003700 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Sep 13 00:01:20.003709 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Sep 13 00:01:20.003719 kernel: iommu: Default domain type: Translated
Sep 13 00:01:20.003727 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Sep 13 00:01:20.003735 kernel: efivars: Registered efivars operations
Sep 13 00:01:20.003743 kernel: vgaarb: loaded
Sep 13 00:01:20.003751 kernel: clocksource: Switched to clocksource arch_sys_counter
Sep 13 00:01:20.003761 kernel: VFS: Disk quotas dquot_6.6.0
Sep 13 00:01:20.003769 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 13 00:01:20.003777 kernel: pnp: PnP ACPI init
Sep 13 00:01:20.003876 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Sep 13 00:01:20.003889 kernel: pnp: PnP ACPI: found 1 devices
Sep 13 00:01:20.003899 kernel: NET: Registered PF_INET protocol family
Sep 13 00:01:20.003908 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 13 00:01:20.003919 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 13 00:01:20.003931 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 13 00:01:20.003941 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 13 00:01:20.003949 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 13 00:01:20.003957 kernel: TCP: Hash tables configured (established 32768
bind 32768) Sep 13 00:01:20.003965 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 13 00:01:20.003973 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 13 00:01:20.003981 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 13 00:01:20.004120 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002) Sep 13 00:01:20.004138 kernel: PCI: CLS 0 bytes, default 64 Sep 13 00:01:20.004154 kernel: kvm [1]: HYP mode not available Sep 13 00:01:20.004163 kernel: Initialise system trusted keyrings Sep 13 00:01:20.004172 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Sep 13 00:01:20.004181 kernel: Key type asymmetric registered Sep 13 00:01:20.005219 kernel: Asymmetric key parser 'x509' registered Sep 13 00:01:20.005250 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Sep 13 00:01:20.005260 kernel: io scheduler mq-deadline registered Sep 13 00:01:20.005271 kernel: io scheduler kyber registered Sep 13 00:01:20.005280 kernel: io scheduler bfq registered Sep 13 00:01:20.005297 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Sep 13 00:01:20.005450 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50 Sep 13 00:01:20.005524 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50 Sep 13 00:01:20.005592 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 13 00:01:20.005671 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51 Sep 13 00:01:20.005741 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51 Sep 13 00:01:20.005813 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 13 00:01:20.005886 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 52 Sep 13 00:01:20.005955 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52 Sep 13 00:01:20.006021 kernel: pcieport 
0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 13 00:01:20.006277 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53 Sep 13 00:01:20.006363 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53 Sep 13 00:01:20.006439 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 13 00:01:20.006512 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54 Sep 13 00:01:20.006582 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54 Sep 13 00:01:20.006650 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 13 00:01:20.006721 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55 Sep 13 00:01:20.006792 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55 Sep 13 00:01:20.006860 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 13 00:01:20.006963 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56 Sep 13 00:01:20.007037 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56 Sep 13 00:01:20.007262 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 13 00:01:20.007354 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57 Sep 13 00:01:20.007431 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57 Sep 13 00:01:20.007519 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 13 00:01:20.007532 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38 Sep 13 00:01:20.007620 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58 Sep 13 00:01:20.007697 kernel: pcieport 0000:00:03.0: AER: enabled 
with IRQ 58 Sep 13 00:01:20.007781 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 13 00:01:20.007794 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Sep 13 00:01:20.007804 kernel: ACPI: button: Power Button [PWRB] Sep 13 00:01:20.007816 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Sep 13 00:01:20.007907 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002) Sep 13 00:01:20.008002 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002) Sep 13 00:01:20.008015 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 13 00:01:20.008024 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Sep 13 00:01:20.008194 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001) Sep 13 00:01:20.008209 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A Sep 13 00:01:20.008218 kernel: thunder_xcv, ver 1.0 Sep 13 00:01:20.008227 kernel: thunder_bgx, ver 1.0 Sep 13 00:01:20.008240 kernel: nicpf, ver 1.0 Sep 13 00:01:20.008251 kernel: nicvf, ver 1.0 Sep 13 00:01:20.008360 kernel: rtc-efi rtc-efi.0: registered as rtc0 Sep 13 00:01:20.008433 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-13T00:01:19 UTC (1757721679) Sep 13 00:01:20.008444 kernel: hid: raw HID events driver (C) Jiri Kosina Sep 13 00:01:20.008452 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Sep 13 00:01:20.008460 kernel: watchdog: Delayed init of the lockup detector failed: -19 Sep 13 00:01:20.008468 kernel: watchdog: Hard watchdog permanently disabled Sep 13 00:01:20.008478 kernel: NET: Registered PF_INET6 protocol family Sep 13 00:01:20.008487 kernel: Segment Routing with IPv6 Sep 13 00:01:20.008495 kernel: In-situ OAM (IOAM) with IPv6 Sep 13 00:01:20.008502 kernel: NET: Registered PF_PACKET protocol family Sep 13 00:01:20.008510 kernel: Key type dns_resolver 
registered Sep 13 00:01:20.008519 kernel: registered taskstats version 1 Sep 13 00:01:20.008527 kernel: Loading compiled-in X.509 certificates Sep 13 00:01:20.008535 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.106-flatcar: 036ad4721a31543be5c000f2896b40d1e5515c6e' Sep 13 00:01:20.008542 kernel: Key type .fscrypt registered Sep 13 00:01:20.008552 kernel: Key type fscrypt-provisioning registered Sep 13 00:01:20.008560 kernel: ima: No TPM chip found, activating TPM-bypass! Sep 13 00:01:20.008569 kernel: ima: Allocated hash algorithm: sha1 Sep 13 00:01:20.008577 kernel: ima: No architecture policies found Sep 13 00:01:20.008585 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Sep 13 00:01:20.008592 kernel: clk: Disabling unused clocks Sep 13 00:01:20.008601 kernel: Freeing unused kernel memory: 39488K Sep 13 00:01:20.008609 kernel: Run /init as init process Sep 13 00:01:20.008617 kernel: with arguments: Sep 13 00:01:20.008627 kernel: /init Sep 13 00:01:20.008635 kernel: with environment: Sep 13 00:01:20.008642 kernel: HOME=/ Sep 13 00:01:20.008650 kernel: TERM=linux Sep 13 00:01:20.008658 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 13 00:01:20.008668 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 13 00:01:20.008678 systemd[1]: Detected virtualization kvm. Sep 13 00:01:20.008687 systemd[1]: Detected architecture arm64. Sep 13 00:01:20.008697 systemd[1]: Running in initrd. Sep 13 00:01:20.008705 systemd[1]: No hostname configured, using default hostname. Sep 13 00:01:20.008716 systemd[1]: Hostname set to . Sep 13 00:01:20.008725 systemd[1]: Initializing machine ID from VM UUID. 
Sep 13 00:01:20.008733 systemd[1]: Queued start job for default target initrd.target. Sep 13 00:01:20.008742 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 13 00:01:20.008751 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 13 00:01:20.008760 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 13 00:01:20.008770 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 13 00:01:20.008779 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 13 00:01:20.008788 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 13 00:01:20.008798 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 13 00:01:20.008807 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 13 00:01:20.008815 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 13 00:01:20.008826 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 13 00:01:20.008835 systemd[1]: Reached target paths.target - Path Units. Sep 13 00:01:20.008843 systemd[1]: Reached target slices.target - Slice Units. Sep 13 00:01:20.008851 systemd[1]: Reached target swap.target - Swaps. Sep 13 00:01:20.008860 systemd[1]: Reached target timers.target - Timer Units. Sep 13 00:01:20.008868 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 13 00:01:20.008877 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 13 00:01:20.008888 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). 
Sep 13 00:01:20.008898 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Sep 13 00:01:20.008909 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 13 00:01:20.008919 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 13 00:01:20.008928 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 13 00:01:20.008938 systemd[1]: Reached target sockets.target - Socket Units. Sep 13 00:01:20.008949 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 13 00:01:20.008958 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 13 00:01:20.008969 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 13 00:01:20.008980 systemd[1]: Starting systemd-fsck-usr.service... Sep 13 00:01:20.008989 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 13 00:01:20.009001 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 13 00:01:20.009010 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 13 00:01:20.009019 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 13 00:01:20.009027 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 13 00:01:20.009088 systemd-journald[237]: Collecting audit messages is disabled. Sep 13 00:01:20.009139 systemd[1]: Finished systemd-fsck-usr.service. Sep 13 00:01:20.009151 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 13 00:01:20.009161 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 13 00:01:20.009175 kernel: Bridge firewalling registered Sep 13 00:01:20.009184 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
Sep 13 00:01:20.009194 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 13 00:01:20.009207 systemd-journald[237]: Journal started Sep 13 00:01:20.009230 systemd-journald[237]: Runtime Journal (/run/log/journal/6ee70abbc0af4f24b8295f79990cc0dc) is 8.0M, max 76.6M, 68.6M free. Sep 13 00:01:19.961753 systemd-modules-load[238]: Inserted module 'overlay' Sep 13 00:01:19.983937 systemd-modules-load[238]: Inserted module 'br_netfilter' Sep 13 00:01:20.014735 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 13 00:01:20.019139 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 13 00:01:20.019215 systemd[1]: Started systemd-journald.service - Journal Service. Sep 13 00:01:20.029161 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 13 00:01:20.040403 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 13 00:01:20.051572 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 13 00:01:20.054250 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 13 00:01:20.057520 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 13 00:01:20.070302 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 13 00:01:20.074171 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 13 00:01:20.075816 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 13 00:01:20.087384 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Sep 13 00:01:20.093430 dracut-cmdline[270]: dracut-dracut-053 Sep 13 00:01:20.098964 dracut-cmdline[270]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=e1b46f3c9e154636c32f6cde6e746a00a6b37ca7432cb4e16d172c05f584a8c9 Sep 13 00:01:20.135608 systemd-resolved[277]: Positive Trust Anchors: Sep 13 00:01:20.136585 systemd-resolved[277]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 13 00:01:20.136625 systemd-resolved[277]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 13 00:01:20.148811 systemd-resolved[277]: Defaulting to hostname 'linux'. Sep 13 00:01:20.151034 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 13 00:01:20.152126 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 13 00:01:20.217139 kernel: SCSI subsystem initialized Sep 13 00:01:20.222131 kernel: Loading iSCSI transport class v2.0-870. Sep 13 00:01:20.230123 kernel: iscsi: registered transport (tcp) Sep 13 00:01:20.245218 kernel: iscsi: registered transport (qla4xxx) Sep 13 00:01:20.245364 kernel: QLogic iSCSI HBA Driver Sep 13 00:01:20.296155 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. 
Sep 13 00:01:20.311490 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 13 00:01:20.334630 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 13 00:01:20.334772 kernel: device-mapper: uevent: version 1.0.3 Sep 13 00:01:20.334809 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Sep 13 00:01:20.414097 kernel: raid6: neonx8 gen() 15603 MB/s Sep 13 00:01:20.416172 kernel: raid6: neonx4 gen() 15546 MB/s Sep 13 00:01:20.432178 kernel: raid6: neonx2 gen() 13057 MB/s Sep 13 00:01:20.449172 kernel: raid6: neonx1 gen() 10340 MB/s Sep 13 00:01:20.466211 kernel: raid6: int64x8 gen() 6887 MB/s Sep 13 00:01:20.483151 kernel: raid6: int64x4 gen() 7252 MB/s Sep 13 00:01:20.500135 kernel: raid6: int64x2 gen() 6101 MB/s Sep 13 00:01:20.517135 kernel: raid6: int64x1 gen() 4986 MB/s Sep 13 00:01:20.517204 kernel: raid6: using algorithm neonx8 gen() 15603 MB/s Sep 13 00:01:20.534129 kernel: raid6: .... xor() 11863 MB/s, rmw enabled Sep 13 00:01:20.534209 kernel: raid6: using neon recovery algorithm Sep 13 00:01:20.540153 kernel: xor: measuring software checksum speed Sep 13 00:01:20.540210 kernel: 8regs : 19821 MB/sec Sep 13 00:01:20.540235 kernel: 32regs : 19646 MB/sec Sep 13 00:01:20.540258 kernel: arm64_neon : 26795 MB/sec Sep 13 00:01:20.541125 kernel: xor: using function: arm64_neon (26795 MB/sec) Sep 13 00:01:20.597136 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 13 00:01:20.617203 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 13 00:01:20.626851 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 13 00:01:20.645696 systemd-udevd[455]: Using default interface naming scheme 'v255'. Sep 13 00:01:20.649657 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Sep 13 00:01:20.658648 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 13 00:01:20.683940 dracut-pre-trigger[462]: rd.md=0: removing MD RAID activation Sep 13 00:01:20.733205 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 13 00:01:20.738516 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 13 00:01:20.803450 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 13 00:01:20.813883 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 13 00:01:20.847822 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 13 00:01:20.850292 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 13 00:01:20.852481 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 13 00:01:20.853348 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 13 00:01:20.862382 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 13 00:01:20.895031 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 13 00:01:20.933992 kernel: scsi host0: Virtio SCSI HBA Sep 13 00:01:20.948917 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5 Sep 13 00:01:20.949010 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Sep 13 00:01:20.950145 kernel: ACPI: bus type USB registered Sep 13 00:01:20.962493 kernel: usbcore: registered new interface driver usbfs Sep 13 00:01:20.962559 kernel: usbcore: registered new interface driver hub Sep 13 00:01:20.962570 kernel: usbcore: registered new device driver usb Sep 13 00:01:20.973940 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 13 00:01:20.975579 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Sep 13 00:01:20.978445 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 13 00:01:20.980074 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 13 00:01:20.980336 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 13 00:01:20.982155 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 13 00:01:20.989430 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 13 00:01:21.011157 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Sep 13 00:01:21.011421 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Sep 13 00:01:21.011533 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Sep 13 00:01:21.013130 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Sep 13 00:01:21.013394 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Sep 13 00:01:21.013492 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Sep 13 00:01:21.015124 kernel: hub 1-0:1.0: USB hub found Sep 13 00:01:21.015384 kernel: hub 1-0:1.0: 4 ports detected Sep 13 00:01:21.015478 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Sep 13 00:01:21.016176 kernel: hub 2-0:1.0: USB hub found Sep 13 00:01:21.016373 kernel: hub 2-0:1.0: 4 ports detected Sep 13 00:01:21.023429 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 13 00:01:21.029261 kernel: sr 0:0:0:0: Power-on or device reset occurred Sep 13 00:01:21.032276 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray Sep 13 00:01:21.032543 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Sep 13 00:01:21.032557 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0 Sep 13 00:01:21.032946 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Sep 13 00:01:21.058668 kernel: sd 0:0:0:1: Power-on or device reset occurred Sep 13 00:01:21.059009 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB) Sep 13 00:01:21.059230 kernel: sd 0:0:0:1: [sda] Write Protect is off Sep 13 00:01:21.061024 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08 Sep 13 00:01:21.061471 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Sep 13 00:01:21.067148 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 13 00:01:21.067226 kernel: GPT:17805311 != 80003071 Sep 13 00:01:21.067237 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 13 00:01:21.067249 kernel: GPT:17805311 != 80003071 Sep 13 00:01:21.067260 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 13 00:01:21.069116 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 13 00:01:21.071349 kernel: sd 0:0:0:1: [sda] Attached SCSI disk Sep 13 00:01:21.073668 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 13 00:01:21.128122 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (513) Sep 13 00:01:21.131429 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Sep 13 00:01:21.136001 kernel: BTRFS: device fsid 29bc4da8-c689-46a2-a16a-b7bbc722db77 devid 1 transid 37 /dev/sda3 scanned by (udev-worker) (502) Sep 13 00:01:21.138070 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Sep 13 00:01:21.155318 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Sep 13 00:01:21.160553 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Sep 13 00:01:21.163339 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. 
Sep 13 00:01:21.174390 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 13 00:01:21.183768 disk-uuid[571]: Primary Header is updated. Sep 13 00:01:21.183768 disk-uuid[571]: Secondary Entries is updated. Sep 13 00:01:21.183768 disk-uuid[571]: Secondary Header is updated. Sep 13 00:01:21.201930 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 13 00:01:21.252169 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Sep 13 00:01:21.388132 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1 Sep 13 00:01:21.388963 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Sep 13 00:01:21.390128 kernel: usbcore: registered new interface driver usbhid Sep 13 00:01:21.390168 kernel: usbhid: USB HID core driver Sep 13 00:01:21.495466 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd Sep 13 00:01:21.645152 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2 Sep 13 00:01:21.698799 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0 Sep 13 00:01:22.227761 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 13 00:01:22.227854 disk-uuid[572]: The operation has completed successfully. Sep 13 00:01:22.301457 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 13 00:01:22.301704 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 13 00:01:22.311437 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 13 00:01:22.327569 sh[587]: Success Sep 13 00:01:22.346134 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Sep 13 00:01:22.433513 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. 
Sep 13 00:01:22.438856 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 13 00:01:22.442746 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 13 00:01:22.466520 kernel: BTRFS info (device dm-0): first mount of filesystem 29bc4da8-c689-46a2-a16a-b7bbc722db77 Sep 13 00:01:22.466608 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Sep 13 00:01:22.466638 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Sep 13 00:01:22.466655 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 13 00:01:22.467466 kernel: BTRFS info (device dm-0): using free space tree Sep 13 00:01:22.477160 kernel: BTRFS info (device dm-0): enabling ssd optimizations Sep 13 00:01:22.479719 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 13 00:01:22.484182 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 13 00:01:22.494448 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 13 00:01:22.497428 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 13 00:01:22.518298 kernel: BTRFS info (device sda6): first mount of filesystem abbcf5a1-cc71-42ce-94f9-860f3aeda368 Sep 13 00:01:22.518378 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Sep 13 00:01:22.518400 kernel: BTRFS info (device sda6): using free space tree Sep 13 00:01:22.526162 kernel: BTRFS info (device sda6): enabling ssd optimizations Sep 13 00:01:22.526255 kernel: BTRFS info (device sda6): auto enabling async discard Sep 13 00:01:22.541880 systemd[1]: mnt-oem.mount: Deactivated successfully. Sep 13 00:01:22.543109 kernel: BTRFS info (device sda6): last unmount of filesystem abbcf5a1-cc71-42ce-94f9-860f3aeda368 Sep 13 00:01:22.550464 systemd[1]: Finished ignition-setup.service - Ignition (setup). 
Sep 13 00:01:22.558496 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 13 00:01:22.660186 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 13 00:01:22.672263 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 13 00:01:22.704693 ignition[678]: Ignition 2.19.0 Sep 13 00:01:22.704714 ignition[678]: Stage: fetch-offline Sep 13 00:01:22.707929 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 13 00:01:22.704765 ignition[678]: no configs at "/usr/lib/ignition/base.d" Sep 13 00:01:22.704774 ignition[678]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Sep 13 00:01:22.704957 ignition[678]: parsed url from cmdline: "" Sep 13 00:01:22.712162 systemd-networkd[774]: lo: Link UP Sep 13 00:01:22.704960 ignition[678]: no config URL provided Sep 13 00:01:22.712168 systemd-networkd[774]: lo: Gained carrier Sep 13 00:01:22.704965 ignition[678]: reading system config file "/usr/lib/ignition/user.ign" Sep 13 00:01:22.714075 systemd-networkd[774]: Enumeration completed Sep 13 00:01:22.704971 ignition[678]: no config at "/usr/lib/ignition/user.ign" Sep 13 00:01:22.714994 systemd-networkd[774]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 13 00:01:22.704977 ignition[678]: failed to fetch config: resource requires networking Sep 13 00:01:22.714998 systemd-networkd[774]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 13 00:01:22.705300 ignition[678]: Ignition finished successfully Sep 13 00:01:22.716802 systemd-networkd[774]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 13 00:01:22.716806 systemd-networkd[774]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. 
Sep 13 00:01:22.717552 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 13 00:01:22.717679 systemd-networkd[774]: eth0: Link UP
Sep 13 00:01:22.717684 systemd-networkd[774]: eth0: Gained carrier
Sep 13 00:01:22.717695 systemd-networkd[774]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 13 00:01:22.718855 systemd[1]: Reached target network.target - Network.
Sep 13 00:01:22.723760 systemd-networkd[774]: eth1: Link UP
Sep 13 00:01:22.723764 systemd-networkd[774]: eth1: Gained carrier
Sep 13 00:01:22.723778 systemd-networkd[774]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 13 00:01:22.726343 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Sep 13 00:01:22.744738 ignition[779]: Ignition 2.19.0
Sep 13 00:01:22.744752 ignition[779]: Stage: fetch
Sep 13 00:01:22.744988 ignition[779]: no configs at "/usr/lib/ignition/base.d"
Sep 13 00:01:22.745000 ignition[779]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Sep 13 00:01:22.745165 ignition[779]: parsed url from cmdline: ""
Sep 13 00:01:22.745170 ignition[779]: no config URL provided
Sep 13 00:01:22.745177 ignition[779]: reading system config file "/usr/lib/ignition/user.ign"
Sep 13 00:01:22.745186 ignition[779]: no config at "/usr/lib/ignition/user.ign"
Sep 13 00:01:22.745215 ignition[779]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1
Sep 13 00:01:22.746023 ignition[779]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable
Sep 13 00:01:22.758279 systemd-networkd[774]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1
Sep 13 00:01:22.785208 systemd-networkd[774]: eth0: DHCPv4 address 91.99.150.175/32, gateway 172.31.1.1 acquired from 172.31.1.1
Sep 13 00:01:22.946241 ignition[779]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2
Sep 13 00:01:22.953557 ignition[779]: GET result: OK
Sep 13 00:01:22.954173 ignition[779]: parsing config with SHA512: a5b9f9d395fdf007533f98969236617e1acf35c2de6ca5bcc858ecbfd27708a1ea3d937b476d7898a2b6e2703605d390860a279b2f1b850c8a20040f5c8a7a3e
Sep 13 00:01:22.960626 unknown[779]: fetched base config from "system"
Sep 13 00:01:22.960639 unknown[779]: fetched base config from "system"
Sep 13 00:01:22.960644 unknown[779]: fetched user config from "hetzner"
Sep 13 00:01:22.963395 ignition[779]: fetch: fetch complete
Sep 13 00:01:22.963411 ignition[779]: fetch: fetch passed
Sep 13 00:01:22.963520 ignition[779]: Ignition finished successfully
Sep 13 00:01:22.966676 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Sep 13 00:01:22.978119 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 13 00:01:23.009168 ignition[787]: Ignition 2.19.0
Sep 13 00:01:23.009188 ignition[787]: Stage: kargs
Sep 13 00:01:23.009665 ignition[787]: no configs at "/usr/lib/ignition/base.d"
Sep 13 00:01:23.009692 ignition[787]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Sep 13 00:01:23.013124 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 13 00:01:23.011223 ignition[787]: kargs: kargs passed
Sep 13 00:01:23.011309 ignition[787]: Ignition finished successfully
Sep 13 00:01:23.023368 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 13 00:01:23.040866 ignition[793]: Ignition 2.19.0
Sep 13 00:01:23.040882 ignition[793]: Stage: disks
Sep 13 00:01:23.041154 ignition[793]: no configs at "/usr/lib/ignition/base.d"
Sep 13 00:01:23.041164 ignition[793]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Sep 13 00:01:23.043471 ignition[793]: disks: disks passed
Sep 13 00:01:23.043598 ignition[793]: Ignition finished successfully
Sep 13 00:01:23.046741 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 13 00:01:23.048467 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 13 00:01:23.049297 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 13 00:01:23.050545 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 13 00:01:23.051898 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 13 00:01:23.053117 systemd[1]: Reached target basic.target - Basic System.
Sep 13 00:01:23.060385 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 13 00:01:23.087155 systemd-fsck[802]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Sep 13 00:01:23.093342 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 13 00:01:23.106358 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 13 00:01:23.163447 kernel: EXT4-fs (sda9): mounted filesystem d35fd879-6758-447b-9fdd-bb21dd7c5b2b r/w with ordered data mode. Quota mode: none.
Sep 13 00:01:23.165651 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 13 00:01:23.169134 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 13 00:01:23.181336 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 13 00:01:23.187154 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 13 00:01:23.198434 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Sep 13 00:01:23.199320 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 13 00:01:23.199364 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 13 00:01:23.203809 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 13 00:01:23.209143 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by mount (810)
Sep 13 00:01:23.211141 kernel: BTRFS info (device sda6): first mount of filesystem abbcf5a1-cc71-42ce-94f9-860f3aeda368
Sep 13 00:01:23.211213 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Sep 13 00:01:23.211227 kernel: BTRFS info (device sda6): using free space tree
Sep 13 00:01:23.216349 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 13 00:01:23.220118 kernel: BTRFS info (device sda6): enabling ssd optimizations
Sep 13 00:01:23.220210 kernel: BTRFS info (device sda6): auto enabling async discard
Sep 13 00:01:23.227476 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 13 00:01:23.286506 coreos-metadata[812]: Sep 13 00:01:23.286 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1
Sep 13 00:01:23.289751 coreos-metadata[812]: Sep 13 00:01:23.288 INFO Fetch successful
Sep 13 00:01:23.291220 coreos-metadata[812]: Sep 13 00:01:23.291 INFO wrote hostname ci-4081-3-5-n-dc9d7711ed to /sysroot/etc/hostname
Sep 13 00:01:23.295342 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Sep 13 00:01:23.300574 initrd-setup-root[839]: cut: /sysroot/etc/passwd: No such file or directory
Sep 13 00:01:23.306868 initrd-setup-root[846]: cut: /sysroot/etc/group: No such file or directory
Sep 13 00:01:23.313062 initrd-setup-root[853]: cut: /sysroot/etc/shadow: No such file or directory
Sep 13 00:01:23.318391 initrd-setup-root[860]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 13 00:01:23.445906 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 13 00:01:23.450267 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 13 00:01:23.453375 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 13 00:01:23.465820 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 13 00:01:23.467461 kernel: BTRFS info (device sda6): last unmount of filesystem abbcf5a1-cc71-42ce-94f9-860f3aeda368
Sep 13 00:01:23.497472 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 13 00:01:23.502941 ignition[928]: INFO : Ignition 2.19.0
Sep 13 00:01:23.504042 ignition[928]: INFO : Stage: mount
Sep 13 00:01:23.504864 ignition[928]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 13 00:01:23.506284 ignition[928]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Sep 13 00:01:23.508309 ignition[928]: INFO : mount: mount passed
Sep 13 00:01:23.508309 ignition[928]: INFO : Ignition finished successfully
Sep 13 00:01:23.511357 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 13 00:01:23.517415 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 13 00:01:23.538526 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 13 00:01:23.554705 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (940)
Sep 13 00:01:23.554791 kernel: BTRFS info (device sda6): first mount of filesystem abbcf5a1-cc71-42ce-94f9-860f3aeda368
Sep 13 00:01:23.555607 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Sep 13 00:01:23.555662 kernel: BTRFS info (device sda6): using free space tree
Sep 13 00:01:23.559148 kernel: BTRFS info (device sda6): enabling ssd optimizations
Sep 13 00:01:23.559211 kernel: BTRFS info (device sda6): auto enabling async discard
Sep 13 00:01:23.562572 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 13 00:01:23.593446 ignition[957]: INFO : Ignition 2.19.0
Sep 13 00:01:23.593446 ignition[957]: INFO : Stage: files
Sep 13 00:01:23.596063 ignition[957]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 13 00:01:23.596063 ignition[957]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Sep 13 00:01:23.596063 ignition[957]: DEBUG : files: compiled without relabeling support, skipping
Sep 13 00:01:23.601362 ignition[957]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 13 00:01:23.601362 ignition[957]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 13 00:01:23.603544 ignition[957]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 13 00:01:23.604803 ignition[957]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 13 00:01:23.604803 ignition[957]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 13 00:01:23.604191 unknown[957]: wrote ssh authorized keys file for user: core
Sep 13 00:01:23.608457 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Sep 13 00:01:23.608457 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Sep 13 00:01:23.746462 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 13 00:01:23.976675 systemd-networkd[774]: eth0: Gained IPv6LL
Sep 13 00:01:24.100754 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Sep 13 00:01:24.100754 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 13 00:01:24.100754 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Sep 13 00:01:24.232993 systemd-networkd[774]: eth1: Gained IPv6LL
Sep 13 00:01:24.310123 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Sep 13 00:01:24.412120 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 13 00:01:24.412120 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Sep 13 00:01:24.412120 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Sep 13 00:01:24.412120 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 13 00:01:24.412120 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 13 00:01:24.412120 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 13 00:01:24.412120 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 13 00:01:24.412120 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 13 00:01:24.412120 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 13 00:01:24.424600 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 13 00:01:24.424600 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 13 00:01:24.424600 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 13 00:01:24.424600 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 13 00:01:24.424600 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 13 00:01:24.424600 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1
Sep 13 00:01:24.602698 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Sep 13 00:01:24.907555 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 13 00:01:24.907555 ignition[957]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Sep 13 00:01:24.913384 ignition[957]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 13 00:01:24.913384 ignition[957]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 13 00:01:24.913384 ignition[957]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Sep 13 00:01:24.913384 ignition[957]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Sep 13 00:01:24.913384 ignition[957]: INFO : files: op(e): op(f): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Sep 13 00:01:24.913384 ignition[957]: INFO : files: op(e): op(f): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Sep 13 00:01:24.913384 ignition[957]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Sep 13 00:01:24.913384 ignition[957]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Sep 13 00:01:24.913384 ignition[957]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Sep 13 00:01:24.913384 ignition[957]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 13 00:01:24.913384 ignition[957]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 13 00:01:24.913384 ignition[957]: INFO : files: files passed
Sep 13 00:01:24.913384 ignition[957]: INFO : Ignition finished successfully
Sep 13 00:01:24.919401 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 13 00:01:24.932321 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 13 00:01:24.935529 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 13 00:01:24.943216 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 13 00:01:24.943485 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 13 00:01:24.971414 initrd-setup-root-after-ignition[986]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 13 00:01:24.971414 initrd-setup-root-after-ignition[986]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 13 00:01:24.976155 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 13 00:01:24.980377 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 13 00:01:24.981871 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 13 00:01:24.994536 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 13 00:01:25.036225 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 13 00:01:25.037496 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 13 00:01:25.038987 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 13 00:01:25.040219 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 13 00:01:25.041769 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 13 00:01:25.044198 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 13 00:01:25.077521 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 13 00:01:25.085388 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 13 00:01:25.114359 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 13 00:01:25.116111 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 13 00:01:25.117698 systemd[1]: Stopped target timers.target - Timer Units.
Sep 13 00:01:25.119063 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 13 00:01:25.119268 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 13 00:01:25.122466 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 13 00:01:25.123739 systemd[1]: Stopped target basic.target - Basic System.
Sep 13 00:01:25.125064 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 13 00:01:25.126584 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 13 00:01:25.128410 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 13 00:01:25.129880 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 13 00:01:25.131463 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 13 00:01:25.133004 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 13 00:01:25.134455 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 13 00:01:25.135581 systemd[1]: Stopped target swap.target - Swaps.
Sep 13 00:01:25.136586 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 13 00:01:25.136742 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 13 00:01:25.138519 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 13 00:01:25.139980 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 13 00:01:25.141905 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 13 00:01:25.142047 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 13 00:01:25.143739 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 13 00:01:25.143898 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 13 00:01:25.145912 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 13 00:01:25.146216 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 13 00:01:25.147757 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 13 00:01:25.147988 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 13 00:01:25.148971 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Sep 13 00:01:25.149177 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Sep 13 00:01:25.156479 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 13 00:01:25.159243 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 13 00:01:25.161326 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 13 00:01:25.167523 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 13 00:01:25.172266 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 13 00:01:25.172566 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 13 00:01:25.178408 ignition[1010]: INFO : Ignition 2.19.0
Sep 13 00:01:25.178408 ignition[1010]: INFO : Stage: umount
Sep 13 00:01:25.179733 ignition[1010]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 13 00:01:25.179733 ignition[1010]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Sep 13 00:01:25.183493 ignition[1010]: INFO : umount: umount passed
Sep 13 00:01:25.183493 ignition[1010]: INFO : Ignition finished successfully
Sep 13 00:01:25.181196 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 13 00:01:25.181442 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 13 00:01:25.189182 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 13 00:01:25.189451 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 13 00:01:25.197193 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 13 00:01:25.197446 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 13 00:01:25.199585 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 13 00:01:25.199711 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 13 00:01:25.202322 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 13 00:01:25.202403 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 13 00:01:25.203764 systemd[1]: ignition-fetch.service: Deactivated successfully.
Sep 13 00:01:25.203829 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Sep 13 00:01:25.206506 systemd[1]: Stopped target network.target - Network.
Sep 13 00:01:25.207424 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 13 00:01:25.207517 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 13 00:01:25.208720 systemd[1]: Stopped target paths.target - Path Units.
Sep 13 00:01:25.210729 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 13 00:01:25.214170 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 13 00:01:25.216151 systemd[1]: Stopped target slices.target - Slice Units.
Sep 13 00:01:25.218399 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 13 00:01:25.221064 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 13 00:01:25.221171 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 13 00:01:25.223885 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 13 00:01:25.224004 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 13 00:01:25.225953 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 13 00:01:25.226047 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 13 00:01:25.228005 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 13 00:01:25.228117 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 13 00:01:25.232367 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 13 00:01:25.233768 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 13 00:01:25.237346 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 13 00:01:25.238286 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 13 00:01:25.239550 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 13 00:01:25.240434 systemd-networkd[774]: eth0: DHCPv6 lease lost
Sep 13 00:01:25.241773 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 13 00:01:25.241887 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 13 00:01:25.245217 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 13 00:01:25.245383 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 13 00:01:25.245712 systemd-networkd[774]: eth1: DHCPv6 lease lost
Sep 13 00:01:25.251546 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 13 00:01:25.251898 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 13 00:01:25.255321 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 13 00:01:25.255398 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 13 00:01:25.263376 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 13 00:01:25.264482 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 13 00:01:25.264606 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 13 00:01:25.267067 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 13 00:01:25.267210 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 13 00:01:25.269127 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 13 00:01:25.269212 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 13 00:01:25.270548 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 13 00:01:25.270603 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 13 00:01:25.276405 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 13 00:01:25.295323 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 13 00:01:25.295579 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 13 00:01:25.299927 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 13 00:01:25.300312 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 13 00:01:25.302899 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 13 00:01:25.303003 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 13 00:01:25.305069 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 13 00:01:25.305147 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 13 00:01:25.306459 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 13 00:01:25.306524 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 13 00:01:25.308533 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 13 00:01:25.308602 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 13 00:01:25.310175 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 13 00:01:25.310242 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 13 00:01:25.324249 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 13 00:01:25.325667 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 13 00:01:25.325797 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 13 00:01:25.327677 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 13 00:01:25.327759 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 00:01:25.337521 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 13 00:01:25.337727 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 13 00:01:25.339315 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 13 00:01:25.348409 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 13 00:01:25.359791 systemd[1]: Switching root.
Sep 13 00:01:25.411104 systemd-journald[237]: Received SIGTERM from PID 1 (systemd).
Sep 13 00:01:25.411221 systemd-journald[237]: Journal stopped
Sep 13 00:01:26.613758 kernel: SELinux: policy capability network_peer_controls=1
Sep 13 00:01:26.613864 kernel: SELinux: policy capability open_perms=1
Sep 13 00:01:26.613877 kernel: SELinux: policy capability extended_socket_class=1
Sep 13 00:01:26.613894 kernel: SELinux: policy capability always_check_network=0
Sep 13 00:01:26.613907 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 13 00:01:26.613920 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 13 00:01:26.613932 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 13 00:01:26.613944 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 13 00:01:26.613956 kernel: audit: type=1403 audit(1757721685.608:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 13 00:01:26.613976 systemd[1]: Successfully loaded SELinux policy in 44.022ms.
Sep 13 00:01:26.614027 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 14.075ms.
Sep 13 00:01:26.614043 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 13 00:01:26.614061 systemd[1]: Detected virtualization kvm.
Sep 13 00:01:26.614072 systemd[1]: Detected architecture arm64.
Sep 13 00:01:26.614094 systemd[1]: Detected first boot.
Sep 13 00:01:26.614105 systemd[1]: Hostname set to .
Sep 13 00:01:26.614121 systemd[1]: Initializing machine ID from VM UUID.
Sep 13 00:01:26.614132 zram_generator::config[1053]: No configuration found.
Sep 13 00:01:26.614145 systemd[1]: Populated /etc with preset unit settings.
Sep 13 00:01:26.614156 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 13 00:01:26.614169 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 13 00:01:26.614260 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 13 00:01:26.614277 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 13 00:01:26.614289 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 13 00:01:26.614300 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 13 00:01:26.614311 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 13 00:01:26.614322 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 13 00:01:26.614341 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 13 00:01:26.614354 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 13 00:01:26.614365 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 13 00:01:26.614377 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 13 00:01:26.614388 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 13 00:01:26.614399 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 13 00:01:26.614412 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 13 00:01:26.614426 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 13 00:01:26.614439 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 13 00:01:26.614452 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Sep 13 00:01:26.614472 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 13 00:01:26.614485 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 13 00:01:26.614498 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 13 00:01:26.614511 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 13 00:01:26.614523 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 13 00:01:26.614537 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 13 00:01:26.614551 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 13 00:01:26.614565 systemd[1]: Reached target slices.target - Slice Units.
Sep 13 00:01:26.614576 systemd[1]: Reached target swap.target - Swaps.
Sep 13 00:01:26.614587 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 13 00:01:26.614603 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 13 00:01:26.614613 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 13 00:01:26.614628 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 13 00:01:26.614641 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 13 00:01:26.614653 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 13 00:01:26.614666 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 13 00:01:26.614681 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 13 00:01:26.614692 systemd[1]: Mounting media.mount - External Media Directory...
Sep 13 00:01:26.614710 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 13 00:01:26.614724 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 13 00:01:26.614736 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 13 00:01:26.614751 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 13 00:01:26.614764 systemd[1]: Reached target machines.target - Containers.
Sep 13 00:01:26.614776 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 13 00:01:26.614791 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 13 00:01:26.614810 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 13 00:01:26.614825 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 13 00:01:26.614839 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 13 00:01:26.614852 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 13 00:01:26.614866 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 13 00:01:26.614883 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 13 00:01:26.614896 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 13 00:01:26.614908 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 13 00:01:26.614919 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 13 00:01:26.614931 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 13 00:01:26.614942 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 13 00:01:26.614954 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 13 00:01:26.614965 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 13 00:01:26.614978 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 13 00:01:26.614990 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 13 00:01:26.615016 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 13 00:01:26.615030 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 13 00:01:26.615044 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 13 00:01:26.615054 systemd[1]: Stopped verity-setup.service.
Sep 13 00:01:26.615065 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 13 00:01:26.615077 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 13 00:01:26.618902 kernel: fuse: init (API version 7.39)
Sep 13 00:01:26.618934 systemd[1]: Mounted media.mount - External Media Directory.
Sep 13 00:01:26.618946 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 13 00:01:26.618957 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 13 00:01:26.618968 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 13 00:01:26.618980 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 13 00:01:26.618994 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 13 00:01:26.619067 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 13 00:01:26.619095 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 13 00:01:26.619110 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 13 00:01:26.619121 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 13 00:01:26.619132 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 13 00:01:26.619143 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 13 00:01:26.619155 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 13 00:01:26.619170 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 13 00:01:26.619181 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 13 00:01:26.619192 kernel: loop: module loaded
Sep 13 00:01:26.619203 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 13 00:01:26.619214 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 13 00:01:26.619225 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 13 00:01:26.619238 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 13 00:01:26.619250 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 13 00:01:26.619262 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 13 00:01:26.619273 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 13 00:01:26.619284 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 13 00:01:26.619295 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 13 00:01:26.619348 systemd-journald[1127]: Collecting audit messages is disabled.
Sep 13 00:01:26.619378 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 13 00:01:26.619390 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 13 00:01:26.619401 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 13 00:01:26.619412 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Sep 13 00:01:26.619424 systemd-journald[1127]: Journal started
Sep 13 00:01:26.619453 systemd-journald[1127]: Runtime Journal (/run/log/journal/6ee70abbc0af4f24b8295f79990cc0dc) is 8.0M, max 76.6M, 68.6M free.
Sep 13 00:01:26.225421 systemd[1]: Queued start job for default target multi-user.target.
Sep 13 00:01:26.248577 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Sep 13 00:01:26.249184 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 13 00:01:26.630107 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 13 00:01:26.636307 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 13 00:01:26.639119 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 13 00:01:26.647617 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 13 00:01:26.655221 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 13 00:01:26.660220 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 13 00:01:26.664168 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 13 00:01:26.674113 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 13 00:01:26.683127 kernel: ACPI: bus type drm_connector registered
Sep 13 00:01:26.691513 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 13 00:01:26.691619 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 13 00:01:26.696502 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 13 00:01:26.696739 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 13 00:01:26.699713 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 13 00:01:26.710987 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 13 00:01:26.770459 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 13 00:01:26.772788 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 13 00:01:26.775917 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 13 00:01:26.787415 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Sep 13 00:01:26.797283 kernel: loop0: detected capacity change from 0 to 114328
Sep 13 00:01:26.806433 systemd-journald[1127]: Time spent on flushing to /var/log/journal/6ee70abbc0af4f24b8295f79990cc0dc is 31.609ms for 1131 entries.
Sep 13 00:01:26.806433 systemd-journald[1127]: System Journal (/var/log/journal/6ee70abbc0af4f24b8295f79990cc0dc) is 8.0M, max 584.8M, 576.8M free.
Sep 13 00:01:26.850157 systemd-journald[1127]: Received client request to flush runtime journal.
Sep 13 00:01:26.850255 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 13 00:01:26.857520 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 13 00:01:26.862114 kernel: loop1: detected capacity change from 0 to 114432
Sep 13 00:01:26.864256 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 13 00:01:26.876403 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Sep 13 00:01:26.878739 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 13 00:01:26.881154 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 13 00:01:26.882743 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Sep 13 00:01:26.896669 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 13 00:01:26.906111 kernel: loop2: detected capacity change from 0 to 8
Sep 13 00:01:26.920578 udevadm[1183]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Sep 13 00:01:26.938528 kernel: loop3: detected capacity change from 0 to 203944
Sep 13 00:01:26.977271 systemd-tmpfiles[1188]: ACLs are not supported, ignoring.
Sep 13 00:01:26.978443 systemd-tmpfiles[1188]: ACLs are not supported, ignoring.
Sep 13 00:01:26.993892 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 13 00:01:27.014323 kernel: loop4: detected capacity change from 0 to 114328
Sep 13 00:01:27.036188 kernel: loop5: detected capacity change from 0 to 114432
Sep 13 00:01:27.058657 kernel: loop6: detected capacity change from 0 to 8
Sep 13 00:01:27.062131 kernel: loop7: detected capacity change from 0 to 203944
Sep 13 00:01:27.092418 (sd-merge)[1193]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'.
Sep 13 00:01:27.093044 (sd-merge)[1193]: Merged extensions into '/usr'.
Sep 13 00:01:27.103834 systemd[1]: Reloading requested from client PID 1151 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 13 00:01:27.104173 systemd[1]: Reloading...
Sep 13 00:01:27.272219 zram_generator::config[1223]: No configuration found.
Sep 13 00:01:27.403758 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 13 00:01:27.454349 ldconfig[1147]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 13 00:01:27.457762 systemd[1]: Reloading finished in 352 ms.
Sep 13 00:01:27.484425 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 13 00:01:27.487190 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 13 00:01:27.498555 systemd[1]: Starting ensure-sysext.service...
Sep 13 00:01:27.512274 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 13 00:01:27.526733 systemd[1]: Reloading requested from client PID 1257 ('systemctl') (unit ensure-sysext.service)...
Sep 13 00:01:27.526754 systemd[1]: Reloading...
Sep 13 00:01:27.562613 systemd-tmpfiles[1258]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 13 00:01:27.562936 systemd-tmpfiles[1258]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 13 00:01:27.563768 systemd-tmpfiles[1258]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 13 00:01:27.564025 systemd-tmpfiles[1258]: ACLs are not supported, ignoring.
Sep 13 00:01:27.566052 systemd-tmpfiles[1258]: ACLs are not supported, ignoring.
Sep 13 00:01:27.574392 systemd-tmpfiles[1258]: Detected autofs mount point /boot during canonicalization of boot.
Sep 13 00:01:27.574404 systemd-tmpfiles[1258]: Skipping /boot
Sep 13 00:01:27.594787 systemd-tmpfiles[1258]: Detected autofs mount point /boot during canonicalization of boot.
Sep 13 00:01:27.594806 systemd-tmpfiles[1258]: Skipping /boot
Sep 13 00:01:27.671509 zram_generator::config[1285]: No configuration found.
Sep 13 00:01:27.796288 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 13 00:01:27.848650 systemd[1]: Reloading finished in 321 ms.
Sep 13 00:01:27.868078 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 13 00:01:27.876812 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 13 00:01:27.895547 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Sep 13 00:01:27.902737 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 13 00:01:27.908338 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 13 00:01:27.915802 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 13 00:01:27.928342 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 13 00:01:27.935390 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 13 00:01:27.944562 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 13 00:01:27.947630 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 13 00:01:27.951696 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 13 00:01:27.958549 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 13 00:01:27.963396 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 13 00:01:27.965336 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 13 00:01:27.970169 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 13 00:01:27.970361 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 13 00:01:27.972791 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 13 00:01:27.977459 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 13 00:01:27.978420 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 13 00:01:27.984196 systemd[1]: Finished ensure-sysext.service.
Sep 13 00:01:27.993475 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Sep 13 00:01:28.019922 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 13 00:01:28.023812 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 13 00:01:28.025754 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 13 00:01:28.026673 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 13 00:01:28.029225 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 13 00:01:28.035969 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 13 00:01:28.038498 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 13 00:01:28.044436 systemd-udevd[1335]: Using default interface naming scheme 'v255'.
Sep 13 00:01:28.057576 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 13 00:01:28.060225 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 13 00:01:28.064751 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 13 00:01:28.069863 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 13 00:01:28.071665 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 13 00:01:28.071929 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 13 00:01:28.086686 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 13 00:01:28.107425 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 13 00:01:28.109012 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 13 00:01:28.111582 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 13 00:01:28.123514 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 13 00:01:28.154995 augenrules[1383]: No rules
Sep 13 00:01:28.163780 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Sep 13 00:01:28.168183 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 13 00:01:28.285222 systemd-networkd[1377]: lo: Link UP
Sep 13 00:01:28.285235 systemd-networkd[1377]: lo: Gained carrier
Sep 13 00:01:28.286034 systemd-networkd[1377]: Enumeration completed
Sep 13 00:01:28.286200 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 13 00:01:28.298433 systemd-resolved[1331]: Positive Trust Anchors:
Sep 13 00:01:28.298650 systemd-resolved[1331]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 13 00:01:28.298684 systemd-resolved[1331]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 13 00:01:28.308613 systemd-resolved[1331]: Using system hostname 'ci-4081-3-5-n-dc9d7711ed'.
Sep 13 00:01:28.321908 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 13 00:01:28.323038 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 13 00:01:28.324369 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Sep 13 00:01:28.325337 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Sep 13 00:01:28.325516 systemd[1]: Reached target network.target - Network.
Sep 13 00:01:28.326753 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 13 00:01:28.327954 systemd[1]: Reached target time-set.target - System Time Set.
Sep 13 00:01:28.401189 systemd-networkd[1377]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 13 00:01:28.401205 systemd-networkd[1377]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 13 00:01:28.404178 systemd-networkd[1377]: eth0: Link UP
Sep 13 00:01:28.404188 systemd-networkd[1377]: eth0: Gained carrier
Sep 13 00:01:28.404214 systemd-networkd[1377]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 13 00:01:28.429882 systemd-networkd[1377]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 13 00:01:28.429901 systemd-networkd[1377]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 13 00:01:28.432600 systemd-networkd[1377]: eth1: Link UP
Sep 13 00:01:28.432611 systemd-networkd[1377]: eth1: Gained carrier
Sep 13 00:01:28.432635 systemd-networkd[1377]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 13 00:01:28.442114 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1361)
Sep 13 00:01:28.469116 kernel: mousedev: PS/2 mouse device common for all mice
Sep 13 00:01:28.472611 systemd-networkd[1377]: eth0: DHCPv4 address 91.99.150.175/32, gateway 172.31.1.1 acquired from 172.31.1.1
Sep 13 00:01:28.478284 systemd-networkd[1377]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1
Sep 13 00:01:28.478750 systemd-timesyncd[1343]: Network configuration changed, trying to establish connection.
Sep 13 00:01:28.065224 systemd-resolved[1331]: Clock change detected. Flushing caches.
Sep 13 00:01:28.075192 systemd-journald[1127]: Time jumped backwards, rotating.
Sep 13 00:01:28.065434 systemd-timesyncd[1343]: Contacted time server 93.241.86.156:123 (0.flatcar.pool.ntp.org).
Sep 13 00:01:28.065646 systemd-timesyncd[1343]: Initial clock synchronization to Sat 2025-09-13 00:01:28.065159 UTC.
Sep 13 00:01:28.086926 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Sep 13 00:01:28.104927 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 13 00:01:28.107348 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped.
Sep 13 00:01:28.107575 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 13 00:01:28.110777 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 13 00:01:28.113366 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 13 00:01:28.119818 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 13 00:01:28.120709 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 13 00:01:28.120763 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 13 00:01:28.135381 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 13 00:01:28.135761 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 13 00:01:28.147776 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0
Sep 13 00:01:28.147890 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Sep 13 00:01:28.147902 kernel: [drm] features: -context_init
Sep 13 00:01:28.158096 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 13 00:01:28.168069 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 13 00:01:28.168250 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 13 00:01:28.170135 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 13 00:01:28.171895 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 13 00:01:28.176521 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 13 00:01:28.176640 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 13 00:01:28.179642 kernel: [drm] number of scanouts: 1
Sep 13 00:01:28.179709 kernel: [drm] number of cap sets: 0
Sep 13 00:01:28.195574 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0
Sep 13 00:01:28.207094 kernel: Console: switching to colour frame buffer device 160x50
Sep 13 00:01:28.214975 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 13 00:01:28.217709 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Sep 13 00:01:28.229929 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 13 00:01:28.231629 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 00:01:28.240945 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 13 00:01:28.319605 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 00:01:28.358149 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Sep 13 00:01:28.365871 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Sep 13 00:01:28.389624 lvm[1440]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 13 00:01:28.421371 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Sep 13 00:01:28.424041 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 13 00:01:28.424899 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 13 00:01:28.425638 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 13 00:01:28.426409 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 13 00:01:28.427646 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 13 00:01:28.428417 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 13 00:01:28.429248 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 13 00:01:28.430023 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 13 00:01:28.430069 systemd[1]: Reached target paths.target - Path Units.
Sep 13 00:01:28.430600 systemd[1]: Reached target timers.target - Timer Units.
Sep 13 00:01:28.434654 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 13 00:01:28.437294 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 13 00:01:28.443050 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 13 00:01:28.449761 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Sep 13 00:01:28.451249 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 13 00:01:28.457824 systemd[1]: Reached target sockets.target - Socket Units.
Sep 13 00:01:28.458390 systemd[1]: Reached target basic.target - Basic System.
Sep 13 00:01:28.459034 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 13 00:01:28.459062 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 13 00:01:28.470918 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 13 00:01:28.475792 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Sep 13 00:01:28.477330 lvm[1444]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 13 00:01:28.481043 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 13 00:01:28.484870 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 13 00:01:28.488398 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 13 00:01:28.490747 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 13 00:01:28.492304 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 13 00:01:28.503897 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep 13 00:01:28.509851 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent.
Sep 13 00:01:28.514784 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep 13 00:01:28.519254 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep 13 00:01:28.525302 jq[1448]: false
Sep 13 00:01:28.525222 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep 13 00:01:28.527779 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 13 00:01:28.528399 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 13 00:01:28.530417 systemd[1]: Starting update-engine.service - Update Engine...
Sep 13 00:01:28.541756 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep 13 00:01:28.544582 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Sep 13 00:01:28.557009 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 13 00:01:28.557213 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep 13 00:01:28.576652 jq[1459]: true
Sep 13 00:01:28.593158 dbus-daemon[1447]: [system] SELinux support is enabled
Sep 13 00:01:28.594243 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Sep 13 00:01:28.599075 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 13 00:01:28.599127 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Sep 13 00:01:28.601382 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 13 00:01:28.601414 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Sep 13 00:01:28.618588 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 13 00:01:28.618803 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Sep 13 00:01:28.636854 extend-filesystems[1449]: Found loop4
Sep 13 00:01:28.636854 extend-filesystems[1449]: Found loop5
Sep 13 00:01:28.636854 extend-filesystems[1449]: Found loop6
Sep 13 00:01:28.636854 extend-filesystems[1449]: Found loop7
Sep 13 00:01:28.636854 extend-filesystems[1449]: Found sda
Sep 13 00:01:28.647394 extend-filesystems[1449]: Found sda1
Sep 13 00:01:28.647394 extend-filesystems[1449]: Found sda2
Sep 13 00:01:28.647394 extend-filesystems[1449]: Found sda3
Sep 13 00:01:28.647394 extend-filesystems[1449]: Found usr
Sep 13 00:01:28.647394 extend-filesystems[1449]: Found sda4
Sep 13 00:01:28.647394 extend-filesystems[1449]: Found sda6
Sep 13 00:01:28.647394 extend-filesystems[1449]: Found sda7
Sep 13 00:01:28.647394 extend-filesystems[1449]: Found sda9
Sep 13 00:01:28.647394 extend-filesystems[1449]: Checking size of /dev/sda9
Sep 13 00:01:28.670633 tar[1463]: linux-arm64/helm
Sep 13 00:01:28.639938 systemd[1]: motdgen.service: Deactivated successfully.
Sep 13 00:01:28.640172 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Sep 13 00:01:28.650432 (ntainerd)[1478]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Sep 13 00:01:28.671339 jq[1475]: true
Sep 13 00:01:28.681317 systemd-logind[1457]: New seat seat0.
Sep 13 00:01:28.684787 systemd-logind[1457]: Watching system buttons on /dev/input/event0 (Power Button)
Sep 13 00:01:28.696538 extend-filesystems[1449]: Resized partition /dev/sda9
Sep 13 00:01:28.684806 systemd-logind[1457]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard)
Sep 13 00:01:28.698526 coreos-metadata[1446]: Sep 13 00:01:28.697 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1
Sep 13 00:01:28.696099 systemd[1]: Started systemd-logind.service - User Login Management.
Sep 13 00:01:28.704641 extend-filesystems[1490]: resize2fs 1.47.1 (20-May-2024)
Sep 13 00:01:28.725331 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks
Sep 13 00:01:28.725359 coreos-metadata[1446]: Sep 13 00:01:28.703 INFO Fetch successful
Sep 13 00:01:28.725359 coreos-metadata[1446]: Sep 13 00:01:28.703 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1
Sep 13 00:01:28.725359 coreos-metadata[1446]: Sep 13 00:01:28.707 INFO Fetch successful
Sep 13 00:01:28.752659 update_engine[1458]: I20250913 00:01:28.749825 1458 main.cc:92] Flatcar Update Engine starting
Sep 13 00:01:28.766703 systemd[1]: Started update-engine.service - Update Engine.
Sep 13 00:01:28.772092 update_engine[1458]: I20250913 00:01:28.771742 1458 update_check_scheduler.cc:74] Next update check in 3m2s
Sep 13 00:01:28.776965 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Sep 13 00:01:28.887074 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Sep 13 00:01:28.889291 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Sep 13 00:01:28.892935 bash[1517]: Updated "/home/core/.ssh/authorized_keys"
Sep 13 00:01:28.895049 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Sep 13 00:01:28.898601 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1365)
Sep 13 00:01:28.917924 systemd[1]: Starting sshkeys.service...
Sep 13 00:01:28.933970 containerd[1478]: time="2025-09-13T00:01:28.933739972Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Sep 13 00:01:28.947574 kernel: EXT4-fs (sda9): resized filesystem to 9393147
Sep 13 00:01:28.971185 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Sep 13 00:01:28.984473 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Sep 13 00:01:28.991835 extend-filesystems[1490]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Sep 13 00:01:28.991835 extend-filesystems[1490]: old_desc_blocks = 1, new_desc_blocks = 5
Sep 13 00:01:28.991835 extend-filesystems[1490]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long.
Sep 13 00:01:28.997058 extend-filesystems[1449]: Resized filesystem in /dev/sda9
Sep 13 00:01:28.997058 extend-filesystems[1449]: Found sr0
Sep 13 00:01:28.999990 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 13 00:01:29.000917 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Sep 13 00:01:29.019368 containerd[1478]: time="2025-09-13T00:01:29.019059532Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Sep 13 00:01:29.021963 containerd[1478]: time="2025-09-13T00:01:29.021914012Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.106-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Sep 13 00:01:29.022075 containerd[1478]: time="2025-09-13T00:01:29.022060732Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Sep 13 00:01:29.022129 containerd[1478]: time="2025-09-13T00:01:29.022116652Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Sep 13 00:01:29.022455 containerd[1478]: time="2025-09-13T00:01:29.022409732Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Sep 13 00:01:29.022589 containerd[1478]: time="2025-09-13T00:01:29.022569572Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Sep 13 00:01:29.023564 containerd[1478]: time="2025-09-13T00:01:29.022715372Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Sep 13 00:01:29.023564 containerd[1478]: time="2025-09-13T00:01:29.022736812Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Sep 13 00:01:29.023564 containerd[1478]: time="2025-09-13T00:01:29.022931412Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 13 00:01:29.023564 containerd[1478]: time="2025-09-13T00:01:29.022949412Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Sep 13 00:01:29.023564 containerd[1478]: time="2025-09-13T00:01:29.022963892Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Sep 13 00:01:29.023564 containerd[1478]: time="2025-09-13T00:01:29.022974772Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Sep 13 00:01:29.023564 containerd[1478]: time="2025-09-13T00:01:29.023060732Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Sep 13 00:01:29.023564 containerd[1478]: time="2025-09-13T00:01:29.023256372Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Sep 13 00:01:29.023564 containerd[1478]: time="2025-09-13T00:01:29.023355812Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 13 00:01:29.023564 containerd[1478]: time="2025-09-13T00:01:29.023369412Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Sep 13 00:01:29.023564 containerd[1478]: time="2025-09-13T00:01:29.023506452Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Sep 13 00:01:29.023866 containerd[1478]: time="2025-09-13T00:01:29.023845932Z" level=info msg="metadata content store policy set" policy=shared
Sep 13 00:01:29.035741 containerd[1478]: time="2025-09-13T00:01:29.035391572Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Sep 13 00:01:29.035741 containerd[1478]: time="2025-09-13T00:01:29.035489772Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Sep 13 00:01:29.035741 containerd[1478]: time="2025-09-13T00:01:29.035510332Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Sep 13 00:01:29.035741 containerd[1478]: time="2025-09-13T00:01:29.035530612Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Sep 13 00:01:29.035969 containerd[1478]: time="2025-09-13T00:01:29.035944412Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Sep 13 00:01:29.036242 containerd[1478]: time="2025-09-13T00:01:29.036216092Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Sep 13 00:01:29.036632 containerd[1478]: time="2025-09-13T00:01:29.036607172Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Sep 13 00:01:29.036873 containerd[1478]: time="2025-09-13T00:01:29.036851012Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Sep 13 00:01:29.036946 containerd[1478]: time="2025-09-13T00:01:29.036933092Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Sep 13 00:01:29.037024 containerd[1478]: time="2025-09-13T00:01:29.037008532Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Sep 13 00:01:29.037086 containerd[1478]: time="2025-09-13T00:01:29.037069692Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Sep 13 00:01:29.037144 containerd[1478]: time="2025-09-13T00:01:29.037131132Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Sep 13 00:01:29.037198 containerd[1478]: time="2025-09-13T00:01:29.037184532Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Sep 13 00:01:29.037255 containerd[1478]: time="2025-09-13T00:01:29.037241492Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Sep 13 00:01:29.037321 containerd[1478]: time="2025-09-13T00:01:29.037306332Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Sep 13 00:01:29.037497 containerd[1478]: time="2025-09-13T00:01:29.037430812Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Sep 13 00:01:29.037589 containerd[1478]: time="2025-09-13T00:01:29.037574972Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Sep 13 00:01:29.037642 containerd[1478]: time="2025-09-13T00:01:29.037630732Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Sep 13 00:01:29.037732 containerd[1478]: time="2025-09-13T00:01:29.037716972Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Sep 13 00:01:29.037820 containerd[1478]: time="2025-09-13T00:01:29.037804932Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Sep 13 00:01:29.037901 containerd[1478]: time="2025-09-13T00:01:29.037887852Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Sep 13 00:01:29.042596 containerd[1478]: time="2025-09-13T00:01:29.040750652Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Sep 13 00:01:29.042596 containerd[1478]: time="2025-09-13T00:01:29.040801932Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Sep 13 00:01:29.042596 containerd[1478]: time="2025-09-13T00:01:29.040822212Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Sep 13 00:01:29.042596 containerd[1478]: time="2025-09-13T00:01:29.040839572Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Sep 13 00:01:29.042596 containerd[1478]: time="2025-09-13T00:01:29.040882052Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Sep 13 00:01:29.042596 containerd[1478]: time="2025-09-13T00:01:29.040896932Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Sep 13 00:01:29.042596 containerd[1478]: time="2025-09-13T00:01:29.040918692Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Sep 13 00:01:29.042596 containerd[1478]: time="2025-09-13T00:01:29.040932212Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Sep 13 00:01:29.042596 containerd[1478]: time="2025-09-13T00:01:29.040947052Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Sep 13 00:01:29.042596 containerd[1478]: time="2025-09-13T00:01:29.040963852Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Sep 13 00:01:29.042596 containerd[1478]: time="2025-09-13T00:01:29.040982012Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Sep 13 00:01:29.042596 containerd[1478]: time="2025-09-13T00:01:29.041013612Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Sep 13 00:01:29.042596 containerd[1478]: time="2025-09-13T00:01:29.041025652Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Sep 13 00:01:29.042596 containerd[1478]: time="2025-09-13T00:01:29.041038252Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Sep 13 00:01:29.042957 containerd[1478]: time="2025-09-13T00:01:29.041187052Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Sep 13 00:01:29.042957 containerd[1478]: time="2025-09-13T00:01:29.041209892Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Sep 13 00:01:29.042957 containerd[1478]: time="2025-09-13T00:01:29.041221732Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Sep 13 00:01:29.042957 containerd[1478]: time="2025-09-13T00:01:29.041234132Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Sep 13 00:01:29.042957 containerd[1478]: time="2025-09-13T00:01:29.041259972Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Sep 13 00:01:29.042957 containerd[1478]: time="2025-09-13T00:01:29.041276052Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Sep 13 00:01:29.042957 containerd[1478]: time="2025-09-13T00:01:29.041286692Z" level=info msg="NRI interface is disabled by configuration."
Sep 13 00:01:29.042957 containerd[1478]: time="2025-09-13T00:01:29.041299452Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Sep 13 00:01:29.043096 containerd[1478]: time="2025-09-13T00:01:29.041694812Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Sep 13 00:01:29.043096 containerd[1478]: time="2025-09-13T00:01:29.041764492Z" level=info msg="Connect containerd service"
Sep 13 00:01:29.043096 containerd[1478]: time="2025-09-13T00:01:29.042009972Z" level=info msg="using legacy CRI server"
Sep 13 00:01:29.043096 containerd[1478]: time="2025-09-13T00:01:29.042019092Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Sep 13 00:01:29.043096 containerd[1478]: time="2025-09-13T00:01:29.042125772Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Sep 13 00:01:29.044284 containerd[1478]: time="2025-09-13T00:01:29.044238332Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 13 00:01:29.045055 containerd[1478]: time="2025-09-13T00:01:29.045026612Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Sep 13 00:01:29.045211 containerd[1478]: time="2025-09-13T00:01:29.045193812Z" level=info msg=serving... address=/run/containerd/containerd.sock
Sep 13 00:01:29.045345 containerd[1478]: time="2025-09-13T00:01:29.045266372Z" level=info msg="Start subscribing containerd event"
Sep 13 00:01:29.045393 containerd[1478]: time="2025-09-13T00:01:29.045376092Z" level=info msg="Start recovering state"
Sep 13 00:01:29.045535 containerd[1478]: time="2025-09-13T00:01:29.045517252Z" level=info msg="Start event monitor"
Sep 13 00:01:29.045580 containerd[1478]: time="2025-09-13T00:01:29.045537132Z" level=info msg="Start snapshots syncer"
Sep 13 00:01:29.045580 containerd[1478]: time="2025-09-13T00:01:29.045561492Z" level=info msg="Start cni network conf syncer for default"
Sep 13 00:01:29.045580 containerd[1478]: time="2025-09-13T00:01:29.045571732Z" level=info msg="Start streaming server"
Sep 13 00:01:29.045835 containerd[1478]: time="2025-09-13T00:01:29.045811212Z" level=info msg="containerd successfully booted in 0.117225s"
Sep 13 00:01:29.046074 systemd[1]: Started containerd.service - containerd container runtime.
Sep 13 00:01:29.053223 locksmithd[1504]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 13 00:01:29.069813 coreos-metadata[1525]: Sep 13 00:01:29.069 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1
Sep 13 00:01:29.071258 coreos-metadata[1525]: Sep 13 00:01:29.071 INFO Fetch successful
Sep 13 00:01:29.074779 unknown[1525]: wrote ssh authorized keys file for user: core
Sep 13 00:01:29.115243 update-ssh-keys[1537]: Updated "/home/core/.ssh/authorized_keys"
Sep 13 00:01:29.116670 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Sep 13 00:01:29.126640 systemd[1]: Finished sshkeys.service.
Sep 13 00:01:29.191392 sshd_keygen[1485]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 13 00:01:29.225614 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Sep 13 00:01:29.233698 systemd[1]: Starting issuegen.service - Generate /run/issue...
Sep 13 00:01:29.243072 systemd[1]: issuegen.service: Deactivated successfully.
Sep 13 00:01:29.243619 systemd[1]: Finished issuegen.service - Generate /run/issue.
Sep 13 00:01:29.253247 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Sep 13 00:01:29.266499 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Sep 13 00:01:29.274890 systemd[1]: Started getty@tty1.service - Getty on tty1.
Sep 13 00:01:29.282057 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Sep 13 00:01:29.283487 systemd[1]: Reached target getty.target - Login Prompts.
Sep 13 00:01:29.375633 tar[1463]: linux-arm64/LICENSE
Sep 13 00:01:29.375633 tar[1463]: linux-arm64/README.md
Sep 13 00:01:29.387473 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Sep 13 00:01:29.483823 systemd-networkd[1377]: eth1: Gained IPv6LL
Sep 13 00:01:29.487677 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Sep 13 00:01:29.490034 systemd[1]: Reached target network-online.target - Network is Online.
Sep 13 00:01:29.503310 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 13 00:01:29.507265 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Sep 13 00:01:29.543690 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Sep 13 00:01:29.803800 systemd-networkd[1377]: eth0: Gained IPv6LL
Sep 13 00:01:30.349516 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 13 00:01:30.352395 systemd[1]: Reached target multi-user.target - Multi-User System.
Sep 13 00:01:30.357915 systemd[1]: Startup finished in 910ms (kernel) + 5.894s (initrd) + 5.237s (userspace) = 12.042s.
Sep 13 00:01:30.359059 (kubelet)[1576]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 13 00:01:30.924200 kubelet[1576]: E0913 00:01:30.924097 1576 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 13 00:01:30.926713 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 13 00:01:30.926879 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 13 00:01:41.179459 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Sep 13 00:01:41.188921 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 13 00:01:41.308022 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 13 00:01:41.313487 (kubelet)[1595]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 13 00:01:41.360252 kubelet[1595]: E0913 00:01:41.360166 1595 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 13 00:01:41.365433 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 13 00:01:41.365836 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 13 00:01:51.581906 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Sep 13 00:01:51.589910 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 13 00:01:51.725143 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 13 00:01:51.733962 (kubelet)[1610]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 13 00:01:51.777944 kubelet[1610]: E0913 00:01:51.777890 1610 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 13 00:01:51.780443 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 13 00:01:51.780662 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 13 00:02:01.832005 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Sep 13 00:02:01.838794 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 13 00:02:01.962916 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 13 00:02:01.966502 (kubelet)[1626]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 13 00:02:02.015895 kubelet[1626]: E0913 00:02:02.015820 1626 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 13 00:02:02.019362 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 13 00:02:02.019743 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 13 00:02:06.000140 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Sep 13 00:02:06.005951 systemd[1]: Started sshd@0-91.99.150.175:22-147.75.109.163:56758.service - OpenSSH per-connection server daemon (147.75.109.163:56758).
Sep 13 00:02:06.997195 sshd[1634]: Accepted publickey for core from 147.75.109.163 port 56758 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM
Sep 13 00:02:06.999504 sshd[1634]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:02:07.010131 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Sep 13 00:02:07.015912 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Sep 13 00:02:07.018778 systemd-logind[1457]: New session 1 of user core.
Sep 13 00:02:07.032858 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Sep 13 00:02:07.045015 systemd[1]: Starting user@500.service - User Manager for UID 500...
Sep 13 00:02:07.049861 (systemd)[1638]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:02:07.163177 systemd[1638]: Queued start job for default target default.target.
Sep 13 00:02:07.171488 systemd[1638]: Created slice app.slice - User Application Slice.
Sep 13 00:02:07.171630 systemd[1638]: Reached target paths.target - Paths.
Sep 13 00:02:07.171659 systemd[1638]: Reached target timers.target - Timers.
Sep 13 00:02:07.174315 systemd[1638]: Starting dbus.socket - D-Bus User Message Bus Socket...
Sep 13 00:02:07.189925 systemd[1638]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Sep 13 00:02:07.190055 systemd[1638]: Reached target sockets.target - Sockets.
Sep 13 00:02:07.190070 systemd[1638]: Reached target basic.target - Basic System.
Sep 13 00:02:07.190116 systemd[1638]: Reached target default.target - Main User Target.
Sep 13 00:02:07.190150 systemd[1638]: Startup finished in 132ms.
Sep 13 00:02:07.190271 systemd[1]: Started user@500.service - User Manager for UID 500.
Sep 13 00:02:07.197793 systemd[1]: Started session-1.scope - Session 1 of User core.
Sep 13 00:02:07.897122 systemd[1]: Started sshd@1-91.99.150.175:22-147.75.109.163:56774.service - OpenSSH per-connection server daemon (147.75.109.163:56774).
Sep 13 00:02:08.873586 sshd[1649]: Accepted publickey for core from 147.75.109.163 port 56774 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM
Sep 13 00:02:08.876115 sshd[1649]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:02:08.883127 systemd-logind[1457]: New session 2 of user core.
Sep 13 00:02:08.887852 systemd[1]: Started session-2.scope - Session 2 of User core.
Sep 13 00:02:09.550585 sshd[1649]: pam_unix(sshd:session): session closed for user core
Sep 13 00:02:09.555146 systemd-logind[1457]: Session 2 logged out. Waiting for processes to exit.
Sep 13 00:02:09.556023 systemd[1]: sshd@1-91.99.150.175:22-147.75.109.163:56774.service: Deactivated successfully.
Sep 13 00:02:09.559351 systemd[1]: session-2.scope: Deactivated successfully.
Sep 13 00:02:09.560484 systemd-logind[1457]: Removed session 2.
Sep 13 00:02:09.740125 systemd[1]: Started sshd@2-91.99.150.175:22-147.75.109.163:56786.service - OpenSSH per-connection server daemon (147.75.109.163:56786).
Sep 13 00:02:10.794907 sshd[1656]: Accepted publickey for core from 147.75.109.163 port 56786 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM
Sep 13 00:02:10.797119 sshd[1656]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:02:10.805904 systemd-logind[1457]: New session 3 of user core.
Sep 13 00:02:10.809844 systemd[1]: Started session-3.scope - Session 3 of User core.
Sep 13 00:02:11.521202 sshd[1656]: pam_unix(sshd:session): session closed for user core
Sep 13 00:02:11.525993 systemd-logind[1457]: Session 3 logged out. Waiting for processes to exit.
Sep 13 00:02:11.527477 systemd[1]: sshd@2-91.99.150.175:22-147.75.109.163:56786.service: Deactivated successfully.
Sep 13 00:02:11.529712 systemd[1]: session-3.scope: Deactivated successfully.
Sep 13 00:02:11.531320 systemd-logind[1457]: Removed session 3.
Sep 13 00:02:11.696195 systemd[1]: Started sshd@3-91.99.150.175:22-147.75.109.163:50830.service - OpenSSH per-connection server daemon (147.75.109.163:50830).
Sep 13 00:02:12.081574 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Sep 13 00:02:12.090831 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 13 00:02:12.205698 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 13 00:02:12.216957 (kubelet)[1673]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 13 00:02:12.263000 kubelet[1673]: E0913 00:02:12.262950 1673 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 13 00:02:12.265030 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 13 00:02:12.265164 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 13 00:02:12.668408 sshd[1663]: Accepted publickey for core from 147.75.109.163 port 50830 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM
Sep 13 00:02:12.670399 sshd[1663]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:02:12.676421 systemd-logind[1457]: New session 4 of user core.
Sep 13 00:02:12.689255 systemd[1]: Started session-4.scope - Session 4 of User core.
Sep 13 00:02:13.344290 sshd[1663]: pam_unix(sshd:session): session closed for user core
Sep 13 00:02:13.349198 systemd[1]: sshd@3-91.99.150.175:22-147.75.109.163:50830.service: Deactivated successfully.
Sep 13 00:02:13.351268 systemd[1]: session-4.scope: Deactivated successfully.
Sep 13 00:02:13.352279 systemd-logind[1457]: Session 4 logged out. Waiting for processes to exit.
Sep 13 00:02:13.355245 systemd-logind[1457]: Removed session 4.
Sep 13 00:02:13.517903 systemd[1]: Started sshd@4-91.99.150.175:22-147.75.109.163:50844.service - OpenSSH per-connection server daemon (147.75.109.163:50844).
Sep 13 00:02:14.317625 update_engine[1458]: I20250913 00:02:14.316956 1458 update_attempter.cc:509] Updating boot flags...
Sep 13 00:02:14.362582 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1696)
Sep 13 00:02:14.423831 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1699)
Sep 13 00:02:14.507001 sshd[1685]: Accepted publickey for core from 147.75.109.163 port 50844 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM
Sep 13 00:02:14.510341 sshd[1685]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:02:14.517232 systemd-logind[1457]: New session 5 of user core.
Sep 13 00:02:14.525809 systemd[1]: Started session-5.scope - Session 5 of User core.
Sep 13 00:02:15.035254 sudo[1706]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Sep 13 00:02:15.035603 sudo[1706]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 13 00:02:15.052948 sudo[1706]: pam_unix(sudo:session): session closed for user root
Sep 13 00:02:15.212875 sshd[1685]: pam_unix(sshd:session): session closed for user core
Sep 13 00:02:15.218801 systemd[1]: sshd@4-91.99.150.175:22-147.75.109.163:50844.service: Deactivated successfully.
Sep 13 00:02:15.221745 systemd[1]: session-5.scope: Deactivated successfully.
Sep 13 00:02:15.223053 systemd-logind[1457]: Session 5 logged out. Waiting for processes to exit.
Sep 13 00:02:15.224533 systemd-logind[1457]: Removed session 5.
Sep 13 00:02:15.387929 systemd[1]: Started sshd@5-91.99.150.175:22-147.75.109.163:50846.service - OpenSSH per-connection server daemon (147.75.109.163:50846). Sep 13 00:02:16.374019 sshd[1711]: Accepted publickey for core from 147.75.109.163 port 50846 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM Sep 13 00:02:16.376520 sshd[1711]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:02:16.383103 systemd-logind[1457]: New session 6 of user core. Sep 13 00:02:16.388880 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 13 00:02:16.894531 sudo[1715]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 13 00:02:16.894967 sudo[1715]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 00:02:16.899436 sudo[1715]: pam_unix(sudo:session): session closed for user root Sep 13 00:02:16.906160 sudo[1714]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Sep 13 00:02:16.906494 sudo[1714]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 00:02:16.928042 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Sep 13 00:02:16.930125 auditctl[1718]: No rules Sep 13 00:02:16.931030 systemd[1]: audit-rules.service: Deactivated successfully. Sep 13 00:02:16.932623 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Sep 13 00:02:16.940382 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 13 00:02:16.968807 augenrules[1736]: No rules Sep 13 00:02:16.970683 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. 
Sep 13 00:02:16.973994 sudo[1714]: pam_unix(sudo:session): session closed for user root Sep 13 00:02:17.132894 sshd[1711]: pam_unix(sshd:session): session closed for user core Sep 13 00:02:17.138280 systemd[1]: sshd@5-91.99.150.175:22-147.75.109.163:50846.service: Deactivated successfully. Sep 13 00:02:17.141276 systemd[1]: session-6.scope: Deactivated successfully. Sep 13 00:02:17.143389 systemd-logind[1457]: Session 6 logged out. Waiting for processes to exit. Sep 13 00:02:17.145110 systemd-logind[1457]: Removed session 6. Sep 13 00:02:17.312354 systemd[1]: Started sshd@6-91.99.150.175:22-147.75.109.163:50848.service - OpenSSH per-connection server daemon (147.75.109.163:50848). Sep 13 00:02:18.298803 sshd[1744]: Accepted publickey for core from 147.75.109.163 port 50848 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM Sep 13 00:02:18.301116 sshd[1744]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:02:18.306384 systemd-logind[1457]: New session 7 of user core. Sep 13 00:02:18.318946 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 13 00:02:18.824011 sudo[1747]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 13 00:02:18.824275 sudo[1747]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 00:02:19.127128 (dockerd)[1763]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 13 00:02:19.127718 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 13 00:02:19.375668 dockerd[1763]: time="2025-09-13T00:02:19.375223136Z" level=info msg="Starting up" Sep 13 00:02:19.476978 dockerd[1763]: time="2025-09-13T00:02:19.476022289Z" level=info msg="Loading containers: start." 
Sep 13 00:02:19.590612 kernel: Initializing XFRM netlink socket Sep 13 00:02:19.681499 systemd-networkd[1377]: docker0: Link UP Sep 13 00:02:19.706920 dockerd[1763]: time="2025-09-13T00:02:19.706847019Z" level=info msg="Loading containers: done." Sep 13 00:02:19.724073 dockerd[1763]: time="2025-09-13T00:02:19.724009695Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 13 00:02:19.724232 dockerd[1763]: time="2025-09-13T00:02:19.724140136Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Sep 13 00:02:19.724334 dockerd[1763]: time="2025-09-13T00:02:19.724281257Z" level=info msg="Daemon has completed initialization" Sep 13 00:02:19.765772 dockerd[1763]: time="2025-09-13T00:02:19.764858145Z" level=info msg="API listen on /run/docker.sock" Sep 13 00:02:19.765068 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 13 00:02:20.447739 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3185471997-merged.mount: Deactivated successfully. Sep 13 00:02:20.814958 containerd[1478]: time="2025-09-13T00:02:20.814845313Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\"" Sep 13 00:02:21.471366 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1950802666.mount: Deactivated successfully. Sep 13 00:02:22.331449 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Sep 13 00:02:22.338814 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:02:22.451780 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 13 00:02:22.464187 (kubelet)[1964]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 13 00:02:22.513242 kubelet[1964]: E0913 00:02:22.513126 1964 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:02:22.516421 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:02:22.516628 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 00:02:23.539572 containerd[1478]: time="2025-09-13T00:02:23.539486258Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:02:23.541075 containerd[1478]: time="2025-09-13T00:02:23.540710786Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.13: active requests=0, bytes read=25687423" Sep 13 00:02:23.542838 containerd[1478]: time="2025-09-13T00:02:23.541984035Z" level=info msg="ImageCreate event name:\"sha256:0b1c07d8fd4a3526d5c44502e682df3627a3b01c1e608e5e24c3519c8fb337b6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:02:23.548194 containerd[1478]: time="2025-09-13T00:02:23.546739709Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:02:23.548493 containerd[1478]: time="2025-09-13T00:02:23.548203999Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.13\" with image id \"sha256:0b1c07d8fd4a3526d5c44502e682df3627a3b01c1e608e5e24c3519c8fb337b6\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.13\", 
repo digest \"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\", size \"25683924\" in 2.733304286s" Sep 13 00:02:23.548493 containerd[1478]: time="2025-09-13T00:02:23.548488321Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\" returns image reference \"sha256:0b1c07d8fd4a3526d5c44502e682df3627a3b01c1e608e5e24c3519c8fb337b6\"" Sep 13 00:02:23.551445 containerd[1478]: time="2025-09-13T00:02:23.551375981Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\"" Sep 13 00:02:25.164638 containerd[1478]: time="2025-09-13T00:02:25.163491564Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:02:25.166578 containerd[1478]: time="2025-09-13T00:02:25.166265061Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.13: active requests=0, bytes read=22459787" Sep 13 00:02:25.168178 containerd[1478]: time="2025-09-13T00:02:25.168093832Z" level=info msg="ImageCreate event name:\"sha256:c359cb88f3d2147f2cb4c5ada4fbdeadc4b1c009d66c8f33f3856efaf04ee6ef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:02:25.174445 containerd[1478]: time="2025-09-13T00:02:25.174384031Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:02:25.177938 containerd[1478]: time="2025-09-13T00:02:25.177285968Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.13\" with image id \"sha256:c359cb88f3d2147f2cb4c5ada4fbdeadc4b1c009d66c8f33f3856efaf04ee6ef\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.13\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\", size 
\"24028542\" in 1.625573145s" Sep 13 00:02:25.177938 containerd[1478]: time="2025-09-13T00:02:25.177371969Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\" returns image reference \"sha256:c359cb88f3d2147f2cb4c5ada4fbdeadc4b1c009d66c8f33f3856efaf04ee6ef\"" Sep 13 00:02:25.178222 containerd[1478]: time="2025-09-13T00:02:25.178176774Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\"" Sep 13 00:02:26.984604 containerd[1478]: time="2025-09-13T00:02:26.984211867Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:02:26.986728 containerd[1478]: time="2025-09-13T00:02:26.986055437Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.13: active requests=0, bytes read=17127526" Sep 13 00:02:26.988575 containerd[1478]: time="2025-09-13T00:02:26.987715607Z" level=info msg="ImageCreate event name:\"sha256:5e3cbe2ba7db787c6aebfcf4484156dd4ebd7ede811ef72e8929593e59a5fa27\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:02:26.993418 containerd[1478]: time="2025-09-13T00:02:26.992394514Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:02:26.994181 containerd[1478]: time="2025-09-13T00:02:26.994133284Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.13\" with image id \"sha256:5e3cbe2ba7db787c6aebfcf4484156dd4ebd7ede811ef72e8929593e59a5fa27\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.13\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\", size \"18696299\" in 1.81590403s" Sep 13 00:02:26.994181 containerd[1478]: time="2025-09-13T00:02:26.994174524Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\" 
returns image reference \"sha256:5e3cbe2ba7db787c6aebfcf4484156dd4ebd7ede811ef72e8929593e59a5fa27\"" Sep 13 00:02:26.995677 containerd[1478]: time="2025-09-13T00:02:26.995641373Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\"" Sep 13 00:02:28.030533 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3353953629.mount: Deactivated successfully. Sep 13 00:02:28.370326 containerd[1478]: time="2025-09-13T00:02:28.368621990Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:02:28.371159 containerd[1478]: time="2025-09-13T00:02:28.371118403Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.13: active requests=0, bytes read=26954933" Sep 13 00:02:28.372188 containerd[1478]: time="2025-09-13T00:02:28.372141448Z" level=info msg="ImageCreate event name:\"sha256:c15699f0b7002450249485b10f20211982dfd2bec4d61c86c35acebc659e794e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:02:28.376092 containerd[1478]: time="2025-09-13T00:02:28.375785746Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:02:28.377094 containerd[1478]: time="2025-09-13T00:02:28.377028193Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.13\" with image id \"sha256:c15699f0b7002450249485b10f20211982dfd2bec4d61c86c35acebc659e794e\", repo tag \"registry.k8s.io/kube-proxy:v1.31.13\", repo digest \"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\", size \"26953926\" in 1.38122766s" Sep 13 00:02:28.377094 containerd[1478]: time="2025-09-13T00:02:28.377085113Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\" returns image reference 
\"sha256:c15699f0b7002450249485b10f20211982dfd2bec4d61c86c35acebc659e794e\"" Sep 13 00:02:28.379464 containerd[1478]: time="2025-09-13T00:02:28.379123843Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 13 00:02:28.978783 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3217555092.mount: Deactivated successfully. Sep 13 00:02:29.669572 containerd[1478]: time="2025-09-13T00:02:29.669490249Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:02:29.671807 containerd[1478]: time="2025-09-13T00:02:29.671731179Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951714" Sep 13 00:02:29.672752 containerd[1478]: time="2025-09-13T00:02:29.672678824Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:02:29.677282 containerd[1478]: time="2025-09-13T00:02:29.677212686Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:02:29.679303 containerd[1478]: time="2025-09-13T00:02:29.678738093Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.29955805s" Sep 13 00:02:29.679303 containerd[1478]: time="2025-09-13T00:02:29.678784933Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference 
\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Sep 13 00:02:29.679657 containerd[1478]: time="2025-09-13T00:02:29.679630737Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 13 00:02:30.232184 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2700713530.mount: Deactivated successfully. Sep 13 00:02:30.239944 containerd[1478]: time="2025-09-13T00:02:30.238960403Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:02:30.240156 containerd[1478]: time="2025-09-13T00:02:30.240130168Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268723" Sep 13 00:02:30.241327 containerd[1478]: time="2025-09-13T00:02:30.241259613Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:02:30.244181 containerd[1478]: time="2025-09-13T00:02:30.244131746Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:02:30.245230 containerd[1478]: time="2025-09-13T00:02:30.245194671Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 565.457174ms" Sep 13 00:02:30.245375 containerd[1478]: time="2025-09-13T00:02:30.245355351Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Sep 13 00:02:30.246074 containerd[1478]: time="2025-09-13T00:02:30.246004394Z" 
level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Sep 13 00:02:30.838812 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3947350119.mount: Deactivated successfully. Sep 13 00:02:32.581927 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Sep 13 00:02:32.588967 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:02:32.712754 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:02:32.721868 (kubelet)[2108]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 13 00:02:32.773161 kubelet[2108]: E0913 00:02:32.772655 2108 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:02:32.777942 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:02:32.778084 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Sep 13 00:02:33.392675 containerd[1478]: time="2025-09-13T00:02:33.392358285Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:02:33.395102 containerd[1478]: time="2025-09-13T00:02:33.395044934Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66537235" Sep 13 00:02:33.410509 containerd[1478]: time="2025-09-13T00:02:33.410407991Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:02:33.419329 containerd[1478]: time="2025-09-13T00:02:33.419190943Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:02:33.421392 containerd[1478]: time="2025-09-13T00:02:33.421179230Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 3.174968515s" Sep 13 00:02:33.421392 containerd[1478]: time="2025-09-13T00:02:33.421234231Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Sep 13 00:02:38.311670 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:02:38.319091 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:02:38.360156 systemd[1]: Reloading requested from client PID 2143 ('systemctl') (unit session-7.scope)... Sep 13 00:02:38.360331 systemd[1]: Reloading... 
Sep 13 00:02:38.485582 zram_generator::config[2186]: No configuration found. Sep 13 00:02:38.588486 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:02:38.662722 systemd[1]: Reloading finished in 301 ms. Sep 13 00:02:38.729900 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 13 00:02:38.730366 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 13 00:02:38.731136 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:02:38.739582 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:02:38.916643 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:02:38.928302 (kubelet)[2230]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 13 00:02:38.971578 kubelet[2230]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 00:02:38.971578 kubelet[2230]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 13 00:02:38.971578 kubelet[2230]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 13 00:02:38.971578 kubelet[2230]: I0913 00:02:38.970425 2230 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 13 00:02:40.006189 kubelet[2230]: I0913 00:02:40.006139 2230 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 13 00:02:40.008024 kubelet[2230]: I0913 00:02:40.006622 2230 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 13 00:02:40.008024 kubelet[2230]: I0913 00:02:40.007029 2230 server.go:934] "Client rotation is on, will bootstrap in background" Sep 13 00:02:40.034808 kubelet[2230]: E0913 00:02:40.034754 2230 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://91.99.150.175:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 91.99.150.175:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:02:40.036192 kubelet[2230]: I0913 00:02:40.036159 2230 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 13 00:02:40.045152 kubelet[2230]: E0913 00:02:40.045100 2230 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 13 00:02:40.045152 kubelet[2230]: I0913 00:02:40.045145 2230 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 13 00:02:40.053723 kubelet[2230]: I0913 00:02:40.053669 2230 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 13 00:02:40.054089 kubelet[2230]: I0913 00:02:40.054065 2230 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 13 00:02:40.054434 kubelet[2230]: I0913 00:02:40.054378 2230 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 13 00:02:40.054818 kubelet[2230]: I0913 00:02:40.054439 2230 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-5-n-dc9d7711ed","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","Topolog
yManagerPolicyOptions":null,"CgroupVersion":2} Sep 13 00:02:40.054967 kubelet[2230]: I0913 00:02:40.054891 2230 topology_manager.go:138] "Creating topology manager with none policy" Sep 13 00:02:40.054967 kubelet[2230]: I0913 00:02:40.054939 2230 container_manager_linux.go:300] "Creating device plugin manager" Sep 13 00:02:40.055309 kubelet[2230]: I0913 00:02:40.055280 2230 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:02:40.063518 kubelet[2230]: I0913 00:02:40.063136 2230 kubelet.go:408] "Attempting to sync node with API server" Sep 13 00:02:40.063518 kubelet[2230]: I0913 00:02:40.063196 2230 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 13 00:02:40.063518 kubelet[2230]: I0913 00:02:40.063227 2230 kubelet.go:314] "Adding apiserver pod source" Sep 13 00:02:40.063518 kubelet[2230]: I0913 00:02:40.063262 2230 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 13 00:02:40.069251 kubelet[2230]: W0913 00:02:40.069037 2230 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://91.99.150.175:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-5-n-dc9d7711ed&limit=500&resourceVersion=0": dial tcp 91.99.150.175:6443: connect: connection refused Sep 13 00:02:40.069251 kubelet[2230]: E0913 00:02:40.069120 2230 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://91.99.150.175:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-5-n-dc9d7711ed&limit=500&resourceVersion=0\": dial tcp 91.99.150.175:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:02:40.070591 kubelet[2230]: W0913 00:02:40.069778 2230 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://91.99.150.175:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 91.99.150.175:6443: 
connect: connection refused Sep 13 00:02:40.070591 kubelet[2230]: E0913 00:02:40.069846 2230 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://91.99.150.175:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 91.99.150.175:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:02:40.070591 kubelet[2230]: I0913 00:02:40.070100 2230 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 13 00:02:40.071149 kubelet[2230]: I0913 00:02:40.071117 2230 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 13 00:02:40.071296 kubelet[2230]: W0913 00:02:40.071275 2230 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 13 00:02:40.074379 kubelet[2230]: I0913 00:02:40.074296 2230 server.go:1274] "Started kubelet" Sep 13 00:02:40.081209 kubelet[2230]: I0913 00:02:40.081160 2230 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 13 00:02:40.083422 kubelet[2230]: I0913 00:02:40.083386 2230 server.go:449] "Adding debug handlers to kubelet server" Sep 13 00:02:40.084952 kubelet[2230]: E0913 00:02:40.083675 2230 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://91.99.150.175:6443/api/v1/namespaces/default/events\": dial tcp 91.99.150.175:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-5-n-dc9d7711ed.1864aea0c3338f08 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-5-n-dc9d7711ed,UID:ci-4081-3-5-n-dc9d7711ed,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-5-n-dc9d7711ed,},FirstTimestamp:2025-09-13 00:02:40.07427252 +0000 UTC m=+1.141071164,LastTimestamp:2025-09-13 00:02:40.07427252 +0000 UTC m=+1.141071164,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-5-n-dc9d7711ed,}" Sep 13 00:02:40.088223 kubelet[2230]: I0913 00:02:40.086682 2230 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 13 00:02:40.088223 kubelet[2230]: I0913 00:02:40.081373 2230 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 13 00:02:40.088223 kubelet[2230]: I0913 00:02:40.087217 2230 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 13 00:02:40.088223 kubelet[2230]: I0913 00:02:40.087382 2230 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 13 00:02:40.088492 kubelet[2230]: I0913 00:02:40.088463 2230 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 13 00:02:40.088867 kubelet[2230]: E0913 00:02:40.088842 2230 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-5-n-dc9d7711ed\" not found" Sep 13 00:02:40.090852 kubelet[2230]: I0913 00:02:40.089380 2230 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 13 00:02:40.090967 kubelet[2230]: I0913 00:02:40.089436 2230 reconciler.go:26] "Reconciler: start to sync state" Sep 13 00:02:40.091029 kubelet[2230]: W0913 00:02:40.089858 2230 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://91.99.150.175:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 91.99.150.175:6443: connect: connection refused Sep 13 00:02:40.091115 kubelet[2230]: E0913 
00:02:40.091095 2230 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://91.99.150.175:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 91.99.150.175:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:02:40.091164 kubelet[2230]: E0913 00:02:40.089921 2230 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://91.99.150.175:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-5-n-dc9d7711ed?timeout=10s\": dial tcp 91.99.150.175:6443: connect: connection refused" interval="200ms" Sep 13 00:02:40.091916 kubelet[2230]: I0913 00:02:40.091896 2230 factory.go:221] Registration of the systemd container factory successfully Sep 13 00:02:40.093016 kubelet[2230]: E0913 00:02:40.092943 2230 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 13 00:02:40.093405 kubelet[2230]: I0913 00:02:40.093354 2230 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 13 00:02:40.096893 kubelet[2230]: I0913 00:02:40.096855 2230 factory.go:221] Registration of the containerd container factory successfully Sep 13 00:02:40.114268 kubelet[2230]: I0913 00:02:40.114220 2230 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 13 00:02:40.114268 kubelet[2230]: I0913 00:02:40.114274 2230 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 13 00:02:40.114400 kubelet[2230]: I0913 00:02:40.114295 2230 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:02:40.114610 kubelet[2230]: I0913 00:02:40.114586 2230 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Sep 13 00:02:40.116966 kubelet[2230]: I0913 00:02:40.116942 2230 policy_none.go:49] "None policy: Start" Sep 13 00:02:40.117516 kubelet[2230]: I0913 00:02:40.117459 2230 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 13 00:02:40.117516 kubelet[2230]: I0913 00:02:40.117483 2230 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 13 00:02:40.117661 kubelet[2230]: I0913 00:02:40.117650 2230 kubelet.go:2321] "Starting kubelet main sync loop" Sep 13 00:02:40.117759 kubelet[2230]: E0913 00:02:40.117741 2230 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 13 00:02:40.119393 kubelet[2230]: W0913 00:02:40.119353 2230 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://91.99.150.175:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 91.99.150.175:6443: connect: connection refused Sep 13 00:02:40.119471 kubelet[2230]: E0913 00:02:40.119395 2230 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://91.99.150.175:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 91.99.150.175:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:02:40.119471 kubelet[2230]: I0913 00:02:40.119463 2230 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 13 00:02:40.119522 kubelet[2230]: I0913 00:02:40.119480 2230 state_mem.go:35] "Initializing new in-memory state store" Sep 13 00:02:40.127377 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 13 00:02:40.137675 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Sep 13 00:02:40.141500 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 13 00:02:40.156589 kubelet[2230]: I0913 00:02:40.156452 2230 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 13 00:02:40.157419 kubelet[2230]: I0913 00:02:40.157014 2230 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 13 00:02:40.157419 kubelet[2230]: I0913 00:02:40.157047 2230 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 13 00:02:40.158718 kubelet[2230]: I0913 00:02:40.158691 2230 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 13 00:02:40.161455 kubelet[2230]: E0913 00:02:40.161433 2230 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-5-n-dc9d7711ed\" not found" Sep 13 00:02:40.231702 systemd[1]: Created slice kubepods-burstable-pod523c0dc250fd7264727c483a4de83a13.slice - libcontainer container kubepods-burstable-pod523c0dc250fd7264727c483a4de83a13.slice. Sep 13 00:02:40.248585 systemd[1]: Created slice kubepods-burstable-pod8c3e46d3744169c31104c1ba343487b9.slice - libcontainer container kubepods-burstable-pod8c3e46d3744169c31104c1ba343487b9.slice. Sep 13 00:02:40.254648 systemd[1]: Created slice kubepods-burstable-podc1ec4ef128dc8c39fce2c11f464cd2d1.slice - libcontainer container kubepods-burstable-podc1ec4ef128dc8c39fce2c11f464cd2d1.slice. 
Sep 13 00:02:40.259666 kubelet[2230]: I0913 00:02:40.259142 2230 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-5-n-dc9d7711ed" Sep 13 00:02:40.260254 kubelet[2230]: E0913 00:02:40.260207 2230 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://91.99.150.175:6443/api/v1/nodes\": dial tcp 91.99.150.175:6443: connect: connection refused" node="ci-4081-3-5-n-dc9d7711ed" Sep 13 00:02:40.292588 kubelet[2230]: E0913 00:02:40.292470 2230 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://91.99.150.175:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-5-n-dc9d7711ed?timeout=10s\": dial tcp 91.99.150.175:6443: connect: connection refused" interval="400ms" Sep 13 00:02:40.293418 kubelet[2230]: I0913 00:02:40.292951 2230 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/523c0dc250fd7264727c483a4de83a13-k8s-certs\") pod \"kube-apiserver-ci-4081-3-5-n-dc9d7711ed\" (UID: \"523c0dc250fd7264727c483a4de83a13\") " pod="kube-system/kube-apiserver-ci-4081-3-5-n-dc9d7711ed" Sep 13 00:02:40.293418 kubelet[2230]: I0913 00:02:40.293052 2230 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8c3e46d3744169c31104c1ba343487b9-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-5-n-dc9d7711ed\" (UID: \"8c3e46d3744169c31104c1ba343487b9\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-dc9d7711ed" Sep 13 00:02:40.293418 kubelet[2230]: I0913 00:02:40.293105 2230 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8c3e46d3744169c31104c1ba343487b9-ca-certs\") pod \"kube-controller-manager-ci-4081-3-5-n-dc9d7711ed\" (UID: \"8c3e46d3744169c31104c1ba343487b9\") " 
pod="kube-system/kube-controller-manager-ci-4081-3-5-n-dc9d7711ed" Sep 13 00:02:40.293418 kubelet[2230]: I0913 00:02:40.293145 2230 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8c3e46d3744169c31104c1ba343487b9-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-5-n-dc9d7711ed\" (UID: \"8c3e46d3744169c31104c1ba343487b9\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-dc9d7711ed" Sep 13 00:02:40.293418 kubelet[2230]: I0913 00:02:40.293184 2230 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8c3e46d3744169c31104c1ba343487b9-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-5-n-dc9d7711ed\" (UID: \"8c3e46d3744169c31104c1ba343487b9\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-dc9d7711ed" Sep 13 00:02:40.293741 kubelet[2230]: I0913 00:02:40.293224 2230 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8c3e46d3744169c31104c1ba343487b9-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-5-n-dc9d7711ed\" (UID: \"8c3e46d3744169c31104c1ba343487b9\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-dc9d7711ed" Sep 13 00:02:40.293741 kubelet[2230]: I0913 00:02:40.293310 2230 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c1ec4ef128dc8c39fce2c11f464cd2d1-kubeconfig\") pod \"kube-scheduler-ci-4081-3-5-n-dc9d7711ed\" (UID: \"c1ec4ef128dc8c39fce2c11f464cd2d1\") " pod="kube-system/kube-scheduler-ci-4081-3-5-n-dc9d7711ed" Sep 13 00:02:40.293741 kubelet[2230]: I0913 00:02:40.293347 2230 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/523c0dc250fd7264727c483a4de83a13-ca-certs\") pod \"kube-apiserver-ci-4081-3-5-n-dc9d7711ed\" (UID: \"523c0dc250fd7264727c483a4de83a13\") " pod="kube-system/kube-apiserver-ci-4081-3-5-n-dc9d7711ed" Sep 13 00:02:40.293741 kubelet[2230]: I0913 00:02:40.293384 2230 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/523c0dc250fd7264727c483a4de83a13-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-5-n-dc9d7711ed\" (UID: \"523c0dc250fd7264727c483a4de83a13\") " pod="kube-system/kube-apiserver-ci-4081-3-5-n-dc9d7711ed" Sep 13 00:02:40.463464 kubelet[2230]: I0913 00:02:40.463410 2230 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-5-n-dc9d7711ed" Sep 13 00:02:40.464037 kubelet[2230]: E0913 00:02:40.463963 2230 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://91.99.150.175:6443/api/v1/nodes\": dial tcp 91.99.150.175:6443: connect: connection refused" node="ci-4081-3-5-n-dc9d7711ed" Sep 13 00:02:40.543572 containerd[1478]: time="2025-09-13T00:02:40.543321655Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-5-n-dc9d7711ed,Uid:523c0dc250fd7264727c483a4de83a13,Namespace:kube-system,Attempt:0,}" Sep 13 00:02:40.553732 containerd[1478]: time="2025-09-13T00:02:40.553622159Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-5-n-dc9d7711ed,Uid:8c3e46d3744169c31104c1ba343487b9,Namespace:kube-system,Attempt:0,}" Sep 13 00:02:40.558493 containerd[1478]: time="2025-09-13T00:02:40.558284010Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-5-n-dc9d7711ed,Uid:c1ec4ef128dc8c39fce2c11f464cd2d1,Namespace:kube-system,Attempt:0,}" Sep 13 00:02:40.694296 kubelet[2230]: E0913 00:02:40.694249 2230 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://91.99.150.175:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-5-n-dc9d7711ed?timeout=10s\": dial tcp 91.99.150.175:6443: connect: connection refused" interval="800ms" Sep 13 00:02:40.866989 kubelet[2230]: I0913 00:02:40.866919 2230 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-5-n-dc9d7711ed" Sep 13 00:02:40.867469 kubelet[2230]: E0913 00:02:40.867394 2230 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://91.99.150.175:6443/api/v1/nodes\": dial tcp 91.99.150.175:6443: connect: connection refused" node="ci-4081-3-5-n-dc9d7711ed" Sep 13 00:02:41.048150 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3346184809.mount: Deactivated successfully. Sep 13 00:02:41.060571 containerd[1478]: time="2025-09-13T00:02:41.058431970Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 00:02:41.060571 containerd[1478]: time="2025-09-13T00:02:41.059780893Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 00:02:41.061402 containerd[1478]: time="2025-09-13T00:02:41.061329857Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269193" Sep 13 00:02:41.061634 containerd[1478]: time="2025-09-13T00:02:41.061601457Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 13 00:02:41.063569 containerd[1478]: time="2025-09-13T00:02:41.062330539Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 00:02:41.063569 containerd[1478]: time="2025-09-13T00:02:41.063475581Z" 
level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 00:02:41.064124 containerd[1478]: time="2025-09-13T00:02:41.064083903Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 13 00:02:41.067930 containerd[1478]: time="2025-09-13T00:02:41.067835271Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 00:02:41.077485 containerd[1478]: time="2025-09-13T00:02:41.077432132Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 533.997556ms" Sep 13 00:02:41.078184 containerd[1478]: time="2025-09-13T00:02:41.078127013Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 519.759683ms" Sep 13 00:02:41.083419 containerd[1478]: time="2025-09-13T00:02:41.083152824Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 529.432064ms" Sep 13 
00:02:41.097809 kubelet[2230]: W0913 00:02:41.096014 2230 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://91.99.150.175:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-5-n-dc9d7711ed&limit=500&resourceVersion=0": dial tcp 91.99.150.175:6443: connect: connection refused Sep 13 00:02:41.097809 kubelet[2230]: E0913 00:02:41.096094 2230 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://91.99.150.175:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-5-n-dc9d7711ed&limit=500&resourceVersion=0\": dial tcp 91.99.150.175:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:02:41.097809 kubelet[2230]: W0913 00:02:41.096147 2230 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://91.99.150.175:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 91.99.150.175:6443: connect: connection refused Sep 13 00:02:41.097809 kubelet[2230]: E0913 00:02:41.096164 2230 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://91.99.150.175:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 91.99.150.175:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:02:41.197910 containerd[1478]: time="2025-09-13T00:02:41.196304392Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:02:41.197910 containerd[1478]: time="2025-09-13T00:02:41.196369512Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:02:41.197910 containerd[1478]: time="2025-09-13T00:02:41.196398912Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:02:41.197910 containerd[1478]: time="2025-09-13T00:02:41.197159474Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:02:41.203407 containerd[1478]: time="2025-09-13T00:02:41.203239727Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:02:41.203407 containerd[1478]: time="2025-09-13T00:02:41.203304967Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:02:41.203719 containerd[1478]: time="2025-09-13T00:02:41.203321927Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:02:41.205334 containerd[1478]: time="2025-09-13T00:02:41.204403050Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:02:41.206648 containerd[1478]: time="2025-09-13T00:02:41.206358014Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:02:41.206648 containerd[1478]: time="2025-09-13T00:02:41.206424174Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:02:41.206648 containerd[1478]: time="2025-09-13T00:02:41.206438334Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:02:41.206648 containerd[1478]: time="2025-09-13T00:02:41.206526734Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:02:41.237658 systemd[1]: Started cri-containerd-85a76399766b50a6e594b37d3e8c4195eaf69956ac017625bbeeb39f0fbb6520.scope - libcontainer container 85a76399766b50a6e594b37d3e8c4195eaf69956ac017625bbeeb39f0fbb6520. Sep 13 00:02:41.238935 systemd[1]: Started cri-containerd-dce61ea7ba8264cb5db878784e69d4d2ece36348a7be12f646efdaa800229043.scope - libcontainer container dce61ea7ba8264cb5db878784e69d4d2ece36348a7be12f646efdaa800229043. Sep 13 00:02:41.247366 systemd[1]: Started cri-containerd-6c7d14df2c425a69ef7734f9dbf22d9803f51d24e1834a84689a8952cc9c20d5.scope - libcontainer container 6c7d14df2c425a69ef7734f9dbf22d9803f51d24e1834a84689a8952cc9c20d5. Sep 13 00:02:41.305088 containerd[1478]: time="2025-09-13T00:02:41.305015750Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-5-n-dc9d7711ed,Uid:523c0dc250fd7264727c483a4de83a13,Namespace:kube-system,Attempt:0,} returns sandbox id \"dce61ea7ba8264cb5db878784e69d4d2ece36348a7be12f646efdaa800229043\"" Sep 13 00:02:41.309928 containerd[1478]: time="2025-09-13T00:02:41.309873721Z" level=info msg="CreateContainer within sandbox \"dce61ea7ba8264cb5db878784e69d4d2ece36348a7be12f646efdaa800229043\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 13 00:02:41.317495 containerd[1478]: time="2025-09-13T00:02:41.317446537Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-5-n-dc9d7711ed,Uid:8c3e46d3744169c31104c1ba343487b9,Namespace:kube-system,Attempt:0,} returns sandbox id \"6c7d14df2c425a69ef7734f9dbf22d9803f51d24e1834a84689a8952cc9c20d5\"" Sep 13 00:02:41.318570 containerd[1478]: time="2025-09-13T00:02:41.318503740Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-5-n-dc9d7711ed,Uid:c1ec4ef128dc8c39fce2c11f464cd2d1,Namespace:kube-system,Attempt:0,} returns sandbox id \"85a76399766b50a6e594b37d3e8c4195eaf69956ac017625bbeeb39f0fbb6520\"" Sep 13 00:02:41.322353 containerd[1478]: time="2025-09-13T00:02:41.322168508Z" level=info msg="CreateContainer within sandbox \"6c7d14df2c425a69ef7734f9dbf22d9803f51d24e1834a84689a8952cc9c20d5\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 13 00:02:41.322625 containerd[1478]: time="2025-09-13T00:02:41.322496948Z" level=info msg="CreateContainer within sandbox \"85a76399766b50a6e594b37d3e8c4195eaf69956ac017625bbeeb39f0fbb6520\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 13 00:02:41.345368 containerd[1478]: time="2025-09-13T00:02:41.345313318Z" level=info msg="CreateContainer within sandbox \"85a76399766b50a6e594b37d3e8c4195eaf69956ac017625bbeeb39f0fbb6520\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"b89ccb613029a8a87f8b39956f0918810094bde6a6497ed48e9c3dcba431fb52\"" Sep 13 00:02:41.351587 containerd[1478]: time="2025-09-13T00:02:41.351331332Z" level=info msg="CreateContainer within sandbox \"dce61ea7ba8264cb5db878784e69d4d2ece36348a7be12f646efdaa800229043\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"ce5139c6b46951d754ecda75f67974441c924015a71bdf360c5a2f089466009b\"" Sep 13 00:02:41.352046 containerd[1478]: time="2025-09-13T00:02:41.352007573Z" level=info msg="StartContainer for \"b89ccb613029a8a87f8b39956f0918810094bde6a6497ed48e9c3dcba431fb52\"" Sep 13 00:02:41.354386 containerd[1478]: time="2025-09-13T00:02:41.354316418Z" level=info msg="CreateContainer within sandbox \"6c7d14df2c425a69ef7734f9dbf22d9803f51d24e1834a84689a8952cc9c20d5\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"9edbb78cf5f27ec30f6d332fc63659841762ffb65c290ffe26a69a72ea875690\"" Sep 13 00:02:41.357471 
containerd[1478]: time="2025-09-13T00:02:41.357375865Z" level=info msg="StartContainer for \"ce5139c6b46951d754ecda75f67974441c924015a71bdf360c5a2f089466009b\"" Sep 13 00:02:41.363951 containerd[1478]: time="2025-09-13T00:02:41.363911079Z" level=info msg="StartContainer for \"9edbb78cf5f27ec30f6d332fc63659841762ffb65c290ffe26a69a72ea875690\"" Sep 13 00:02:41.387147 systemd[1]: Started cri-containerd-b89ccb613029a8a87f8b39956f0918810094bde6a6497ed48e9c3dcba431fb52.scope - libcontainer container b89ccb613029a8a87f8b39956f0918810094bde6a6497ed48e9c3dcba431fb52. Sep 13 00:02:41.403573 systemd[1]: Started cri-containerd-ce5139c6b46951d754ecda75f67974441c924015a71bdf360c5a2f089466009b.scope - libcontainer container ce5139c6b46951d754ecda75f67974441c924015a71bdf360c5a2f089466009b. Sep 13 00:02:41.409829 systemd[1]: Started cri-containerd-9edbb78cf5f27ec30f6d332fc63659841762ffb65c290ffe26a69a72ea875690.scope - libcontainer container 9edbb78cf5f27ec30f6d332fc63659841762ffb65c290ffe26a69a72ea875690. 
Sep 13 00:02:41.471607 containerd[1478]: time="2025-09-13T00:02:41.471453994Z" level=info msg="StartContainer for \"9edbb78cf5f27ec30f6d332fc63659841762ffb65c290ffe26a69a72ea875690\" returns successfully" Sep 13 00:02:41.471607 containerd[1478]: time="2025-09-13T00:02:41.471473475Z" level=info msg="StartContainer for \"b89ccb613029a8a87f8b39956f0918810094bde6a6497ed48e9c3dcba431fb52\" returns successfully" Sep 13 00:02:41.479491 containerd[1478]: time="2025-09-13T00:02:41.479014211Z" level=info msg="StartContainer for \"ce5139c6b46951d754ecda75f67974441c924015a71bdf360c5a2f089466009b\" returns successfully" Sep 13 00:02:41.495304 kubelet[2230]: E0913 00:02:41.495242 2230 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://91.99.150.175:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-5-n-dc9d7711ed?timeout=10s\": dial tcp 91.99.150.175:6443: connect: connection refused" interval="1.6s" Sep 13 00:02:41.571605 kubelet[2230]: W0913 00:02:41.569534 2230 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://91.99.150.175:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 91.99.150.175:6443: connect: connection refused Sep 13 00:02:41.571605 kubelet[2230]: E0913 00:02:41.571571 2230 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://91.99.150.175:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 91.99.150.175:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:02:41.605838 kubelet[2230]: W0913 00:02:41.605731 2230 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://91.99.150.175:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 91.99.150.175:6443: connect: connection 
refused Sep 13 00:02:41.605838 kubelet[2230]: E0913 00:02:41.605807 2230 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://91.99.150.175:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 91.99.150.175:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:02:41.671534 kubelet[2230]: I0913 00:02:41.670730 2230 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-5-n-dc9d7711ed" Sep 13 00:02:43.777784 kubelet[2230]: E0913 00:02:43.777727 2230 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-3-5-n-dc9d7711ed\" not found" node="ci-4081-3-5-n-dc9d7711ed" Sep 13 00:02:43.809748 kubelet[2230]: I0913 00:02:43.809706 2230 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081-3-5-n-dc9d7711ed" Sep 13 00:02:43.809748 kubelet[2230]: E0913 00:02:43.809754 2230 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-4081-3-5-n-dc9d7711ed\": node \"ci-4081-3-5-n-dc9d7711ed\" not found" Sep 13 00:02:44.071980 kubelet[2230]: I0913 00:02:44.071871 2230 apiserver.go:52] "Watching apiserver" Sep 13 00:02:44.091394 kubelet[2230]: I0913 00:02:44.091344 2230 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 13 00:02:45.877592 systemd[1]: Reloading requested from client PID 2505 ('systemctl') (unit session-7.scope)... Sep 13 00:02:45.877609 systemd[1]: Reloading... Sep 13 00:02:45.971586 zram_generator::config[2545]: No configuration found. Sep 13 00:02:46.094841 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:02:46.187667 systemd[1]: Reloading finished in 309 ms. 
Sep 13 00:02:46.233020 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:02:46.249068 systemd[1]: kubelet.service: Deactivated successfully. Sep 13 00:02:46.249737 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:02:46.249948 systemd[1]: kubelet.service: Consumed 1.576s CPU time, 127.4M memory peak, 0B memory swap peak. Sep 13 00:02:46.258477 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:02:46.429809 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:02:46.447403 (kubelet)[2590]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 13 00:02:46.520006 kubelet[2590]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 00:02:46.520006 kubelet[2590]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 13 00:02:46.520006 kubelet[2590]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 13 00:02:46.520758 kubelet[2590]: I0913 00:02:46.520273 2590 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 13 00:02:46.531467 kubelet[2590]: I0913 00:02:46.531425 2590 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 13 00:02:46.532727 kubelet[2590]: I0913 00:02:46.531632 2590 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 13 00:02:46.532727 kubelet[2590]: I0913 00:02:46.531874 2590 server.go:934] "Client rotation is on, will bootstrap in background" Sep 13 00:02:46.534755 kubelet[2590]: I0913 00:02:46.534724 2590 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 13 00:02:46.539654 kubelet[2590]: I0913 00:02:46.539530 2590 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 13 00:02:46.543461 kubelet[2590]: E0913 00:02:46.543417 2590 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 13 00:02:46.543606 kubelet[2590]: I0913 00:02:46.543591 2590 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 13 00:02:46.545819 kubelet[2590]: I0913 00:02:46.545802 2590 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 13 00:02:46.546034 kubelet[2590]: I0913 00:02:46.546022 2590 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 13 00:02:46.546326 kubelet[2590]: I0913 00:02:46.546294 2590 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 13 00:02:46.546603 kubelet[2590]: I0913 00:02:46.546390 2590 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-5-n-dc9d7711ed","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","Topolog
yManagerPolicyOptions":null,"CgroupVersion":2} Sep 13 00:02:46.546734 kubelet[2590]: I0913 00:02:46.546721 2590 topology_manager.go:138] "Creating topology manager with none policy" Sep 13 00:02:46.546796 kubelet[2590]: I0913 00:02:46.546788 2590 container_manager_linux.go:300] "Creating device plugin manager" Sep 13 00:02:46.546878 kubelet[2590]: I0913 00:02:46.546870 2590 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:02:46.547039 kubelet[2590]: I0913 00:02:46.547027 2590 kubelet.go:408] "Attempting to sync node with API server" Sep 13 00:02:46.547108 kubelet[2590]: I0913 00:02:46.547099 2590 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 13 00:02:46.547179 kubelet[2590]: I0913 00:02:46.547170 2590 kubelet.go:314] "Adding apiserver pod source" Sep 13 00:02:46.547252 kubelet[2590]: I0913 00:02:46.547243 2590 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 13 00:02:46.568792 kubelet[2590]: I0913 00:02:46.568754 2590 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 13 00:02:46.569639 kubelet[2590]: I0913 00:02:46.569611 2590 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 13 00:02:46.570510 kubelet[2590]: I0913 00:02:46.570492 2590 server.go:1274] "Started kubelet" Sep 13 00:02:46.573765 kubelet[2590]: I0913 00:02:46.573426 2590 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 13 00:02:46.578400 kubelet[2590]: I0913 00:02:46.578370 2590 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 13 00:02:46.578600 kubelet[2590]: I0913 00:02:46.578465 2590 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 13 00:02:46.581275 kubelet[2590]: I0913 00:02:46.581224 2590 server.go:449] "Adding debug handlers to kubelet server" Sep 13 
00:02:46.582569 kubelet[2590]: I0913 00:02:46.582381 2590 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 13 00:02:46.582666 kubelet[2590]: I0913 00:02:46.582645 2590 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 13 00:02:46.585311 kubelet[2590]: I0913 00:02:46.585287 2590 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 13 00:02:46.588885 kubelet[2590]: I0913 00:02:46.588643 2590 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 13 00:02:46.588885 kubelet[2590]: I0913 00:02:46.588785 2590 reconciler.go:26] "Reconciler: start to sync state" Sep 13 00:02:46.591492 kubelet[2590]: E0913 00:02:46.590689 2590 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 13 00:02:46.591492 kubelet[2590]: I0913 00:02:46.590794 2590 factory.go:221] Registration of the systemd container factory successfully Sep 13 00:02:46.591492 kubelet[2590]: I0913 00:02:46.590905 2590 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 13 00:02:46.596321 kubelet[2590]: I0913 00:02:46.596274 2590 factory.go:221] Registration of the containerd container factory successfully Sep 13 00:02:46.600778 kubelet[2590]: I0913 00:02:46.600721 2590 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 13 00:02:46.605096 kubelet[2590]: I0913 00:02:46.604870 2590 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 13 00:02:46.605096 kubelet[2590]: I0913 00:02:46.605084 2590 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 13 00:02:46.605096 kubelet[2590]: I0913 00:02:46.605103 2590 kubelet.go:2321] "Starting kubelet main sync loop" Sep 13 00:02:46.605434 kubelet[2590]: E0913 00:02:46.605145 2590 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 13 00:02:46.648794 kubelet[2590]: I0913 00:02:46.648749 2590 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 13 00:02:46.648794 kubelet[2590]: I0913 00:02:46.648787 2590 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 13 00:02:46.649006 kubelet[2590]: I0913 00:02:46.648825 2590 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:02:46.649117 kubelet[2590]: I0913 00:02:46.649088 2590 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 13 00:02:46.649167 kubelet[2590]: I0913 00:02:46.649117 2590 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 13 00:02:46.649167 kubelet[2590]: I0913 00:02:46.649152 2590 policy_none.go:49] "None policy: Start" Sep 13 00:02:46.650734 kubelet[2590]: I0913 00:02:46.650349 2590 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 13 00:02:46.650734 kubelet[2590]: I0913 00:02:46.650395 2590 state_mem.go:35] "Initializing new in-memory state store" Sep 13 00:02:46.652818 kubelet[2590]: I0913 00:02:46.652684 2590 state_mem.go:75] "Updated machine memory state" Sep 13 00:02:46.661199 kubelet[2590]: I0913 00:02:46.660689 2590 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 13 00:02:46.661199 kubelet[2590]: I0913 00:02:46.660883 2590 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 13 00:02:46.661199 kubelet[2590]: I0913 00:02:46.660895 2590 container_log_manager.go:189] 
"Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 13 00:02:46.661199 kubelet[2590]: I0913 00:02:46.661156 2590 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 13 00:02:46.769732 kubelet[2590]: I0913 00:02:46.768295 2590 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-5-n-dc9d7711ed" Sep 13 00:02:46.782003 kubelet[2590]: I0913 00:02:46.781973 2590 kubelet_node_status.go:111] "Node was previously registered" node="ci-4081-3-5-n-dc9d7711ed" Sep 13 00:02:46.782355 kubelet[2590]: I0913 00:02:46.782305 2590 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081-3-5-n-dc9d7711ed" Sep 13 00:02:46.789718 kubelet[2590]: I0913 00:02:46.789672 2590 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c1ec4ef128dc8c39fce2c11f464cd2d1-kubeconfig\") pod \"kube-scheduler-ci-4081-3-5-n-dc9d7711ed\" (UID: \"c1ec4ef128dc8c39fce2c11f464cd2d1\") " pod="kube-system/kube-scheduler-ci-4081-3-5-n-dc9d7711ed" Sep 13 00:02:46.789718 kubelet[2590]: I0913 00:02:46.789715 2590 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/523c0dc250fd7264727c483a4de83a13-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-5-n-dc9d7711ed\" (UID: \"523c0dc250fd7264727c483a4de83a13\") " pod="kube-system/kube-apiserver-ci-4081-3-5-n-dc9d7711ed" Sep 13 00:02:46.789968 kubelet[2590]: I0913 00:02:46.789741 2590 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8c3e46d3744169c31104c1ba343487b9-ca-certs\") pod \"kube-controller-manager-ci-4081-3-5-n-dc9d7711ed\" (UID: \"8c3e46d3744169c31104c1ba343487b9\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-dc9d7711ed" Sep 13 00:02:46.789968 kubelet[2590]: I0913 
00:02:46.789763 2590 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8c3e46d3744169c31104c1ba343487b9-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-5-n-dc9d7711ed\" (UID: \"8c3e46d3744169c31104c1ba343487b9\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-dc9d7711ed" Sep 13 00:02:46.789968 kubelet[2590]: I0913 00:02:46.789782 2590 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8c3e46d3744169c31104c1ba343487b9-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-5-n-dc9d7711ed\" (UID: \"8c3e46d3744169c31104c1ba343487b9\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-dc9d7711ed" Sep 13 00:02:46.789968 kubelet[2590]: I0913 00:02:46.789800 2590 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8c3e46d3744169c31104c1ba343487b9-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-5-n-dc9d7711ed\" (UID: \"8c3e46d3744169c31104c1ba343487b9\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-dc9d7711ed" Sep 13 00:02:46.789968 kubelet[2590]: I0913 00:02:46.789819 2590 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/523c0dc250fd7264727c483a4de83a13-ca-certs\") pod \"kube-apiserver-ci-4081-3-5-n-dc9d7711ed\" (UID: \"523c0dc250fd7264727c483a4de83a13\") " pod="kube-system/kube-apiserver-ci-4081-3-5-n-dc9d7711ed" Sep 13 00:02:46.790131 kubelet[2590]: I0913 00:02:46.789837 2590 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/523c0dc250fd7264727c483a4de83a13-k8s-certs\") pod 
\"kube-apiserver-ci-4081-3-5-n-dc9d7711ed\" (UID: \"523c0dc250fd7264727c483a4de83a13\") " pod="kube-system/kube-apiserver-ci-4081-3-5-n-dc9d7711ed" Sep 13 00:02:46.790131 kubelet[2590]: I0913 00:02:46.789855 2590 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8c3e46d3744169c31104c1ba343487b9-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-5-n-dc9d7711ed\" (UID: \"8c3e46d3744169c31104c1ba343487b9\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-dc9d7711ed" Sep 13 00:02:46.878755 sudo[2623]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 13 00:02:46.879045 sudo[2623]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Sep 13 00:02:47.354311 sudo[2623]: pam_unix(sudo:session): session closed for user root Sep 13 00:02:47.550023 kubelet[2590]: I0913 00:02:47.549972 2590 apiserver.go:52] "Watching apiserver" Sep 13 00:02:47.589232 kubelet[2590]: I0913 00:02:47.589173 2590 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 13 00:02:47.640493 kubelet[2590]: E0913 00:02:47.640230 2590 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081-3-5-n-dc9d7711ed\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-5-n-dc9d7711ed" Sep 13 00:02:47.640493 kubelet[2590]: I0913 00:02:47.640242 2590 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-5-n-dc9d7711ed" podStartSLOduration=1.6402100819999998 podStartE2EDuration="1.640210082s" podCreationTimestamp="2025-09-13 00:02:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:02:47.637632038 +0000 UTC m=+1.179617768" watchObservedRunningTime="2025-09-13 00:02:47.640210082 +0000 UTC m=+1.182195812" Sep 
13 00:02:47.643803 kubelet[2590]: E0913 00:02:47.643566 2590 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4081-3-5-n-dc9d7711ed\" already exists" pod="kube-system/kube-controller-manager-ci-4081-3-5-n-dc9d7711ed" Sep 13 00:02:47.665007 kubelet[2590]: I0913 00:02:47.664940 2590 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-5-n-dc9d7711ed" podStartSLOduration=1.664922679 podStartE2EDuration="1.664922679s" podCreationTimestamp="2025-09-13 00:02:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:02:47.663793077 +0000 UTC m=+1.205778847" watchObservedRunningTime="2025-09-13 00:02:47.664922679 +0000 UTC m=+1.206908409" Sep 13 00:02:47.665186 kubelet[2590]: I0913 00:02:47.665034 2590 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-5-n-dc9d7711ed" podStartSLOduration=1.665029039 podStartE2EDuration="1.665029039s" podCreationTimestamp="2025-09-13 00:02:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:02:47.649978696 +0000 UTC m=+1.191964386" watchObservedRunningTime="2025-09-13 00:02:47.665029039 +0000 UTC m=+1.207014769" Sep 13 00:02:49.198809 sudo[1747]: pam_unix(sudo:session): session closed for user root Sep 13 00:02:49.359691 sshd[1744]: pam_unix(sshd:session): session closed for user core Sep 13 00:02:49.366097 systemd[1]: sshd@6-91.99.150.175:22-147.75.109.163:50848.service: Deactivated successfully. Sep 13 00:02:49.368735 systemd[1]: session-7.scope: Deactivated successfully. Sep 13 00:02:49.369155 systemd[1]: session-7.scope: Consumed 6.884s CPU time, 152.0M memory peak, 0B memory swap peak. Sep 13 00:02:49.370115 systemd-logind[1457]: Session 7 logged out. 
Waiting for processes to exit. Sep 13 00:02:49.371392 systemd-logind[1457]: Removed session 7. Sep 13 00:02:50.664042 kubelet[2590]: I0913 00:02:50.663966 2590 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 13 00:02:50.665568 kubelet[2590]: I0913 00:02:50.665266 2590 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 13 00:02:50.666412 containerd[1478]: time="2025-09-13T00:02:50.664874052Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 13 00:02:51.424788 kubelet[2590]: W0913 00:02:51.424747 2590 reflector.go:561] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-4081-3-5-n-dc9d7711ed" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081-3-5-n-dc9d7711ed' and this object Sep 13 00:02:51.424927 kubelet[2590]: E0913 00:02:51.424797 2590 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"hubble-server-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"hubble-server-certs\" is forbidden: User \"system:node:ci-4081-3-5-n-dc9d7711ed\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4081-3-5-n-dc9d7711ed' and this object" logger="UnhandledError" Sep 13 00:02:51.427641 kubelet[2590]: W0913 00:02:51.427467 2590 reflector.go:561] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-4081-3-5-n-dc9d7711ed" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081-3-5-n-dc9d7711ed' and this object Sep 13 00:02:51.427641 kubelet[2590]: E0913 00:02:51.427512 2590 reflector.go:158] "Unhandled Error" 
err="object-\"kube-system\"/\"cilium-clustermesh\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:ci-4081-3-5-n-dc9d7711ed\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4081-3-5-n-dc9d7711ed' and this object" logger="UnhandledError" Sep 13 00:02:51.427641 kubelet[2590]: W0913 00:02:51.427580 2590 reflector.go:561] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-4081-3-5-n-dc9d7711ed" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081-3-5-n-dc9d7711ed' and this object Sep 13 00:02:51.427641 kubelet[2590]: E0913 00:02:51.427613 2590 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cilium-config\" is forbidden: User \"system:node:ci-4081-3-5-n-dc9d7711ed\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4081-3-5-n-dc9d7711ed' and this object" logger="UnhandledError" Sep 13 00:02:51.428515 systemd[1]: Created slice kubepods-besteffort-pod2d5bd958_d5eb_46f8_b014_30ec2d5a75da.slice - libcontainer container kubepods-besteffort-pod2d5bd958_d5eb_46f8_b014_30ec2d5a75da.slice. Sep 13 00:02:51.441871 systemd[1]: Created slice kubepods-burstable-poda949daff_5dad_4f8b_83c1_0800eccfea7c.slice - libcontainer container kubepods-burstable-poda949daff_5dad_4f8b_83c1_0800eccfea7c.slice. 
Sep 13 00:02:51.519599 kubelet[2590]: I0913 00:02:51.519496 2590 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a949daff-5dad-4f8b-83c1-0800eccfea7c-bpf-maps\") pod \"cilium-vjgsk\" (UID: \"a949daff-5dad-4f8b-83c1-0800eccfea7c\") " pod="kube-system/cilium-vjgsk" Sep 13 00:02:51.519753 kubelet[2590]: I0913 00:02:51.519608 2590 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89wrf\" (UniqueName: \"kubernetes.io/projected/2d5bd958-d5eb-46f8-b014-30ec2d5a75da-kube-api-access-89wrf\") pod \"kube-proxy-dmwqc\" (UID: \"2d5bd958-d5eb-46f8-b014-30ec2d5a75da\") " pod="kube-system/kube-proxy-dmwqc" Sep 13 00:02:51.519753 kubelet[2590]: I0913 00:02:51.519654 2590 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a949daff-5dad-4f8b-83c1-0800eccfea7c-host-proc-sys-net\") pod \"cilium-vjgsk\" (UID: \"a949daff-5dad-4f8b-83c1-0800eccfea7c\") " pod="kube-system/cilium-vjgsk" Sep 13 00:02:51.519753 kubelet[2590]: I0913 00:02:51.519691 2590 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a949daff-5dad-4f8b-83c1-0800eccfea7c-hubble-tls\") pod \"cilium-vjgsk\" (UID: \"a949daff-5dad-4f8b-83c1-0800eccfea7c\") " pod="kube-system/cilium-vjgsk" Sep 13 00:02:51.519753 kubelet[2590]: I0913 00:02:51.519725 2590 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2d5bd958-d5eb-46f8-b014-30ec2d5a75da-xtables-lock\") pod \"kube-proxy-dmwqc\" (UID: \"2d5bd958-d5eb-46f8-b014-30ec2d5a75da\") " pod="kube-system/kube-proxy-dmwqc" Sep 13 00:02:51.519842 kubelet[2590]: I0913 00:02:51.519761 2590 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a949daff-5dad-4f8b-83c1-0800eccfea7c-cilium-config-path\") pod \"cilium-vjgsk\" (UID: \"a949daff-5dad-4f8b-83c1-0800eccfea7c\") " pod="kube-system/cilium-vjgsk" Sep 13 00:02:51.519842 kubelet[2590]: I0913 00:02:51.519797 2590 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2d5bd958-d5eb-46f8-b014-30ec2d5a75da-kube-proxy\") pod \"kube-proxy-dmwqc\" (UID: \"2d5bd958-d5eb-46f8-b014-30ec2d5a75da\") " pod="kube-system/kube-proxy-dmwqc" Sep 13 00:02:51.519842 kubelet[2590]: I0913 00:02:51.519831 2590 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a949daff-5dad-4f8b-83c1-0800eccfea7c-hostproc\") pod \"cilium-vjgsk\" (UID: \"a949daff-5dad-4f8b-83c1-0800eccfea7c\") " pod="kube-system/cilium-vjgsk" Sep 13 00:02:51.519908 kubelet[2590]: I0913 00:02:51.519865 2590 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2d5bd958-d5eb-46f8-b014-30ec2d5a75da-lib-modules\") pod \"kube-proxy-dmwqc\" (UID: \"2d5bd958-d5eb-46f8-b014-30ec2d5a75da\") " pod="kube-system/kube-proxy-dmwqc" Sep 13 00:02:51.520281 kubelet[2590]: I0913 00:02:51.519900 2590 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a949daff-5dad-4f8b-83c1-0800eccfea7c-clustermesh-secrets\") pod \"cilium-vjgsk\" (UID: \"a949daff-5dad-4f8b-83c1-0800eccfea7c\") " pod="kube-system/cilium-vjgsk" Sep 13 00:02:51.520281 kubelet[2590]: I0913 00:02:51.519970 2590 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/a949daff-5dad-4f8b-83c1-0800eccfea7c-cilium-cgroup\") pod \"cilium-vjgsk\" (UID: \"a949daff-5dad-4f8b-83c1-0800eccfea7c\") " pod="kube-system/cilium-vjgsk" Sep 13 00:02:51.520281 kubelet[2590]: I0913 00:02:51.520003 2590 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a949daff-5dad-4f8b-83c1-0800eccfea7c-lib-modules\") pod \"cilium-vjgsk\" (UID: \"a949daff-5dad-4f8b-83c1-0800eccfea7c\") " pod="kube-system/cilium-vjgsk" Sep 13 00:02:51.520281 kubelet[2590]: I0913 00:02:51.520034 2590 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a949daff-5dad-4f8b-83c1-0800eccfea7c-cilium-run\") pod \"cilium-vjgsk\" (UID: \"a949daff-5dad-4f8b-83c1-0800eccfea7c\") " pod="kube-system/cilium-vjgsk" Sep 13 00:02:51.520281 kubelet[2590]: I0913 00:02:51.520066 2590 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a949daff-5dad-4f8b-83c1-0800eccfea7c-cni-path\") pod \"cilium-vjgsk\" (UID: \"a949daff-5dad-4f8b-83c1-0800eccfea7c\") " pod="kube-system/cilium-vjgsk" Sep 13 00:02:51.520281 kubelet[2590]: I0913 00:02:51.520097 2590 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a949daff-5dad-4f8b-83c1-0800eccfea7c-etc-cni-netd\") pod \"cilium-vjgsk\" (UID: \"a949daff-5dad-4f8b-83c1-0800eccfea7c\") " pod="kube-system/cilium-vjgsk" Sep 13 00:02:51.520506 kubelet[2590]: I0913 00:02:51.520140 2590 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a949daff-5dad-4f8b-83c1-0800eccfea7c-xtables-lock\") pod \"cilium-vjgsk\" (UID: \"a949daff-5dad-4f8b-83c1-0800eccfea7c\") " 
pod="kube-system/cilium-vjgsk" Sep 13 00:02:51.520506 kubelet[2590]: I0913 00:02:51.520208 2590 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a949daff-5dad-4f8b-83c1-0800eccfea7c-host-proc-sys-kernel\") pod \"cilium-vjgsk\" (UID: \"a949daff-5dad-4f8b-83c1-0800eccfea7c\") " pod="kube-system/cilium-vjgsk" Sep 13 00:02:51.520506 kubelet[2590]: I0913 00:02:51.520256 2590 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lv4qx\" (UniqueName: \"kubernetes.io/projected/a949daff-5dad-4f8b-83c1-0800eccfea7c-kube-api-access-lv4qx\") pod \"cilium-vjgsk\" (UID: \"a949daff-5dad-4f8b-83c1-0800eccfea7c\") " pod="kube-system/cilium-vjgsk" Sep 13 00:02:51.740581 containerd[1478]: time="2025-09-13T00:02:51.740435993Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dmwqc,Uid:2d5bd958-d5eb-46f8-b014-30ec2d5a75da,Namespace:kube-system,Attempt:0,}" Sep 13 00:02:51.776530 systemd[1]: Created slice kubepods-besteffort-pod27180f33_be9f_4033_84fc_3b6ad1ee0241.slice - libcontainer container kubepods-besteffort-pod27180f33_be9f_4033_84fc_3b6ad1ee0241.slice. Sep 13 00:02:51.794289 containerd[1478]: time="2025-09-13T00:02:51.793999294Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:02:51.794289 containerd[1478]: time="2025-09-13T00:02:51.794062134Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:02:51.794289 containerd[1478]: time="2025-09-13T00:02:51.794074134Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:02:51.794910 containerd[1478]: time="2025-09-13T00:02:51.794171614Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:02:51.823561 kubelet[2590]: I0913 00:02:51.821885 2590 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqhh6\" (UniqueName: \"kubernetes.io/projected/27180f33-be9f-4033-84fc-3b6ad1ee0241-kube-api-access-bqhh6\") pod \"cilium-operator-5d85765b45-hl248\" (UID: \"27180f33-be9f-4033-84fc-3b6ad1ee0241\") " pod="kube-system/cilium-operator-5d85765b45-hl248" Sep 13 00:02:51.823561 kubelet[2590]: I0913 00:02:51.821956 2590 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/27180f33-be9f-4033-84fc-3b6ad1ee0241-cilium-config-path\") pod \"cilium-operator-5d85765b45-hl248\" (UID: \"27180f33-be9f-4033-84fc-3b6ad1ee0241\") " pod="kube-system/cilium-operator-5d85765b45-hl248" Sep 13 00:02:51.827015 systemd[1]: Started cri-containerd-cfab5b8e738b46414f361ea4f3199b683b315dc643ee7b253e9ae0c951c95c28.scope - libcontainer container cfab5b8e738b46414f361ea4f3199b683b315dc643ee7b253e9ae0c951c95c28. 
Sep 13 00:02:51.865198 containerd[1478]: time="2025-09-13T00:02:51.864683015Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dmwqc,Uid:2d5bd958-d5eb-46f8-b014-30ec2d5a75da,Namespace:kube-system,Attempt:0,} returns sandbox id \"cfab5b8e738b46414f361ea4f3199b683b315dc643ee7b253e9ae0c951c95c28\"" Sep 13 00:02:51.869562 containerd[1478]: time="2025-09-13T00:02:51.869425301Z" level=info msg="CreateContainer within sandbox \"cfab5b8e738b46414f361ea4f3199b683b315dc643ee7b253e9ae0c951c95c28\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 13 00:02:51.885922 containerd[1478]: time="2025-09-13T00:02:51.885865560Z" level=info msg="CreateContainer within sandbox \"cfab5b8e738b46414f361ea4f3199b683b315dc643ee7b253e9ae0c951c95c28\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"1dcfcd62ef2d9ac94160ef26f3b4f82401262c33b96a3377c8a4579c9939a069\"" Sep 13 00:02:51.887018 containerd[1478]: time="2025-09-13T00:02:51.886980281Z" level=info msg="StartContainer for \"1dcfcd62ef2d9ac94160ef26f3b4f82401262c33b96a3377c8a4579c9939a069\"" Sep 13 00:02:51.914765 systemd[1]: Started cri-containerd-1dcfcd62ef2d9ac94160ef26f3b4f82401262c33b96a3377c8a4579c9939a069.scope - libcontainer container 1dcfcd62ef2d9ac94160ef26f3b4f82401262c33b96a3377c8a4579c9939a069. Sep 13 00:02:51.954009 containerd[1478]: time="2025-09-13T00:02:51.953904598Z" level=info msg="StartContainer for \"1dcfcd62ef2d9ac94160ef26f3b4f82401262c33b96a3377c8a4579c9939a069\" returns successfully" Sep 13 00:02:52.622603 kubelet[2590]: E0913 00:02:52.622027 2590 configmap.go:193] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition Sep 13 00:02:52.622603 kubelet[2590]: E0913 00:02:52.622201 2590 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a949daff-5dad-4f8b-83c1-0800eccfea7c-cilium-config-path podName:a949daff-5dad-4f8b-83c1-0800eccfea7c nodeName:}" failed. 
No retries permitted until 2025-09-13 00:02:53.12212136 +0000 UTC m=+6.664107130 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/a949daff-5dad-4f8b-83c1-0800eccfea7c-cilium-config-path") pod "cilium-vjgsk" (UID: "a949daff-5dad-4f8b-83c1-0800eccfea7c") : failed to sync configmap cache: timed out waiting for the condition Sep 13 00:02:52.658596 kubelet[2590]: I0913 00:02:52.657949 2590 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-dmwqc" podStartSLOduration=1.657932599 podStartE2EDuration="1.657932599s" podCreationTimestamp="2025-09-13 00:02:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:02:52.657748719 +0000 UTC m=+6.199734449" watchObservedRunningTime="2025-09-13 00:02:52.657932599 +0000 UTC m=+6.199918329" Sep 13 00:02:52.682207 containerd[1478]: time="2025-09-13T00:02:52.681609624Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-hl248,Uid:27180f33-be9f-4033-84fc-3b6ad1ee0241,Namespace:kube-system,Attempt:0,}" Sep 13 00:02:52.708004 containerd[1478]: time="2025-09-13T00:02:52.707897333Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:02:52.708004 containerd[1478]: time="2025-09-13T00:02:52.707957773Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:02:52.708215 containerd[1478]: time="2025-09-13T00:02:52.707972933Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:02:52.709318 containerd[1478]: time="2025-09-13T00:02:52.709167654Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:02:52.729741 systemd[1]: Started cri-containerd-8e42e88f6b1df1318ffa5be2f9a6326c7e525bef6765c7cb2deee26f357edce3.scope - libcontainer container 8e42e88f6b1df1318ffa5be2f9a6326c7e525bef6765c7cb2deee26f357edce3.
Sep 13 00:02:52.767461 containerd[1478]: time="2025-09-13T00:02:52.767410797Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-hl248,Uid:27180f33-be9f-4033-84fc-3b6ad1ee0241,Namespace:kube-system,Attempt:0,} returns sandbox id \"8e42e88f6b1df1318ffa5be2f9a6326c7e525bef6765c7cb2deee26f357edce3\""
Sep 13 00:02:52.770252 containerd[1478]: time="2025-09-13T00:02:52.769977440Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Sep 13 00:02:53.248294 containerd[1478]: time="2025-09-13T00:02:53.248218938Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vjgsk,Uid:a949daff-5dad-4f8b-83c1-0800eccfea7c,Namespace:kube-system,Attempt:0,}"
Sep 13 00:02:53.274609 containerd[1478]: time="2025-09-13T00:02:53.274319164Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 00:02:53.274609 containerd[1478]: time="2025-09-13T00:02:53.274386484Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 00:02:53.274609 containerd[1478]: time="2025-09-13T00:02:53.274399484Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:02:53.274609 containerd[1478]: time="2025-09-13T00:02:53.274498364Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:02:53.299820 systemd[1]: Started cri-containerd-b3da729a12d13119d38fee13c310415fefd1acba371871edee943161b3a408c7.scope - libcontainer container b3da729a12d13119d38fee13c310415fefd1acba371871edee943161b3a408c7.
Sep 13 00:02:53.325215 containerd[1478]: time="2025-09-13T00:02:53.325172855Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vjgsk,Uid:a949daff-5dad-4f8b-83c1-0800eccfea7c,Namespace:kube-system,Attempt:0,} returns sandbox id \"b3da729a12d13119d38fee13c310415fefd1acba371871edee943161b3a408c7\""
Sep 13 00:02:54.815817 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount235651498.mount: Deactivated successfully.
Sep 13 00:02:55.147134 containerd[1478]: time="2025-09-13T00:02:55.147053053Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:02:55.149469 containerd[1478]: time="2025-09-13T00:02:55.149402175Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306"
Sep 13 00:02:55.151592 containerd[1478]: time="2025-09-13T00:02:55.150599096Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:02:55.155609 containerd[1478]: time="2025-09-13T00:02:55.155536021Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 2.385514221s"
Sep 13 00:02:55.155609 containerd[1478]: time="2025-09-13T00:02:55.155611461Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
Sep 13 00:02:55.156730 containerd[1478]: time="2025-09-13T00:02:55.156697902Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Sep 13 00:02:55.159337 containerd[1478]: time="2025-09-13T00:02:55.159304944Z" level=info msg="CreateContainer within sandbox \"8e42e88f6b1df1318ffa5be2f9a6326c7e525bef6765c7cb2deee26f357edce3\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Sep 13 00:02:55.175231 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3898425344.mount: Deactivated successfully.
Sep 13 00:02:55.176888 containerd[1478]: time="2025-09-13T00:02:55.176829160Z" level=info msg="CreateContainer within sandbox \"8e42e88f6b1df1318ffa5be2f9a6326c7e525bef6765c7cb2deee26f357edce3\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"55e580bed9ae8c2764c9059a5f2cf07fa2b05aacfd5879170b5855c03702792b\""
Sep 13 00:02:55.178654 containerd[1478]: time="2025-09-13T00:02:55.177802561Z" level=info msg="StartContainer for \"55e580bed9ae8c2764c9059a5f2cf07fa2b05aacfd5879170b5855c03702792b\""
Sep 13 00:02:55.210916 systemd[1]: Started cri-containerd-55e580bed9ae8c2764c9059a5f2cf07fa2b05aacfd5879170b5855c03702792b.scope - libcontainer container 55e580bed9ae8c2764c9059a5f2cf07fa2b05aacfd5879170b5855c03702792b.
Sep 13 00:02:55.242117 containerd[1478]: time="2025-09-13T00:02:55.241994658Z" level=info msg="StartContainer for \"55e580bed9ae8c2764c9059a5f2cf07fa2b05aacfd5879170b5855c03702792b\" returns successfully"
Sep 13 00:02:55.686577 kubelet[2590]: I0913 00:02:55.686474 2590 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-hl248" podStartSLOduration=2.29962543 podStartE2EDuration="4.686457732s" podCreationTimestamp="2025-09-13 00:02:51 +0000 UTC" firstStartedPulling="2025-09-13 00:02:52.769407959 +0000 UTC m=+6.311393689" lastFinishedPulling="2025-09-13 00:02:55.156240261 +0000 UTC m=+8.698225991" observedRunningTime="2025-09-13 00:02:55.685535091 +0000 UTC m=+9.227520821" watchObservedRunningTime="2025-09-13 00:02:55.686457732 +0000 UTC m=+9.228443422"
Sep 13 00:02:59.762417 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount584159788.mount: Deactivated successfully.
Sep 13 00:03:01.260918 containerd[1478]: time="2025-09-13T00:03:01.259496396Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:03:01.260918 containerd[1478]: time="2025-09-13T00:03:01.260860397Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710"
Sep 13 00:03:01.261983 containerd[1478]: time="2025-09-13T00:03:01.261929798Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:03:01.265751 containerd[1478]: time="2025-09-13T00:03:01.264776959Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 6.108041817s"
Sep 13 00:03:01.265751 containerd[1478]: time="2025-09-13T00:03:01.264827519Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\""
Sep 13 00:03:01.267995 containerd[1478]: time="2025-09-13T00:03:01.267482801Z" level=info msg="CreateContainer within sandbox \"b3da729a12d13119d38fee13c310415fefd1acba371871edee943161b3a408c7\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Sep 13 00:03:01.293112 containerd[1478]: time="2025-09-13T00:03:01.293040976Z" level=info msg="CreateContainer within sandbox \"b3da729a12d13119d38fee13c310415fefd1acba371871edee943161b3a408c7\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"be7c605ac21187ce18ea1226bbc9f12c34368e298412d34702ab2d7de8262abc\""
Sep 13 00:03:01.294003 containerd[1478]: time="2025-09-13T00:03:01.293962257Z" level=info msg="StartContainer for \"be7c605ac21187ce18ea1226bbc9f12c34368e298412d34702ab2d7de8262abc\""
Sep 13 00:03:01.325815 systemd[1]: Started cri-containerd-be7c605ac21187ce18ea1226bbc9f12c34368e298412d34702ab2d7de8262abc.scope - libcontainer container be7c605ac21187ce18ea1226bbc9f12c34368e298412d34702ab2d7de8262abc.
Sep 13 00:03:01.354141 containerd[1478]: time="2025-09-13T00:03:01.354081573Z" level=info msg="StartContainer for \"be7c605ac21187ce18ea1226bbc9f12c34368e298412d34702ab2d7de8262abc\" returns successfully"
Sep 13 00:03:01.369728 systemd[1]: cri-containerd-be7c605ac21187ce18ea1226bbc9f12c34368e298412d34702ab2d7de8262abc.scope: Deactivated successfully.
Sep 13 00:03:01.562837 containerd[1478]: time="2025-09-13T00:03:01.562590579Z" level=info msg="shim disconnected" id=be7c605ac21187ce18ea1226bbc9f12c34368e298412d34702ab2d7de8262abc namespace=k8s.io
Sep 13 00:03:01.562837 containerd[1478]: time="2025-09-13T00:03:01.562723379Z" level=warning msg="cleaning up after shim disconnected" id=be7c605ac21187ce18ea1226bbc9f12c34368e298412d34702ab2d7de8262abc namespace=k8s.io
Sep 13 00:03:01.562837 containerd[1478]: time="2025-09-13T00:03:01.562737459Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 13 00:03:01.678278 containerd[1478]: time="2025-09-13T00:03:01.678234768Z" level=info msg="CreateContainer within sandbox \"b3da729a12d13119d38fee13c310415fefd1acba371871edee943161b3a408c7\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Sep 13 00:03:01.695807 containerd[1478]: time="2025-09-13T00:03:01.695658899Z" level=info msg="CreateContainer within sandbox \"b3da729a12d13119d38fee13c310415fefd1acba371871edee943161b3a408c7\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"4f933c5a5117d3cc219527c6c2a3abb5df99502c009ad5e688e21b49da5b22d4\""
Sep 13 00:03:01.696461 containerd[1478]: time="2025-09-13T00:03:01.696359179Z" level=info msg="StartContainer for \"4f933c5a5117d3cc219527c6c2a3abb5df99502c009ad5e688e21b49da5b22d4\""
Sep 13 00:03:01.725034 systemd[1]: Started cri-containerd-4f933c5a5117d3cc219527c6c2a3abb5df99502c009ad5e688e21b49da5b22d4.scope - libcontainer container 4f933c5a5117d3cc219527c6c2a3abb5df99502c009ad5e688e21b49da5b22d4.
Sep 13 00:03:01.754918 containerd[1478]: time="2025-09-13T00:03:01.754836014Z" level=info msg="StartContainer for \"4f933c5a5117d3cc219527c6c2a3abb5df99502c009ad5e688e21b49da5b22d4\" returns successfully"
Sep 13 00:03:01.767176 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 13 00:03:01.767877 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 13 00:03:01.767957 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Sep 13 00:03:01.774206 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 13 00:03:01.774507 systemd[1]: cri-containerd-4f933c5a5117d3cc219527c6c2a3abb5df99502c009ad5e688e21b49da5b22d4.scope: Deactivated successfully.
Sep 13 00:03:01.806258 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 13 00:03:01.808089 containerd[1478]: time="2025-09-13T00:03:01.807835646Z" level=info msg="shim disconnected" id=4f933c5a5117d3cc219527c6c2a3abb5df99502c009ad5e688e21b49da5b22d4 namespace=k8s.io
Sep 13 00:03:01.808089 containerd[1478]: time="2025-09-13T00:03:01.807892246Z" level=warning msg="cleaning up after shim disconnected" id=4f933c5a5117d3cc219527c6c2a3abb5df99502c009ad5e688e21b49da5b22d4 namespace=k8s.io
Sep 13 00:03:01.808089 containerd[1478]: time="2025-09-13T00:03:01.807903646Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 13 00:03:02.281396 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-be7c605ac21187ce18ea1226bbc9f12c34368e298412d34702ab2d7de8262abc-rootfs.mount: Deactivated successfully.
Sep 13 00:03:02.682986 containerd[1478]: time="2025-09-13T00:03:02.682752788Z" level=info msg="CreateContainer within sandbox \"b3da729a12d13119d38fee13c310415fefd1acba371871edee943161b3a408c7\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 13 00:03:02.706743 containerd[1478]: time="2025-09-13T00:03:02.706585081Z" level=info msg="CreateContainer within sandbox \"b3da729a12d13119d38fee13c310415fefd1acba371871edee943161b3a408c7\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"c62e0b4dc10d8ada86f8c1dd5bc3453b42934598dcdbcbb4268fcb7403bb340d\""
Sep 13 00:03:02.709386 containerd[1478]: time="2025-09-13T00:03:02.709258363Z" level=info msg="StartContainer for \"c62e0b4dc10d8ada86f8c1dd5bc3453b42934598dcdbcbb4268fcb7403bb340d\""
Sep 13 00:03:02.742874 systemd[1]: Started cri-containerd-c62e0b4dc10d8ada86f8c1dd5bc3453b42934598dcdbcbb4268fcb7403bb340d.scope - libcontainer container c62e0b4dc10d8ada86f8c1dd5bc3453b42934598dcdbcbb4268fcb7403bb340d.
Sep 13 00:03:02.774265 containerd[1478]: time="2025-09-13T00:03:02.774188119Z" level=info msg="StartContainer for \"c62e0b4dc10d8ada86f8c1dd5bc3453b42934598dcdbcbb4268fcb7403bb340d\" returns successfully"
Sep 13 00:03:02.779871 systemd[1]: cri-containerd-c62e0b4dc10d8ada86f8c1dd5bc3453b42934598dcdbcbb4268fcb7403bb340d.scope: Deactivated successfully.
Sep 13 00:03:02.816249 containerd[1478]: time="2025-09-13T00:03:02.816040863Z" level=info msg="shim disconnected" id=c62e0b4dc10d8ada86f8c1dd5bc3453b42934598dcdbcbb4268fcb7403bb340d namespace=k8s.io
Sep 13 00:03:02.816249 containerd[1478]: time="2025-09-13T00:03:02.816150463Z" level=warning msg="cleaning up after shim disconnected" id=c62e0b4dc10d8ada86f8c1dd5bc3453b42934598dcdbcbb4268fcb7403bb340d namespace=k8s.io
Sep 13 00:03:02.816249 containerd[1478]: time="2025-09-13T00:03:02.816161983Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 13 00:03:03.280833 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c62e0b4dc10d8ada86f8c1dd5bc3453b42934598dcdbcbb4268fcb7403bb340d-rootfs.mount: Deactivated successfully.
Sep 13 00:03:03.687649 containerd[1478]: time="2025-09-13T00:03:03.687584611Z" level=info msg="CreateContainer within sandbox \"b3da729a12d13119d38fee13c310415fefd1acba371871edee943161b3a408c7\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 13 00:03:03.708597 containerd[1478]: time="2025-09-13T00:03:03.706967221Z" level=info msg="CreateContainer within sandbox \"b3da729a12d13119d38fee13c310415fefd1acba371871edee943161b3a408c7\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"f80f5b4d649e409bd10793da5881fcc152080facd8ff98a5f090542c1293db66\""
Sep 13 00:03:03.710074 containerd[1478]: time="2025-09-13T00:03:03.709919743Z" level=info msg="StartContainer for \"f80f5b4d649e409bd10793da5881fcc152080facd8ff98a5f090542c1293db66\""
Sep 13 00:03:03.769763 systemd[1]: Started cri-containerd-f80f5b4d649e409bd10793da5881fcc152080facd8ff98a5f090542c1293db66.scope - libcontainer container f80f5b4d649e409bd10793da5881fcc152080facd8ff98a5f090542c1293db66.
Sep 13 00:03:03.811928 systemd[1]: cri-containerd-f80f5b4d649e409bd10793da5881fcc152080facd8ff98a5f090542c1293db66.scope: Deactivated successfully.
Sep 13 00:03:03.816183 containerd[1478]: time="2025-09-13T00:03:03.815498998Z" level=info msg="StartContainer for \"f80f5b4d649e409bd10793da5881fcc152080facd8ff98a5f090542c1293db66\" returns successfully"
Sep 13 00:03:03.839107 containerd[1478]: time="2025-09-13T00:03:03.839012931Z" level=info msg="shim disconnected" id=f80f5b4d649e409bd10793da5881fcc152080facd8ff98a5f090542c1293db66 namespace=k8s.io
Sep 13 00:03:03.839521 containerd[1478]: time="2025-09-13T00:03:03.839346771Z" level=warning msg="cleaning up after shim disconnected" id=f80f5b4d649e409bd10793da5881fcc152080facd8ff98a5f090542c1293db66 namespace=k8s.io
Sep 13 00:03:03.839521 containerd[1478]: time="2025-09-13T00:03:03.839365011Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 13 00:03:04.280416 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f80f5b4d649e409bd10793da5881fcc152080facd8ff98a5f090542c1293db66-rootfs.mount: Deactivated successfully.
Sep 13 00:03:04.696844 containerd[1478]: time="2025-09-13T00:03:04.696785712Z" level=info msg="CreateContainer within sandbox \"b3da729a12d13119d38fee13c310415fefd1acba371871edee943161b3a408c7\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 13 00:03:04.725606 containerd[1478]: time="2025-09-13T00:03:04.723769421Z" level=info msg="CreateContainer within sandbox \"b3da729a12d13119d38fee13c310415fefd1acba371871edee943161b3a408c7\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"1115113d729d38f1b49252c6cfcb162ee90599a47f5a5fa16f82698ba59865f3\""
Sep 13 00:03:04.725606 containerd[1478]: time="2025-09-13T00:03:04.724899424Z" level=info msg="StartContainer for \"1115113d729d38f1b49252c6cfcb162ee90599a47f5a5fa16f82698ba59865f3\""
Sep 13 00:03:04.755765 systemd[1]: Started cri-containerd-1115113d729d38f1b49252c6cfcb162ee90599a47f5a5fa16f82698ba59865f3.scope - libcontainer container 1115113d729d38f1b49252c6cfcb162ee90599a47f5a5fa16f82698ba59865f3.
Sep 13 00:03:04.785581 containerd[1478]: time="2025-09-13T00:03:04.785384860Z" level=info msg="StartContainer for \"1115113d729d38f1b49252c6cfcb162ee90599a47f5a5fa16f82698ba59865f3\" returns successfully"
Sep 13 00:03:04.863214 kubelet[2590]: I0913 00:03:04.860322 2590 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
Sep 13 00:03:04.923733 systemd[1]: Created slice kubepods-burstable-pod7de4f01f_3794_41cc_99dc_7d51955ddcc8.slice - libcontainer container kubepods-burstable-pod7de4f01f_3794_41cc_99dc_7d51955ddcc8.slice.
Sep 13 00:03:04.933738 systemd[1]: Created slice kubepods-burstable-pod231a25e0_ad37_43e9_9703_b8494d7d4781.slice - libcontainer container kubepods-burstable-pod231a25e0_ad37_43e9_9703_b8494d7d4781.slice.
Sep 13 00:03:05.008521 kubelet[2590]: I0913 00:03:05.008373 2590 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7r5wr\" (UniqueName: \"kubernetes.io/projected/231a25e0-ad37-43e9-9703-b8494d7d4781-kube-api-access-7r5wr\") pod \"coredns-7c65d6cfc9-vdzvp\" (UID: \"231a25e0-ad37-43e9-9703-b8494d7d4781\") " pod="kube-system/coredns-7c65d6cfc9-vdzvp"
Sep 13 00:03:05.009319 kubelet[2590]: I0913 00:03:05.009159 2590 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ffzq\" (UniqueName: \"kubernetes.io/projected/7de4f01f-3794-41cc-99dc-7d51955ddcc8-kube-api-access-6ffzq\") pod \"coredns-7c65d6cfc9-5hk26\" (UID: \"7de4f01f-3794-41cc-99dc-7d51955ddcc8\") " pod="kube-system/coredns-7c65d6cfc9-5hk26"
Sep 13 00:03:05.009319 kubelet[2590]: I0913 00:03:05.009226 2590 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/231a25e0-ad37-43e9-9703-b8494d7d4781-config-volume\") pod \"coredns-7c65d6cfc9-vdzvp\" (UID: \"231a25e0-ad37-43e9-9703-b8494d7d4781\") " pod="kube-system/coredns-7c65d6cfc9-vdzvp"
Sep 13 00:03:05.009319 kubelet[2590]: I0913 00:03:05.009255 2590 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7de4f01f-3794-41cc-99dc-7d51955ddcc8-config-volume\") pod \"coredns-7c65d6cfc9-5hk26\" (UID: \"7de4f01f-3794-41cc-99dc-7d51955ddcc8\") " pod="kube-system/coredns-7c65d6cfc9-5hk26"
Sep 13 00:03:05.233012 containerd[1478]: time="2025-09-13T00:03:05.232565300Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-5hk26,Uid:7de4f01f-3794-41cc-99dc-7d51955ddcc8,Namespace:kube-system,Attempt:0,}"
Sep 13 00:03:05.239576 containerd[1478]: time="2025-09-13T00:03:05.239450469Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-vdzvp,Uid:231a25e0-ad37-43e9-9703-b8494d7d4781,Namespace:kube-system,Attempt:0,}"
Sep 13 00:03:06.990235 systemd-networkd[1377]: cilium_host: Link UP
Sep 13 00:03:06.991831 systemd-networkd[1377]: cilium_net: Link UP
Sep 13 00:03:06.994312 systemd-networkd[1377]: cilium_net: Gained carrier
Sep 13 00:03:06.996793 systemd-networkd[1377]: cilium_host: Gained carrier
Sep 13 00:03:07.123865 systemd-networkd[1377]: cilium_vxlan: Link UP
Sep 13 00:03:07.123873 systemd-networkd[1377]: cilium_vxlan: Gained carrier
Sep 13 00:03:07.380759 systemd-networkd[1377]: cilium_net: Gained IPv6LL
Sep 13 00:03:07.411691 systemd-networkd[1377]: cilium_host: Gained IPv6LL
Sep 13 00:03:07.425624 kernel: NET: Registered PF_ALG protocol family
Sep 13 00:03:08.176779 systemd-networkd[1377]: lxc_health: Link UP
Sep 13 00:03:08.180211 systemd-networkd[1377]: lxc_health: Gained carrier
Sep 13 00:03:08.330178 systemd-networkd[1377]: lxc72dcfe25ecf3: Link UP
Sep 13 00:03:08.335512 systemd-networkd[1377]: lxcff74abb35e79: Link UP
Sep 13 00:03:08.340600 kernel: eth0: renamed from tmp2af3b
Sep 13 00:03:08.345037 kernel: eth0: renamed from tmp4a294
Sep 13 00:03:08.348612 systemd-networkd[1377]: lxc72dcfe25ecf3: Gained carrier
Sep 13 00:03:08.353575 systemd-networkd[1377]: lxcff74abb35e79: Gained carrier
Sep 13 00:03:08.366885 systemd-networkd[1377]: cilium_vxlan: Gained IPv6LL
Sep 13 00:03:09.273863 kubelet[2590]: I0913 00:03:09.273786 2590 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-vjgsk" podStartSLOduration=10.334726201 podStartE2EDuration="18.273765984s" podCreationTimestamp="2025-09-13 00:02:51 +0000 UTC" firstStartedPulling="2025-09-13 00:02:53.326747457 +0000 UTC m=+6.868733187" lastFinishedPulling="2025-09-13 00:03:01.26578728 +0000 UTC m=+14.807772970" observedRunningTime="2025-09-13 00:03:05.722174304 +0000 UTC m=+19.264160114" watchObservedRunningTime="2025-09-13 00:03:09.273765984 +0000 UTC m=+22.815751714"
Sep 13 00:03:09.901205 systemd-networkd[1377]: lxc_health: Gained IPv6LL
Sep 13 00:03:09.901487 systemd-networkd[1377]: lxcff74abb35e79: Gained IPv6LL
Sep 13 00:03:10.027787 systemd-networkd[1377]: lxc72dcfe25ecf3: Gained IPv6LL
Sep 13 00:03:10.974996 kubelet[2590]: I0913 00:03:10.973779 2590 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 13 00:03:12.423887 containerd[1478]: time="2025-09-13T00:03:12.423744649Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 00:03:12.423887 containerd[1478]: time="2025-09-13T00:03:12.423819970Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 00:03:12.423887 containerd[1478]: time="2025-09-13T00:03:12.423833210Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:03:12.425066 containerd[1478]: time="2025-09-13T00:03:12.424889306Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:03:12.446563 containerd[1478]: time="2025-09-13T00:03:12.446225995Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 00:03:12.446563 containerd[1478]: time="2025-09-13T00:03:12.446296836Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 00:03:12.446563 containerd[1478]: time="2025-09-13T00:03:12.446312796Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:03:12.446563 containerd[1478]: time="2025-09-13T00:03:12.446399238Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:03:12.476944 systemd[1]: Started cri-containerd-4a294589d52174f3454c510cd6e0d6c1399f90245e9476fc8a940ce17965bfe5.scope - libcontainer container 4a294589d52174f3454c510cd6e0d6c1399f90245e9476fc8a940ce17965bfe5.
Sep 13 00:03:12.497820 systemd[1]: Started cri-containerd-2af3b5d8369a444fb445ab3cbb911dc3181bb3268324097dcb734bfdde9ff65c.scope - libcontainer container 2af3b5d8369a444fb445ab3cbb911dc3181bb3268324097dcb734bfdde9ff65c.
Sep 13 00:03:12.561706 containerd[1478]: time="2025-09-13T00:03:12.561533671Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-5hk26,Uid:7de4f01f-3794-41cc-99dc-7d51955ddcc8,Namespace:kube-system,Attempt:0,} returns sandbox id \"4a294589d52174f3454c510cd6e0d6c1399f90245e9476fc8a940ce17965bfe5\""
Sep 13 00:03:12.572500 containerd[1478]: time="2025-09-13T00:03:12.572231976Z" level=info msg="CreateContainer within sandbox \"4a294589d52174f3454c510cd6e0d6c1399f90245e9476fc8a940ce17965bfe5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 13 00:03:12.590630 containerd[1478]: time="2025-09-13T00:03:12.590574458Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-vdzvp,Uid:231a25e0-ad37-43e9-9703-b8494d7d4781,Namespace:kube-system,Attempt:0,} returns sandbox id \"2af3b5d8369a444fb445ab3cbb911dc3181bb3268324097dcb734bfdde9ff65c\""
Sep 13 00:03:12.593588 containerd[1478]: time="2025-09-13T00:03:12.593281060Z" level=info msg="CreateContainer within sandbox \"4a294589d52174f3454c510cd6e0d6c1399f90245e9476fc8a940ce17965bfe5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0c8f9277ffb42e01871b8427c670ed232631c3325ae83d3c9b5908291b480dad\""
Sep 13 00:03:12.596587 containerd[1478]: time="2025-09-13T00:03:12.595579055Z" level=info msg="StartContainer for \"0c8f9277ffb42e01871b8427c670ed232631c3325ae83d3c9b5908291b480dad\""
Sep 13 00:03:12.596726 containerd[1478]: time="2025-09-13T00:03:12.595587055Z" level=info msg="CreateContainer within sandbox \"2af3b5d8369a444fb445ab3cbb911dc3181bb3268324097dcb734bfdde9ff65c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 13 00:03:12.619271 containerd[1478]: time="2025-09-13T00:03:12.619211939Z" level=info msg="CreateContainer within sandbox \"2af3b5d8369a444fb445ab3cbb911dc3181bb3268324097dcb734bfdde9ff65c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5452bc3ce4df16c0eb850080574e16b8b2164d8d307fed8bcf789c46aca6a2cc\""
Sep 13 00:03:12.621614 containerd[1478]: time="2025-09-13T00:03:12.621572136Z" level=info msg="StartContainer for \"5452bc3ce4df16c0eb850080574e16b8b2164d8d307fed8bcf789c46aca6a2cc\""
Sep 13 00:03:12.639003 systemd[1]: Started cri-containerd-0c8f9277ffb42e01871b8427c670ed232631c3325ae83d3c9b5908291b480dad.scope - libcontainer container 0c8f9277ffb42e01871b8427c670ed232631c3325ae83d3c9b5908291b480dad.
Sep 13 00:03:12.662838 systemd[1]: Started cri-containerd-5452bc3ce4df16c0eb850080574e16b8b2164d8d307fed8bcf789c46aca6a2cc.scope - libcontainer container 5452bc3ce4df16c0eb850080574e16b8b2164d8d307fed8bcf789c46aca6a2cc.
Sep 13 00:03:12.686757 containerd[1478]: time="2025-09-13T00:03:12.685877646Z" level=info msg="StartContainer for \"0c8f9277ffb42e01871b8427c670ed232631c3325ae83d3c9b5908291b480dad\" returns successfully"
Sep 13 00:03:12.708870 containerd[1478]: time="2025-09-13T00:03:12.708771959Z" level=info msg="StartContainer for \"5452bc3ce4df16c0eb850080574e16b8b2164d8d307fed8bcf789c46aca6a2cc\" returns successfully"
Sep 13 00:03:12.772208 kubelet[2590]: I0913 00:03:12.772127 2590 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-5hk26" podStartSLOduration=21.772090094 podStartE2EDuration="21.772090094s" podCreationTimestamp="2025-09-13 00:02:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:03:12.742828283 +0000 UTC m=+26.284814013" watchObservedRunningTime="2025-09-13 00:03:12.772090094 +0000 UTC m=+26.314075824"
Sep 13 00:03:13.749497 kubelet[2590]: I0913 00:03:13.749398 2590 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-vdzvp" podStartSLOduration=22.749367514 podStartE2EDuration="22.749367514s" podCreationTimestamp="2025-09-13 00:02:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:03:12.774995378 +0000 UTC m=+26.316981108" watchObservedRunningTime="2025-09-13 00:03:13.749367514 +0000 UTC m=+27.291353244"
Sep 13 00:04:31.387750 update_engine[1458]: I20250913 00:04:31.386930 1458 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Sep 13 00:04:31.387750 update_engine[1458]: I20250913 00:04:31.387123 1458 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Sep 13 00:04:31.387750 update_engine[1458]: I20250913 00:04:31.387491 1458 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Sep 13 00:04:31.388432 update_engine[1458]: I20250913 00:04:31.388314 1458 omaha_request_params.cc:62] Current group set to lts
Sep 13 00:04:31.388491 update_engine[1458]: I20250913 00:04:31.388461 1458 update_attempter.cc:499] Already updated boot flags. Skipping.
Sep 13 00:04:31.388491 update_engine[1458]: I20250913 00:04:31.388478 1458 update_attempter.cc:643] Scheduling an action processor start.
Sep 13 00:04:31.388587 update_engine[1458]: I20250913 00:04:31.388504 1458 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Sep 13 00:04:31.388587 update_engine[1458]: I20250913 00:04:31.388576 1458 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Sep 13 00:04:31.389074 update_engine[1458]: I20250913 00:04:31.388667 1458 omaha_request_action.cc:271] Posting an Omaha request to disabled
Sep 13 00:04:31.389074 update_engine[1458]: I20250913 00:04:31.388691 1458 omaha_request_action.cc:272] Request:
Sep 13 00:04:31.389074 update_engine[1458]:
Sep 13 00:04:31.389074 update_engine[1458]:
Sep 13 00:04:31.389074 update_engine[1458]:
Sep 13 00:04:31.389074 update_engine[1458]:
Sep 13 00:04:31.389074 update_engine[1458]:
Sep 13 00:04:31.389074 update_engine[1458]:
Sep 13 00:04:31.389074 update_engine[1458]:
Sep 13 00:04:31.389074 update_engine[1458]:
Sep 13 00:04:31.389074 update_engine[1458]: I20250913 00:04:31.388705 1458 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Sep 13 00:04:31.389714 locksmithd[1504]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Sep 13 00:04:31.390872 update_engine[1458]: I20250913 00:04:31.390424 1458 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Sep 13 00:04:31.390996 update_engine[1458]: I20250913 00:04:31.390873 1458 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Sep 13 00:04:31.393244 update_engine[1458]: E20250913 00:04:31.393172 1458 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Sep 13 00:04:31.393344 update_engine[1458]: I20250913 00:04:31.393280 1458 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Sep 13 00:04:41.326275 update_engine[1458]: I20250913 00:04:41.326153 1458 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Sep 13 00:04:41.326995 update_engine[1458]: I20250913 00:04:41.326676 1458 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Sep 13 00:04:41.327122 update_engine[1458]: I20250913 00:04:41.327049 1458 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Sep 13 00:04:41.328099 update_engine[1458]: E20250913 00:04:41.327999 1458 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Sep 13 00:04:41.328216 update_engine[1458]: I20250913 00:04:41.328105 1458 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Sep 13 00:04:51.326339 update_engine[1458]: I20250913 00:04:51.326215 1458 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Sep 13 00:04:51.326951 update_engine[1458]: I20250913 00:04:51.326632 1458 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Sep 13 00:04:51.327035 update_engine[1458]: I20250913 00:04:51.326959 1458 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Sep 13 00:04:51.328220 update_engine[1458]: E20250913 00:04:51.328157 1458 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Sep 13 00:04:51.328302 update_engine[1458]: I20250913 00:04:51.328251 1458 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Sep 13 00:05:01.323814 update_engine[1458]: I20250913 00:05:01.323642 1458 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Sep 13 00:05:01.324435 update_engine[1458]: I20250913 00:05:01.324152 1458 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Sep 13 00:05:01.324538 update_engine[1458]: I20250913 00:05:01.324491 1458 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Sep 13 00:05:01.325363 update_engine[1458]: E20250913 00:05:01.325271 1458 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Sep 13 00:05:01.325480 update_engine[1458]: I20250913 00:05:01.325376 1458 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Sep 13 00:05:01.325480 update_engine[1458]: I20250913 00:05:01.325399 1458 omaha_request_action.cc:617] Omaha request response:
Sep 13 00:05:01.325602 update_engine[1458]: E20250913 00:05:01.325520 1458 omaha_request_action.cc:636] Omaha request network transfer failed.
Sep 13 00:05:01.325602 update_engine[1458]: I20250913 00:05:01.325585 1458 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Sep 13 00:05:01.325662 update_engine[1458]: I20250913 00:05:01.325605 1458 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Sep 13 00:05:01.325662 update_engine[1458]: I20250913 00:05:01.325618 1458 update_attempter.cc:306] Processing Done.
Sep 13 00:05:01.325662 update_engine[1458]: E20250913 00:05:01.325642 1458 update_attempter.cc:619] Update failed.
Sep 13 00:05:01.325662 update_engine[1458]: I20250913 00:05:01.325654 1458 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Sep 13 00:05:01.325796 update_engine[1458]: I20250913 00:05:01.325665 1458 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Sep 13 00:05:01.325796 update_engine[1458]: I20250913 00:05:01.325677 1458 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Sep 13 00:05:01.325851 update_engine[1458]: I20250913 00:05:01.325819 1458 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Sep 13 00:05:01.325883 update_engine[1458]: I20250913 00:05:01.325868 1458 omaha_request_action.cc:271] Posting an Omaha request to disabled
Sep 13 00:05:01.325911 update_engine[1458]: I20250913 00:05:01.325881 1458 omaha_request_action.cc:272] Request:
Sep 13 00:05:01.325911 update_engine[1458]:
Sep 13 00:05:01.325911 update_engine[1458]:
Sep 13 00:05:01.325911 update_engine[1458]:
Sep 13 00:05:01.325911 update_engine[1458]:
Sep 13 00:05:01.325911 update_engine[1458]:
Sep 13 00:05:01.325911 update_engine[1458]:
Sep 13 00:05:01.325911 update_engine[1458]: I20250913 00:05:01.325894 1458 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Sep 13 00:05:01.326260 update_engine[1458]: I20250913 00:05:01.326149 1458 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Sep 13 00:05:01.326531 locksmithd[1504]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Sep 13 00:05:01.326858 update_engine[1458]: I20250913 00:05:01.326509 1458 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Sep 13 00:05:01.327349 update_engine[1458]: E20250913 00:05:01.327273 1458 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Sep 13 00:05:01.327407 update_engine[1458]: I20250913 00:05:01.327374 1458 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Sep 13 00:05:01.327407 update_engine[1458]: I20250913 00:05:01.327397 1458 omaha_request_action.cc:617] Omaha request response:
Sep 13 00:05:01.327469 update_engine[1458]: I20250913 00:05:01.327411 1458 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Sep 13 00:05:01.327469 update_engine[1458]: I20250913 00:05:01.327425 1458 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Sep 13 00:05:01.327469 update_engine[1458]: I20250913 00:05:01.327436 1458 update_attempter.cc:306] Processing Done.
Sep 13 00:05:01.327469 update_engine[1458]: I20250913 00:05:01.327450 1458 update_attempter.cc:310] Error event sent.
Sep 13 00:05:01.327617 update_engine[1458]: I20250913 00:05:01.327468 1458 update_check_scheduler.cc:74] Next update check in 47m46s
Sep 13 00:05:01.327913 locksmithd[1504]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Sep 13 00:05:11.236412 systemd[1]: Started sshd@7-91.99.150.175:22-147.75.109.163:53298.service - OpenSSH per-connection server daemon (147.75.109.163:53298).
Sep 13 00:05:12.229643 sshd[3980]: Accepted publickey for core from 147.75.109.163 port 53298 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM
Sep 13 00:05:12.232434 sshd[3980]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:05:12.238829 systemd-logind[1457]: New session 8 of user core.
Sep 13 00:05:12.249972 systemd[1]: Started session-8.scope - Session 8 of User core.
Sep 13 00:05:13.015405 sshd[3980]: pam_unix(sshd:session): session closed for user core
Sep 13 00:05:13.021774 systemd[1]: sshd@7-91.99.150.175:22-147.75.109.163:53298.service: Deactivated successfully.
Sep 13 00:05:13.025334 systemd[1]: session-8.scope: Deactivated successfully.
Sep 13 00:05:13.026515 systemd-logind[1457]: Session 8 logged out. Waiting for processes to exit.
Sep 13 00:05:13.028320 systemd-logind[1457]: Removed session 8.
Sep 13 00:05:18.191106 systemd[1]: Started sshd@8-91.99.150.175:22-147.75.109.163:53306.service - OpenSSH per-connection server daemon (147.75.109.163:53306).
Sep 13 00:05:19.176076 sshd[3994]: Accepted publickey for core from 147.75.109.163 port 53306 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM
Sep 13 00:05:19.178682 sshd[3994]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:05:19.183552 systemd-logind[1457]: New session 9 of user core.
Sep 13 00:05:19.196992 systemd[1]: Started session-9.scope - Session 9 of User core.
Sep 13 00:05:19.932002 sshd[3994]: pam_unix(sshd:session): session closed for user core
Sep 13 00:05:19.937190 systemd[1]: sshd@8-91.99.150.175:22-147.75.109.163:53306.service: Deactivated successfully.
Sep 13 00:05:19.940870 systemd[1]: session-9.scope: Deactivated successfully.
Sep 13 00:05:19.942286 systemd-logind[1457]: Session 9 logged out. Waiting for processes to exit.
Sep 13 00:05:19.943319 systemd-logind[1457]: Removed session 9.
Sep 13 00:05:25.104350 systemd[1]: Started sshd@9-91.99.150.175:22-147.75.109.163:59340.service - OpenSSH per-connection server daemon (147.75.109.163:59340).
Sep 13 00:05:26.091259 sshd[4010]: Accepted publickey for core from 147.75.109.163 port 59340 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM
Sep 13 00:05:26.093571 sshd[4010]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:05:26.099439 systemd-logind[1457]: New session 10 of user core.
Sep 13 00:05:26.104023 systemd[1]: Started session-10.scope - Session 10 of User core.
Sep 13 00:05:26.847875 sshd[4010]: pam_unix(sshd:session): session closed for user core
Sep 13 00:05:26.853759 systemd-logind[1457]: Session 10 logged out. Waiting for processes to exit.
Sep 13 00:05:26.854305 systemd[1]: sshd@9-91.99.150.175:22-147.75.109.163:59340.service: Deactivated successfully.
Sep 13 00:05:26.856748 systemd[1]: session-10.scope: Deactivated successfully.
Sep 13 00:05:26.859760 systemd-logind[1457]: Removed session 10.
Sep 13 00:05:27.026222 systemd[1]: Started sshd@10-91.99.150.175:22-147.75.109.163:59356.service - OpenSSH per-connection server daemon (147.75.109.163:59356).
Sep 13 00:05:28.005483 sshd[4024]: Accepted publickey for core from 147.75.109.163 port 59356 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM
Sep 13 00:05:28.007326 sshd[4024]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:05:28.012287 systemd-logind[1457]: New session 11 of user core.
Sep 13 00:05:28.020114 systemd[1]: Started session-11.scope - Session 11 of User core.
Sep 13 00:05:28.815057 sshd[4024]: pam_unix(sshd:session): session closed for user core
Sep 13 00:05:28.819283 systemd[1]: sshd@10-91.99.150.175:22-147.75.109.163:59356.service: Deactivated successfully.
Sep 13 00:05:28.821360 systemd[1]: session-11.scope: Deactivated successfully.
Sep 13 00:05:28.823502 systemd-logind[1457]: Session 11 logged out. Waiting for processes to exit.
Sep 13 00:05:28.825356 systemd-logind[1457]: Removed session 11.
Sep 13 00:05:29.002283 systemd[1]: Started sshd@11-91.99.150.175:22-147.75.109.163:59372.service - OpenSSH per-connection server daemon (147.75.109.163:59372).
Sep 13 00:05:30.041899 sshd[4035]: Accepted publickey for core from 147.75.109.163 port 59372 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM
Sep 13 00:05:30.043999 sshd[4035]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:05:30.048727 systemd-logind[1457]: New session 12 of user core.
Sep 13 00:05:30.054906 systemd[1]: Started session-12.scope - Session 12 of User core.
Sep 13 00:05:30.838908 sshd[4035]: pam_unix(sshd:session): session closed for user core
Sep 13 00:05:30.844246 systemd[1]: sshd@11-91.99.150.175:22-147.75.109.163:59372.service: Deactivated successfully.
Sep 13 00:05:30.846905 systemd[1]: session-12.scope: Deactivated successfully.
Sep 13 00:05:30.847693 systemd-logind[1457]: Session 12 logged out. Waiting for processes to exit.
Sep 13 00:05:30.848970 systemd-logind[1457]: Removed session 12.
Sep 13 00:05:36.022111 systemd[1]: Started sshd@12-91.99.150.175:22-147.75.109.163:47804.service - OpenSSH per-connection server daemon (147.75.109.163:47804).
Sep 13 00:05:37.018758 sshd[4048]: Accepted publickey for core from 147.75.109.163 port 47804 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM
Sep 13 00:05:37.022275 sshd[4048]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:05:37.033915 systemd-logind[1457]: New session 13 of user core.
Sep 13 00:05:37.037840 systemd[1]: Started session-13.scope - Session 13 of User core.
Sep 13 00:05:37.807146 sshd[4048]: pam_unix(sshd:session): session closed for user core
Sep 13 00:05:37.816467 systemd[1]: sshd@12-91.99.150.175:22-147.75.109.163:47804.service: Deactivated successfully.
Sep 13 00:05:37.821439 systemd[1]: session-13.scope: Deactivated successfully.
Sep 13 00:05:37.822839 systemd-logind[1457]: Session 13 logged out. Waiting for processes to exit.
Sep 13 00:05:37.824125 systemd-logind[1457]: Removed session 13.
Sep 13 00:05:42.993868 systemd[1]: Started sshd@13-91.99.150.175:22-147.75.109.163:48942.service - OpenSSH per-connection server daemon (147.75.109.163:48942).
Sep 13 00:05:44.036529 sshd[4060]: Accepted publickey for core from 147.75.109.163 port 48942 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM
Sep 13 00:05:44.039331 sshd[4060]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:05:44.048681 systemd-logind[1457]: New session 14 of user core.
Sep 13 00:05:44.053781 systemd[1]: Started session-14.scope - Session 14 of User core.
Sep 13 00:05:44.830379 sshd[4060]: pam_unix(sshd:session): session closed for user core
Sep 13 00:05:44.836487 systemd[1]: sshd@13-91.99.150.175:22-147.75.109.163:48942.service: Deactivated successfully.
Sep 13 00:05:44.839969 systemd[1]: session-14.scope: Deactivated successfully.
Sep 13 00:05:44.842439 systemd-logind[1457]: Session 14 logged out. Waiting for processes to exit.
Sep 13 00:05:44.844190 systemd-logind[1457]: Removed session 14.
Sep 13 00:05:45.011096 systemd[1]: Started sshd@14-91.99.150.175:22-147.75.109.163:48954.service - OpenSSH per-connection server daemon (147.75.109.163:48954).
Sep 13 00:05:45.994746 sshd[4072]: Accepted publickey for core from 147.75.109.163 port 48954 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM
Sep 13 00:05:45.996827 sshd[4072]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:05:46.004635 systemd-logind[1457]: New session 15 of user core.
Sep 13 00:05:46.014855 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 13 00:05:46.817119 sshd[4072]: pam_unix(sshd:session): session closed for user core
Sep 13 00:05:46.823473 systemd[1]: sshd@14-91.99.150.175:22-147.75.109.163:48954.service: Deactivated successfully.
Sep 13 00:05:46.827397 systemd[1]: session-15.scope: Deactivated successfully.
Sep 13 00:05:46.830355 systemd-logind[1457]: Session 15 logged out. Waiting for processes to exit.
Sep 13 00:05:46.832759 systemd-logind[1457]: Removed session 15.
Sep 13 00:05:46.989040 systemd[1]: Started sshd@15-91.99.150.175:22-147.75.109.163:48968.service - OpenSSH per-connection server daemon (147.75.109.163:48968).
Sep 13 00:05:47.968505 sshd[4085]: Accepted publickey for core from 147.75.109.163 port 48968 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM
Sep 13 00:05:47.973203 sshd[4085]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:05:47.982449 systemd-logind[1457]: New session 16 of user core.
Sep 13 00:05:47.990427 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 13 00:05:50.082743 sshd[4085]: pam_unix(sshd:session): session closed for user core
Sep 13 00:05:50.096115 systemd[1]: sshd@15-91.99.150.175:22-147.75.109.163:48968.service: Deactivated successfully.
Sep 13 00:05:50.098860 systemd[1]: session-16.scope: Deactivated successfully.
Sep 13 00:05:50.103899 systemd-logind[1457]: Session 16 logged out. Waiting for processes to exit.
Sep 13 00:05:50.111296 systemd[1]: Started sshd@16-91.99.150.175:22-185.156.73.233:46400.service - OpenSSH per-connection server daemon (185.156.73.233:46400).
Sep 13 00:05:50.115306 systemd-logind[1457]: Removed session 16.
Sep 13 00:05:50.261992 systemd[1]: Started sshd@17-91.99.150.175:22-147.75.109.163:36510.service - OpenSSH per-connection server daemon (147.75.109.163:36510).
Sep 13 00:05:51.284079 sshd[4104]: Connection closed by authenticating user root 185.156.73.233 port 46400 [preauth]
Sep 13 00:05:51.285735 sshd[4106]: Accepted publickey for core from 147.75.109.163 port 36510 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM
Sep 13 00:05:51.287880 systemd[1]: sshd@16-91.99.150.175:22-185.156.73.233:46400.service: Deactivated successfully.
Sep 13 00:05:51.291457 sshd[4106]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:05:51.298935 systemd-logind[1457]: New session 17 of user core.
Sep 13 00:05:51.304223 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 13 00:05:52.196701 sshd[4106]: pam_unix(sshd:session): session closed for user core
Sep 13 00:05:52.201418 systemd[1]: sshd@17-91.99.150.175:22-147.75.109.163:36510.service: Deactivated successfully.
Sep 13 00:05:52.205127 systemd[1]: session-17.scope: Deactivated successfully.
Sep 13 00:05:52.209128 systemd-logind[1457]: Session 17 logged out. Waiting for processes to exit.
Sep 13 00:05:52.211666 systemd-logind[1457]: Removed session 17.
Sep 13 00:05:52.383169 systemd[1]: Started sshd@18-91.99.150.175:22-147.75.109.163:36524.service - OpenSSH per-connection server daemon (147.75.109.163:36524).
Sep 13 00:05:53.378358 sshd[4123]: Accepted publickey for core from 147.75.109.163 port 36524 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM
Sep 13 00:05:53.377919 sshd[4123]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:05:53.391069 systemd-logind[1457]: New session 18 of user core.
Sep 13 00:05:53.402396 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 13 00:05:54.136498 sshd[4123]: pam_unix(sshd:session): session closed for user core
Sep 13 00:05:54.143723 systemd[1]: sshd@18-91.99.150.175:22-147.75.109.163:36524.service: Deactivated successfully.
Sep 13 00:05:54.146193 systemd[1]: session-18.scope: Deactivated successfully.
Sep 13 00:05:54.148507 systemd-logind[1457]: Session 18 logged out. Waiting for processes to exit.
Sep 13 00:05:54.151672 systemd-logind[1457]: Removed session 18.
Sep 13 00:05:59.310877 systemd[1]: Started sshd@19-91.99.150.175:22-147.75.109.163:36538.service - OpenSSH per-connection server daemon (147.75.109.163:36538).
Sep 13 00:06:00.307579 sshd[4139]: Accepted publickey for core from 147.75.109.163 port 36538 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM
Sep 13 00:06:00.309891 sshd[4139]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:06:00.318663 systemd-logind[1457]: New session 19 of user core.
Sep 13 00:06:00.324041 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 13 00:06:01.070005 sshd[4139]: pam_unix(sshd:session): session closed for user core
Sep 13 00:06:01.077149 systemd[1]: sshd@19-91.99.150.175:22-147.75.109.163:36538.service: Deactivated successfully.
Sep 13 00:06:01.082469 systemd[1]: session-19.scope: Deactivated successfully.
Sep 13 00:06:01.084669 systemd-logind[1457]: Session 19 logged out. Waiting for processes to exit.
Sep 13 00:06:01.086616 systemd-logind[1457]: Removed session 19.
Sep 13 00:06:06.265696 systemd[1]: Started sshd@20-91.99.150.175:22-147.75.109.163:51704.service - OpenSSH per-connection server daemon (147.75.109.163:51704).
Sep 13 00:06:07.281425 sshd[4152]: Accepted publickey for core from 147.75.109.163 port 51704 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM
Sep 13 00:06:07.285047 sshd[4152]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:06:07.295719 systemd-logind[1457]: New session 20 of user core.
Sep 13 00:06:07.301918 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 13 00:06:08.079772 sshd[4152]: pam_unix(sshd:session): session closed for user core
Sep 13 00:06:08.086327 systemd[1]: sshd@20-91.99.150.175:22-147.75.109.163:51704.service: Deactivated successfully.
Sep 13 00:06:08.089364 systemd[1]: session-20.scope: Deactivated successfully.
Sep 13 00:06:08.090506 systemd-logind[1457]: Session 20 logged out. Waiting for processes to exit.
Sep 13 00:06:08.093600 systemd-logind[1457]: Removed session 20.
Sep 13 00:06:08.259090 systemd[1]: Started sshd@21-91.99.150.175:22-147.75.109.163:51708.service - OpenSSH per-connection server daemon (147.75.109.163:51708).
Sep 13 00:06:09.247285 sshd[4165]: Accepted publickey for core from 147.75.109.163 port 51708 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM
Sep 13 00:06:09.250028 sshd[4165]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:06:09.256856 systemd-logind[1457]: New session 21 of user core.
Sep 13 00:06:09.264864 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 13 00:06:11.473664 containerd[1478]: time="2025-09-13T00:06:11.472459858Z" level=info msg="StopContainer for \"55e580bed9ae8c2764c9059a5f2cf07fa2b05aacfd5879170b5855c03702792b\" with timeout 30 (s)"
Sep 13 00:06:11.477694 containerd[1478]: time="2025-09-13T00:06:11.476306775Z" level=info msg="Stop container \"55e580bed9ae8c2764c9059a5f2cf07fa2b05aacfd5879170b5855c03702792b\" with signal terminated"
Sep 13 00:06:11.484844 systemd[1]: run-containerd-runc-k8s.io-1115113d729d38f1b49252c6cfcb162ee90599a47f5a5fa16f82698ba59865f3-runc.KfkvQW.mount: Deactivated successfully.
Sep 13 00:06:11.501231 systemd[1]: cri-containerd-55e580bed9ae8c2764c9059a5f2cf07fa2b05aacfd5879170b5855c03702792b.scope: Deactivated successfully.
Sep 13 00:06:11.506656 containerd[1478]: time="2025-09-13T00:06:11.506466302Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 13 00:06:11.517307 containerd[1478]: time="2025-09-13T00:06:11.517195164Z" level=info msg="StopContainer for \"1115113d729d38f1b49252c6cfcb162ee90599a47f5a5fa16f82698ba59865f3\" with timeout 2 (s)"
Sep 13 00:06:11.518197 containerd[1478]: time="2025-09-13T00:06:11.517789970Z" level=info msg="Stop container \"1115113d729d38f1b49252c6cfcb162ee90599a47f5a5fa16f82698ba59865f3\" with signal terminated"
Sep 13 00:06:11.530413 systemd-networkd[1377]: lxc_health: Link DOWN
Sep 13 00:06:11.530891 systemd-networkd[1377]: lxc_health: Lost carrier
Sep 13 00:06:11.555156 systemd[1]: cri-containerd-1115113d729d38f1b49252c6cfcb162ee90599a47f5a5fa16f82698ba59865f3.scope: Deactivated successfully.
Sep 13 00:06:11.557768 systemd[1]: cri-containerd-1115113d729d38f1b49252c6cfcb162ee90599a47f5a5fa16f82698ba59865f3.scope: Consumed 7.614s CPU time.
Sep 13 00:06:11.563826 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-55e580bed9ae8c2764c9059a5f2cf07fa2b05aacfd5879170b5855c03702792b-rootfs.mount: Deactivated successfully.
Sep 13 00:06:11.571445 containerd[1478]: time="2025-09-13T00:06:11.570844035Z" level=info msg="shim disconnected" id=55e580bed9ae8c2764c9059a5f2cf07fa2b05aacfd5879170b5855c03702792b namespace=k8s.io
Sep 13 00:06:11.571445 containerd[1478]: time="2025-09-13T00:06:11.571269519Z" level=warning msg="cleaning up after shim disconnected" id=55e580bed9ae8c2764c9059a5f2cf07fa2b05aacfd5879170b5855c03702792b namespace=k8s.io
Sep 13 00:06:11.571445 containerd[1478]: time="2025-09-13T00:06:11.571283119Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 13 00:06:11.595643 containerd[1478]: time="2025-09-13T00:06:11.595532830Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:06:11Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Sep 13 00:06:11.601492 containerd[1478]: time="2025-09-13T00:06:11.601386926Z" level=info msg="StopContainer for \"55e580bed9ae8c2764c9059a5f2cf07fa2b05aacfd5879170b5855c03702792b\" returns successfully"
Sep 13 00:06:11.604876 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1115113d729d38f1b49252c6cfcb162ee90599a47f5a5fa16f82698ba59865f3-rootfs.mount: Deactivated successfully.
Sep 13 00:06:11.610422 containerd[1478]: time="2025-09-13T00:06:11.610342731Z" level=info msg="StopPodSandbox for \"8e42e88f6b1df1318ffa5be2f9a6326c7e525bef6765c7cb2deee26f357edce3\""
Sep 13 00:06:11.610775 containerd[1478]: time="2025-09-13T00:06:11.610664694Z" level=info msg="Container to stop \"55e580bed9ae8c2764c9059a5f2cf07fa2b05aacfd5879170b5855c03702792b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 13 00:06:11.614435 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8e42e88f6b1df1318ffa5be2f9a6326c7e525bef6765c7cb2deee26f357edce3-shm.mount: Deactivated successfully.
Sep 13 00:06:11.622341 containerd[1478]: time="2025-09-13T00:06:11.622087723Z" level=info msg="shim disconnected" id=1115113d729d38f1b49252c6cfcb162ee90599a47f5a5fa16f82698ba59865f3 namespace=k8s.io
Sep 13 00:06:11.622341 containerd[1478]: time="2025-09-13T00:06:11.622154724Z" level=warning msg="cleaning up after shim disconnected" id=1115113d729d38f1b49252c6cfcb162ee90599a47f5a5fa16f82698ba59865f3 namespace=k8s.io
Sep 13 00:06:11.622341 containerd[1478]: time="2025-09-13T00:06:11.622167324Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 13 00:06:11.628472 systemd[1]: cri-containerd-8e42e88f6b1df1318ffa5be2f9a6326c7e525bef6765c7cb2deee26f357edce3.scope: Deactivated successfully.
Sep 13 00:06:11.651530 containerd[1478]: time="2025-09-13T00:06:11.651354642Z" level=info msg="StopContainer for \"1115113d729d38f1b49252c6cfcb162ee90599a47f5a5fa16f82698ba59865f3\" returns successfully"
Sep 13 00:06:11.652429 containerd[1478]: time="2025-09-13T00:06:11.652030248Z" level=info msg="StopPodSandbox for \"b3da729a12d13119d38fee13c310415fefd1acba371871edee943161b3a408c7\""
Sep 13 00:06:11.652429 containerd[1478]: time="2025-09-13T00:06:11.652078849Z" level=info msg="Container to stop \"1115113d729d38f1b49252c6cfcb162ee90599a47f5a5fa16f82698ba59865f3\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 13 00:06:11.652429 containerd[1478]: time="2025-09-13T00:06:11.652092609Z" level=info msg="Container to stop \"be7c605ac21187ce18ea1226bbc9f12c34368e298412d34702ab2d7de8262abc\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 13 00:06:11.652429 containerd[1478]: time="2025-09-13T00:06:11.652103369Z" level=info msg="Container to stop \"4f933c5a5117d3cc219527c6c2a3abb5df99502c009ad5e688e21b49da5b22d4\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 13 00:06:11.652429 containerd[1478]: time="2025-09-13T00:06:11.652113369Z" level=info msg="Container to stop \"c62e0b4dc10d8ada86f8c1dd5bc3453b42934598dcdbcbb4268fcb7403bb340d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 13 00:06:11.652429 containerd[1478]: time="2025-09-13T00:06:11.652122729Z" level=info msg="Container to stop \"f80f5b4d649e409bd10793da5881fcc152080facd8ff98a5f090542c1293db66\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 13 00:06:11.661187 systemd[1]: cri-containerd-b3da729a12d13119d38fee13c310415fefd1acba371871edee943161b3a408c7.scope: Deactivated successfully.
Sep 13 00:06:11.693028 containerd[1478]: time="2025-09-13T00:06:11.692928878Z" level=info msg="shim disconnected" id=8e42e88f6b1df1318ffa5be2f9a6326c7e525bef6765c7cb2deee26f357edce3 namespace=k8s.io
Sep 13 00:06:11.693028 containerd[1478]: time="2025-09-13T00:06:11.693019319Z" level=warning msg="cleaning up after shim disconnected" id=8e42e88f6b1df1318ffa5be2f9a6326c7e525bef6765c7cb2deee26f357edce3 namespace=k8s.io
Sep 13 00:06:11.693028 containerd[1478]: time="2025-09-13T00:06:11.693029879Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 13 00:06:11.705596 containerd[1478]: time="2025-09-13T00:06:11.705504637Z" level=info msg="shim disconnected" id=b3da729a12d13119d38fee13c310415fefd1acba371871edee943161b3a408c7 namespace=k8s.io
Sep 13 00:06:11.705892 containerd[1478]: time="2025-09-13T00:06:11.705866401Z" level=warning msg="cleaning up after shim disconnected" id=b3da729a12d13119d38fee13c310415fefd1acba371871edee943161b3a408c7 namespace=k8s.io
Sep 13 00:06:11.706061 containerd[1478]: time="2025-09-13T00:06:11.706039523Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 13 00:06:11.714197 containerd[1478]: time="2025-09-13T00:06:11.714139760Z" level=info msg="TearDown network for sandbox \"8e42e88f6b1df1318ffa5be2f9a6326c7e525bef6765c7cb2deee26f357edce3\" successfully"
Sep 13 00:06:11.714197 containerd[1478]: time="2025-09-13T00:06:11.714184160Z" level=info msg="StopPodSandbox for \"8e42e88f6b1df1318ffa5be2f9a6326c7e525bef6765c7cb2deee26f357edce3\" returns successfully"
Sep 13 00:06:11.735560 kubelet[2590]: E0913 00:06:11.734395 2590 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Sep 13 00:06:11.741794 containerd[1478]: time="2025-09-13T00:06:11.739418360Z" level=info msg="TearDown network for sandbox \"b3da729a12d13119d38fee13c310415fefd1acba371871edee943161b3a408c7\" successfully"
Sep 13 00:06:11.741794 containerd[1478]: time="2025-09-13T00:06:11.739478561Z" level=info msg="StopPodSandbox for \"b3da729a12d13119d38fee13c310415fefd1acba371871edee943161b3a408c7\" returns successfully"
Sep 13 00:06:11.786557 kubelet[2590]: I0913 00:06:11.786484 2590 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a949daff-5dad-4f8b-83c1-0800eccfea7c-cilium-run\") pod \"a949daff-5dad-4f8b-83c1-0800eccfea7c\" (UID: \"a949daff-5dad-4f8b-83c1-0800eccfea7c\") "
Sep 13 00:06:11.786557 kubelet[2590]: I0913 00:06:11.786531 2590 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a949daff-5dad-4f8b-83c1-0800eccfea7c-cni-path\") pod \"a949daff-5dad-4f8b-83c1-0800eccfea7c\" (UID: \"a949daff-5dad-4f8b-83c1-0800eccfea7c\") "
Sep 13 00:06:11.786557 kubelet[2590]: I0913 00:06:11.786563 2590 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a949daff-5dad-4f8b-83c1-0800eccfea7c-etc-cni-netd\") pod \"a949daff-5dad-4f8b-83c1-0800eccfea7c\" (UID: \"a949daff-5dad-4f8b-83c1-0800eccfea7c\") "
Sep 13 00:06:11.786844 kubelet[2590]: I0913 00:06:11.786580 2590 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a949daff-5dad-4f8b-83c1-0800eccfea7c-lib-modules\") pod \"a949daff-5dad-4f8b-83c1-0800eccfea7c\" (UID: \"a949daff-5dad-4f8b-83c1-0800eccfea7c\") "
Sep 13 00:06:11.786844 kubelet[2590]: I0913 00:06:11.786598 2590 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a949daff-5dad-4f8b-83c1-0800eccfea7c-host-proc-sys-kernel\") pod \"a949daff-5dad-4f8b-83c1-0800eccfea7c\" (UID: \"a949daff-5dad-4f8b-83c1-0800eccfea7c\") "
Sep 13 00:06:11.786844 kubelet[2590]: I0913 00:06:11.786613 2590 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a949daff-5dad-4f8b-83c1-0800eccfea7c-host-proc-sys-net\") pod \"a949daff-5dad-4f8b-83c1-0800eccfea7c\" (UID: \"a949daff-5dad-4f8b-83c1-0800eccfea7c\") "
Sep 13 00:06:11.786844 kubelet[2590]: I0913 00:06:11.786645 2590 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a949daff-5dad-4f8b-83c1-0800eccfea7c-clustermesh-secrets\") pod \"a949daff-5dad-4f8b-83c1-0800eccfea7c\" (UID: \"a949daff-5dad-4f8b-83c1-0800eccfea7c\") "
Sep 13 00:06:11.786844 kubelet[2590]: I0913 00:06:11.786659 2590 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a949daff-5dad-4f8b-83c1-0800eccfea7c-hostproc\") pod \"a949daff-5dad-4f8b-83c1-0800eccfea7c\" (UID: \"a949daff-5dad-4f8b-83c1-0800eccfea7c\") "
Sep 13 00:06:11.786844 kubelet[2590]: I0913 00:06:11.786712 2590 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a949daff-5dad-4f8b-83c1-0800eccfea7c-cilium-cgroup\") pod \"a949daff-5dad-4f8b-83c1-0800eccfea7c\" (UID: \"a949daff-5dad-4f8b-83c1-0800eccfea7c\") "
Sep 13 00:06:11.787000 kubelet[2590]: I0913 00:06:11.786742 2590 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a949daff-5dad-4f8b-83c1-0800eccfea7c-cilium-config-path\") pod \"a949daff-5dad-4f8b-83c1-0800eccfea7c\" (UID: \"a949daff-5dad-4f8b-83c1-0800eccfea7c\") "
Sep 13 00:06:11.787000 kubelet[2590]: I0913 00:06:11.786799 2590 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lv4qx\" (UniqueName: \"kubernetes.io/projected/a949daff-5dad-4f8b-83c1-0800eccfea7c-kube-api-access-lv4qx\") pod \"a949daff-5dad-4f8b-83c1-0800eccfea7c\" (UID: \"a949daff-5dad-4f8b-83c1-0800eccfea7c\") "
Sep 13 00:06:11.787000 kubelet[2590]: I0913 00:06:11.786822 2590 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bqhh6\" (UniqueName: \"kubernetes.io/projected/27180f33-be9f-4033-84fc-3b6ad1ee0241-kube-api-access-bqhh6\") pod \"27180f33-be9f-4033-84fc-3b6ad1ee0241\" (UID: \"27180f33-be9f-4033-84fc-3b6ad1ee0241\") "
Sep 13 00:06:11.787000 kubelet[2590]: I0913 00:06:11.786841 2590 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a949daff-5dad-4f8b-83c1-0800eccfea7c-hubble-tls\") pod \"a949daff-5dad-4f8b-83c1-0800eccfea7c\" (UID: \"a949daff-5dad-4f8b-83c1-0800eccfea7c\") "
Sep 13 00:06:11.787000 kubelet[2590]: I0913 00:06:11.786907 2590 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a949daff-5dad-4f8b-83c1-0800eccfea7c-xtables-lock\") pod \"a949daff-5dad-4f8b-83c1-0800eccfea7c\" (UID: \"a949daff-5dad-4f8b-83c1-0800eccfea7c\") "
Sep 13 00:06:11.787000 kubelet[2590]: I0913 00:06:11.786932 2590 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a949daff-5dad-4f8b-83c1-0800eccfea7c-bpf-maps\") pod \"a949daff-5dad-4f8b-83c1-0800eccfea7c\" (UID: \"a949daff-5dad-4f8b-83c1-0800eccfea7c\") "
Sep 13 00:06:11.787152 kubelet[2590]: I0913 00:06:11.786951 2590 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/27180f33-be9f-4033-84fc-3b6ad1ee0241-cilium-config-path\") pod \"27180f33-be9f-4033-84fc-3b6ad1ee0241\" (UID: \"27180f33-be9f-4033-84fc-3b6ad1ee0241\") "
Sep 13 00:06:11.787152 kubelet[2590]: I0913 00:06:11.787120 2590 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a949daff-5dad-4f8b-83c1-0800eccfea7c-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "a949daff-5dad-4f8b-83c1-0800eccfea7c" (UID: "a949daff-5dad-4f8b-83c1-0800eccfea7c"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 13 00:06:11.787202 kubelet[2590]: I0913 00:06:11.787163 2590 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a949daff-5dad-4f8b-83c1-0800eccfea7c-cni-path" (OuterVolumeSpecName: "cni-path") pod "a949daff-5dad-4f8b-83c1-0800eccfea7c" (UID: "a949daff-5dad-4f8b-83c1-0800eccfea7c"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 13 00:06:11.787202 kubelet[2590]: I0913 00:06:11.787188 2590 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a949daff-5dad-4f8b-83c1-0800eccfea7c-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "a949daff-5dad-4f8b-83c1-0800eccfea7c" (UID: "a949daff-5dad-4f8b-83c1-0800eccfea7c"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 13 00:06:11.787252 kubelet[2590]: I0913 00:06:11.787208 2590 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a949daff-5dad-4f8b-83c1-0800eccfea7c-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "a949daff-5dad-4f8b-83c1-0800eccfea7c" (UID: "a949daff-5dad-4f8b-83c1-0800eccfea7c"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 13 00:06:11.787252 kubelet[2590]: I0913 00:06:11.787222 2590 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a949daff-5dad-4f8b-83c1-0800eccfea7c-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "a949daff-5dad-4f8b-83c1-0800eccfea7c" (UID: "a949daff-5dad-4f8b-83c1-0800eccfea7c"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 13 00:06:11.787297 kubelet[2590]: I0913 00:06:11.787236 2590 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a949daff-5dad-4f8b-83c1-0800eccfea7c-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "a949daff-5dad-4f8b-83c1-0800eccfea7c" (UID: "a949daff-5dad-4f8b-83c1-0800eccfea7c"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 13 00:06:11.792247 kubelet[2590]: I0913 00:06:11.789061 2590 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a949daff-5dad-4f8b-83c1-0800eccfea7c-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "a949daff-5dad-4f8b-83c1-0800eccfea7c" (UID: "a949daff-5dad-4f8b-83c1-0800eccfea7c"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 13 00:06:11.792247 kubelet[2590]: I0913 00:06:11.790708 2590 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a949daff-5dad-4f8b-83c1-0800eccfea7c-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "a949daff-5dad-4f8b-83c1-0800eccfea7c" (UID: "a949daff-5dad-4f8b-83c1-0800eccfea7c"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 13 00:06:11.794733 kubelet[2590]: I0913 00:06:11.794251 2590 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a949daff-5dad-4f8b-83c1-0800eccfea7c-hostproc" (OuterVolumeSpecName: "hostproc") pod "a949daff-5dad-4f8b-83c1-0800eccfea7c" (UID: "a949daff-5dad-4f8b-83c1-0800eccfea7c"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 13 00:06:11.795255 kubelet[2590]: I0913 00:06:11.794355 2590 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a949daff-5dad-4f8b-83c1-0800eccfea7c-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "a949daff-5dad-4f8b-83c1-0800eccfea7c" (UID: "a949daff-5dad-4f8b-83c1-0800eccfea7c"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 13 00:06:11.801836 kubelet[2590]: I0913 00:06:11.801785 2590 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a949daff-5dad-4f8b-83c1-0800eccfea7c-kube-api-access-lv4qx" (OuterVolumeSpecName: "kube-api-access-lv4qx") pod "a949daff-5dad-4f8b-83c1-0800eccfea7c" (UID: "a949daff-5dad-4f8b-83c1-0800eccfea7c"). InnerVolumeSpecName "kube-api-access-lv4qx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 13 00:06:11.802820 kubelet[2590]: I0913 00:06:11.802177 2590 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27180f33-be9f-4033-84fc-3b6ad1ee0241-kube-api-access-bqhh6" (OuterVolumeSpecName: "kube-api-access-bqhh6") pod "27180f33-be9f-4033-84fc-3b6ad1ee0241" (UID: "27180f33-be9f-4033-84fc-3b6ad1ee0241"). InnerVolumeSpecName "kube-api-access-bqhh6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 13 00:06:11.802820 kubelet[2590]: I0913 00:06:11.802502 2590 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27180f33-be9f-4033-84fc-3b6ad1ee0241-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "27180f33-be9f-4033-84fc-3b6ad1ee0241" (UID: "27180f33-be9f-4033-84fc-3b6ad1ee0241"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 13 00:06:11.803258 kubelet[2590]: I0913 00:06:11.803023 2590 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a949daff-5dad-4f8b-83c1-0800eccfea7c-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "a949daff-5dad-4f8b-83c1-0800eccfea7c" (UID: "a949daff-5dad-4f8b-83c1-0800eccfea7c"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 13 00:06:11.803258 kubelet[2590]: I0913 00:06:11.803117 2590 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a949daff-5dad-4f8b-83c1-0800eccfea7c-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "a949daff-5dad-4f8b-83c1-0800eccfea7c" (UID: "a949daff-5dad-4f8b-83c1-0800eccfea7c"). InnerVolumeSpecName "hubble-tls".
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 13 00:06:11.803657 kubelet[2590]: I0913 00:06:11.803607 2590 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a949daff-5dad-4f8b-83c1-0800eccfea7c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "a949daff-5dad-4f8b-83c1-0800eccfea7c" (UID: "a949daff-5dad-4f8b-83c1-0800eccfea7c"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 13 00:06:11.887879 kubelet[2590]: I0913 00:06:11.887526 2590 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a949daff-5dad-4f8b-83c1-0800eccfea7c-clustermesh-secrets\") on node \"ci-4081-3-5-n-dc9d7711ed\" DevicePath \"\"" Sep 13 00:06:11.887879 kubelet[2590]: I0913 00:06:11.887623 2590 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a949daff-5dad-4f8b-83c1-0800eccfea7c-hostproc\") on node \"ci-4081-3-5-n-dc9d7711ed\" DevicePath \"\"" Sep 13 00:06:11.887879 kubelet[2590]: I0913 00:06:11.887643 2590 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a949daff-5dad-4f8b-83c1-0800eccfea7c-host-proc-sys-net\") on node \"ci-4081-3-5-n-dc9d7711ed\" DevicePath \"\"" Sep 13 00:06:11.887879 kubelet[2590]: I0913 00:06:11.887662 2590 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a949daff-5dad-4f8b-83c1-0800eccfea7c-cilium-cgroup\") on node \"ci-4081-3-5-n-dc9d7711ed\" DevicePath \"\"" Sep 13 00:06:11.887879 kubelet[2590]: I0913 00:06:11.887680 2590 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a949daff-5dad-4f8b-83c1-0800eccfea7c-cilium-config-path\") on node \"ci-4081-3-5-n-dc9d7711ed\" DevicePath \"\"" Sep 13 00:06:11.887879 kubelet[2590]: I0913 
00:06:11.887696 2590 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lv4qx\" (UniqueName: \"kubernetes.io/projected/a949daff-5dad-4f8b-83c1-0800eccfea7c-kube-api-access-lv4qx\") on node \"ci-4081-3-5-n-dc9d7711ed\" DevicePath \"\"" Sep 13 00:06:11.887879 kubelet[2590]: I0913 00:06:11.887713 2590 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bqhh6\" (UniqueName: \"kubernetes.io/projected/27180f33-be9f-4033-84fc-3b6ad1ee0241-kube-api-access-bqhh6\") on node \"ci-4081-3-5-n-dc9d7711ed\" DevicePath \"\"" Sep 13 00:06:11.887879 kubelet[2590]: I0913 00:06:11.887728 2590 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a949daff-5dad-4f8b-83c1-0800eccfea7c-hubble-tls\") on node \"ci-4081-3-5-n-dc9d7711ed\" DevicePath \"\"" Sep 13 00:06:11.888386 kubelet[2590]: I0913 00:06:11.887743 2590 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a949daff-5dad-4f8b-83c1-0800eccfea7c-xtables-lock\") on node \"ci-4081-3-5-n-dc9d7711ed\" DevicePath \"\"" Sep 13 00:06:11.888386 kubelet[2590]: I0913 00:06:11.887758 2590 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/27180f33-be9f-4033-84fc-3b6ad1ee0241-cilium-config-path\") on node \"ci-4081-3-5-n-dc9d7711ed\" DevicePath \"\"" Sep 13 00:06:11.888386 kubelet[2590]: I0913 00:06:11.887773 2590 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a949daff-5dad-4f8b-83c1-0800eccfea7c-bpf-maps\") on node \"ci-4081-3-5-n-dc9d7711ed\" DevicePath \"\"" Sep 13 00:06:11.888386 kubelet[2590]: I0913 00:06:11.887790 2590 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a949daff-5dad-4f8b-83c1-0800eccfea7c-cilium-run\") on node \"ci-4081-3-5-n-dc9d7711ed\" DevicePath \"\"" Sep 13 00:06:11.888386 kubelet[2590]: 
I0913 00:06:11.887805 2590 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a949daff-5dad-4f8b-83c1-0800eccfea7c-cni-path\") on node \"ci-4081-3-5-n-dc9d7711ed\" DevicePath \"\"" Sep 13 00:06:11.888386 kubelet[2590]: I0913 00:06:11.887821 2590 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a949daff-5dad-4f8b-83c1-0800eccfea7c-etc-cni-netd\") on node \"ci-4081-3-5-n-dc9d7711ed\" DevicePath \"\"" Sep 13 00:06:11.888386 kubelet[2590]: I0913 00:06:11.887835 2590 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a949daff-5dad-4f8b-83c1-0800eccfea7c-lib-modules\") on node \"ci-4081-3-5-n-dc9d7711ed\" DevicePath \"\"" Sep 13 00:06:11.888386 kubelet[2590]: I0913 00:06:11.887851 2590 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a949daff-5dad-4f8b-83c1-0800eccfea7c-host-proc-sys-kernel\") on node \"ci-4081-3-5-n-dc9d7711ed\" DevicePath \"\"" Sep 13 00:06:12.228307 kubelet[2590]: I0913 00:06:12.228179 2590 scope.go:117] "RemoveContainer" containerID="1115113d729d38f1b49252c6cfcb162ee90599a47f5a5fa16f82698ba59865f3" Sep 13 00:06:12.237118 containerd[1478]: time="2025-09-13T00:06:12.235497383Z" level=info msg="RemoveContainer for \"1115113d729d38f1b49252c6cfcb162ee90599a47f5a5fa16f82698ba59865f3\"" Sep 13 00:06:12.239733 systemd[1]: Removed slice kubepods-burstable-poda949daff_5dad_4f8b_83c1_0800eccfea7c.slice - libcontainer container kubepods-burstable-poda949daff_5dad_4f8b_83c1_0800eccfea7c.slice. Sep 13 00:06:12.239840 systemd[1]: kubepods-burstable-poda949daff_5dad_4f8b_83c1_0800eccfea7c.slice: Consumed 7.701s CPU time. 
Sep 13 00:06:12.248798 containerd[1478]: time="2025-09-13T00:06:12.248711268Z" level=info msg="RemoveContainer for \"1115113d729d38f1b49252c6cfcb162ee90599a47f5a5fa16f82698ba59865f3\" returns successfully" Sep 13 00:06:12.251615 kubelet[2590]: I0913 00:06:12.251523 2590 scope.go:117] "RemoveContainer" containerID="f80f5b4d649e409bd10793da5881fcc152080facd8ff98a5f090542c1293db66" Sep 13 00:06:12.253870 systemd[1]: Removed slice kubepods-besteffort-pod27180f33_be9f_4033_84fc_3b6ad1ee0241.slice - libcontainer container kubepods-besteffort-pod27180f33_be9f_4033_84fc_3b6ad1ee0241.slice. Sep 13 00:06:12.259151 containerd[1478]: time="2025-09-13T00:06:12.258989085Z" level=info msg="RemoveContainer for \"f80f5b4d649e409bd10793da5881fcc152080facd8ff98a5f090542c1293db66\"" Sep 13 00:06:12.265235 containerd[1478]: time="2025-09-13T00:06:12.265175063Z" level=info msg="RemoveContainer for \"f80f5b4d649e409bd10793da5881fcc152080facd8ff98a5f090542c1293db66\" returns successfully" Sep 13 00:06:12.266049 kubelet[2590]: I0913 00:06:12.265994 2590 scope.go:117] "RemoveContainer" containerID="c62e0b4dc10d8ada86f8c1dd5bc3453b42934598dcdbcbb4268fcb7403bb340d" Sep 13 00:06:12.268121 containerd[1478]: time="2025-09-13T00:06:12.267886729Z" level=info msg="RemoveContainer for \"c62e0b4dc10d8ada86f8c1dd5bc3453b42934598dcdbcbb4268fcb7403bb340d\"" Sep 13 00:06:12.273051 containerd[1478]: time="2025-09-13T00:06:12.272953177Z" level=info msg="RemoveContainer for \"c62e0b4dc10d8ada86f8c1dd5bc3453b42934598dcdbcbb4268fcb7403bb340d\" returns successfully" Sep 13 00:06:12.275870 kubelet[2590]: I0913 00:06:12.275806 2590 scope.go:117] "RemoveContainer" containerID="4f933c5a5117d3cc219527c6c2a3abb5df99502c009ad5e688e21b49da5b22d4" Sep 13 00:06:12.278913 containerd[1478]: time="2025-09-13T00:06:12.278864032Z" level=info msg="RemoveContainer for \"4f933c5a5117d3cc219527c6c2a3abb5df99502c009ad5e688e21b49da5b22d4\"" Sep 13 00:06:12.285612 containerd[1478]: time="2025-09-13T00:06:12.285500455Z" level=info 
msg="RemoveContainer for \"4f933c5a5117d3cc219527c6c2a3abb5df99502c009ad5e688e21b49da5b22d4\" returns successfully" Sep 13 00:06:12.286162 kubelet[2590]: I0913 00:06:12.285823 2590 scope.go:117] "RemoveContainer" containerID="be7c605ac21187ce18ea1226bbc9f12c34368e298412d34702ab2d7de8262abc" Sep 13 00:06:12.288205 containerd[1478]: time="2025-09-13T00:06:12.288157200Z" level=info msg="RemoveContainer for \"be7c605ac21187ce18ea1226bbc9f12c34368e298412d34702ab2d7de8262abc\"" Sep 13 00:06:12.293892 containerd[1478]: time="2025-09-13T00:06:12.293746213Z" level=info msg="RemoveContainer for \"be7c605ac21187ce18ea1226bbc9f12c34368e298412d34702ab2d7de8262abc\" returns successfully" Sep 13 00:06:12.294165 kubelet[2590]: I0913 00:06:12.294130 2590 scope.go:117] "RemoveContainer" containerID="1115113d729d38f1b49252c6cfcb162ee90599a47f5a5fa16f82698ba59865f3" Sep 13 00:06:12.294659 containerd[1478]: time="2025-09-13T00:06:12.294615741Z" level=error msg="ContainerStatus for \"1115113d729d38f1b49252c6cfcb162ee90599a47f5a5fa16f82698ba59865f3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1115113d729d38f1b49252c6cfcb162ee90599a47f5a5fa16f82698ba59865f3\": not found" Sep 13 00:06:12.294824 kubelet[2590]: E0913 00:06:12.294794 2590 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1115113d729d38f1b49252c6cfcb162ee90599a47f5a5fa16f82698ba59865f3\": not found" containerID="1115113d729d38f1b49252c6cfcb162ee90599a47f5a5fa16f82698ba59865f3" Sep 13 00:06:12.294939 kubelet[2590]: I0913 00:06:12.294835 2590 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1115113d729d38f1b49252c6cfcb162ee90599a47f5a5fa16f82698ba59865f3"} err="failed to get container status \"1115113d729d38f1b49252c6cfcb162ee90599a47f5a5fa16f82698ba59865f3\": rpc error: code = NotFound desc = an error occurred when try to find 
container \"1115113d729d38f1b49252c6cfcb162ee90599a47f5a5fa16f82698ba59865f3\": not found" Sep 13 00:06:12.294939 kubelet[2590]: I0913 00:06:12.294937 2590 scope.go:117] "RemoveContainer" containerID="f80f5b4d649e409bd10793da5881fcc152080facd8ff98a5f090542c1293db66" Sep 13 00:06:12.295258 containerd[1478]: time="2025-09-13T00:06:12.295219347Z" level=error msg="ContainerStatus for \"f80f5b4d649e409bd10793da5881fcc152080facd8ff98a5f090542c1293db66\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f80f5b4d649e409bd10793da5881fcc152080facd8ff98a5f090542c1293db66\": not found" Sep 13 00:06:12.295788 kubelet[2590]: E0913 00:06:12.295764 2590 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f80f5b4d649e409bd10793da5881fcc152080facd8ff98a5f090542c1293db66\": not found" containerID="f80f5b4d649e409bd10793da5881fcc152080facd8ff98a5f090542c1293db66" Sep 13 00:06:12.295861 kubelet[2590]: I0913 00:06:12.295799 2590 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f80f5b4d649e409bd10793da5881fcc152080facd8ff98a5f090542c1293db66"} err="failed to get container status \"f80f5b4d649e409bd10793da5881fcc152080facd8ff98a5f090542c1293db66\": rpc error: code = NotFound desc = an error occurred when try to find container \"f80f5b4d649e409bd10793da5881fcc152080facd8ff98a5f090542c1293db66\": not found" Sep 13 00:06:12.295861 kubelet[2590]: I0913 00:06:12.295826 2590 scope.go:117] "RemoveContainer" containerID="c62e0b4dc10d8ada86f8c1dd5bc3453b42934598dcdbcbb4268fcb7403bb340d" Sep 13 00:06:12.297141 containerd[1478]: time="2025-09-13T00:06:12.297077964Z" level=error msg="ContainerStatus for \"c62e0b4dc10d8ada86f8c1dd5bc3453b42934598dcdbcbb4268fcb7403bb340d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"c62e0b4dc10d8ada86f8c1dd5bc3453b42934598dcdbcbb4268fcb7403bb340d\": not found" Sep 13 00:06:12.297353 kubelet[2590]: E0913 00:06:12.297323 2590 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c62e0b4dc10d8ada86f8c1dd5bc3453b42934598dcdbcbb4268fcb7403bb340d\": not found" containerID="c62e0b4dc10d8ada86f8c1dd5bc3453b42934598dcdbcbb4268fcb7403bb340d" Sep 13 00:06:12.297414 kubelet[2590]: I0913 00:06:12.297371 2590 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c62e0b4dc10d8ada86f8c1dd5bc3453b42934598dcdbcbb4268fcb7403bb340d"} err="failed to get container status \"c62e0b4dc10d8ada86f8c1dd5bc3453b42934598dcdbcbb4268fcb7403bb340d\": rpc error: code = NotFound desc = an error occurred when try to find container \"c62e0b4dc10d8ada86f8c1dd5bc3453b42934598dcdbcbb4268fcb7403bb340d\": not found" Sep 13 00:06:12.297414 kubelet[2590]: I0913 00:06:12.297399 2590 scope.go:117] "RemoveContainer" containerID="4f933c5a5117d3cc219527c6c2a3abb5df99502c009ad5e688e21b49da5b22d4" Sep 13 00:06:12.297849 containerd[1478]: time="2025-09-13T00:06:12.297810931Z" level=error msg="ContainerStatus for \"4f933c5a5117d3cc219527c6c2a3abb5df99502c009ad5e688e21b49da5b22d4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4f933c5a5117d3cc219527c6c2a3abb5df99502c009ad5e688e21b49da5b22d4\": not found" Sep 13 00:06:12.298715 kubelet[2590]: E0913 00:06:12.298674 2590 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4f933c5a5117d3cc219527c6c2a3abb5df99502c009ad5e688e21b49da5b22d4\": not found" containerID="4f933c5a5117d3cc219527c6c2a3abb5df99502c009ad5e688e21b49da5b22d4" Sep 13 00:06:12.298818 kubelet[2590]: I0913 00:06:12.298721 2590 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"4f933c5a5117d3cc219527c6c2a3abb5df99502c009ad5e688e21b49da5b22d4"} err="failed to get container status \"4f933c5a5117d3cc219527c6c2a3abb5df99502c009ad5e688e21b49da5b22d4\": rpc error: code = NotFound desc = an error occurred when try to find container \"4f933c5a5117d3cc219527c6c2a3abb5df99502c009ad5e688e21b49da5b22d4\": not found" Sep 13 00:06:12.298818 kubelet[2590]: I0913 00:06:12.298750 2590 scope.go:117] "RemoveContainer" containerID="be7c605ac21187ce18ea1226bbc9f12c34368e298412d34702ab2d7de8262abc" Sep 13 00:06:12.300312 containerd[1478]: time="2025-09-13T00:06:12.300249554Z" level=error msg="ContainerStatus for \"be7c605ac21187ce18ea1226bbc9f12c34368e298412d34702ab2d7de8262abc\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"be7c605ac21187ce18ea1226bbc9f12c34368e298412d34702ab2d7de8262abc\": not found" Sep 13 00:06:12.300580 kubelet[2590]: E0913 00:06:12.300513 2590 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"be7c605ac21187ce18ea1226bbc9f12c34368e298412d34702ab2d7de8262abc\": not found" containerID="be7c605ac21187ce18ea1226bbc9f12c34368e298412d34702ab2d7de8262abc" Sep 13 00:06:12.300580 kubelet[2590]: I0913 00:06:12.300565 2590 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"be7c605ac21187ce18ea1226bbc9f12c34368e298412d34702ab2d7de8262abc"} err="failed to get container status \"be7c605ac21187ce18ea1226bbc9f12c34368e298412d34702ab2d7de8262abc\": rpc error: code = NotFound desc = an error occurred when try to find container \"be7c605ac21187ce18ea1226bbc9f12c34368e298412d34702ab2d7de8262abc\": not found" Sep 13 00:06:12.300771 kubelet[2590]: I0913 00:06:12.300589 2590 scope.go:117] "RemoveContainer" containerID="55e580bed9ae8c2764c9059a5f2cf07fa2b05aacfd5879170b5855c03702792b" Sep 13 00:06:12.305071 containerd[1478]: 
time="2025-09-13T00:06:12.305025359Z" level=info msg="RemoveContainer for \"55e580bed9ae8c2764c9059a5f2cf07fa2b05aacfd5879170b5855c03702792b\"" Sep 13 00:06:12.308557 containerd[1478]: time="2025-09-13T00:06:12.308493832Z" level=info msg="RemoveContainer for \"55e580bed9ae8c2764c9059a5f2cf07fa2b05aacfd5879170b5855c03702792b\" returns successfully" Sep 13 00:06:12.308930 kubelet[2590]: I0913 00:06:12.308906 2590 scope.go:117] "RemoveContainer" containerID="55e580bed9ae8c2764c9059a5f2cf07fa2b05aacfd5879170b5855c03702792b" Sep 13 00:06:12.309468 containerd[1478]: time="2025-09-13T00:06:12.309409040Z" level=error msg="ContainerStatus for \"55e580bed9ae8c2764c9059a5f2cf07fa2b05aacfd5879170b5855c03702792b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"55e580bed9ae8c2764c9059a5f2cf07fa2b05aacfd5879170b5855c03702792b\": not found" Sep 13 00:06:12.309676 kubelet[2590]: E0913 00:06:12.309624 2590 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"55e580bed9ae8c2764c9059a5f2cf07fa2b05aacfd5879170b5855c03702792b\": not found" containerID="55e580bed9ae8c2764c9059a5f2cf07fa2b05aacfd5879170b5855c03702792b" Sep 13 00:06:12.309746 kubelet[2590]: I0913 00:06:12.309691 2590 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"55e580bed9ae8c2764c9059a5f2cf07fa2b05aacfd5879170b5855c03702792b"} err="failed to get container status \"55e580bed9ae8c2764c9059a5f2cf07fa2b05aacfd5879170b5855c03702792b\": rpc error: code = NotFound desc = an error occurred when try to find container \"55e580bed9ae8c2764c9059a5f2cf07fa2b05aacfd5879170b5855c03702792b\": not found" Sep 13 00:06:12.472173 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b3da729a12d13119d38fee13c310415fefd1acba371871edee943161b3a408c7-rootfs.mount: Deactivated successfully. 
Sep 13 00:06:12.472325 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b3da729a12d13119d38fee13c310415fefd1acba371871edee943161b3a408c7-shm.mount: Deactivated successfully. Sep 13 00:06:12.472404 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8e42e88f6b1df1318ffa5be2f9a6326c7e525bef6765c7cb2deee26f357edce3-rootfs.mount: Deactivated successfully. Sep 13 00:06:12.472485 systemd[1]: var-lib-kubelet-pods-a949daff\x2d5dad\x2d4f8b\x2d83c1\x2d0800eccfea7c-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 13 00:06:12.472584 systemd[1]: var-lib-kubelet-pods-a949daff\x2d5dad\x2d4f8b\x2d83c1\x2d0800eccfea7c-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 13 00:06:12.472659 systemd[1]: var-lib-kubelet-pods-27180f33\x2dbe9f\x2d4033\x2d84fc\x2d3b6ad1ee0241-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbqhh6.mount: Deactivated successfully. Sep 13 00:06:12.472730 systemd[1]: var-lib-kubelet-pods-a949daff\x2d5dad\x2d4f8b\x2d83c1\x2d0800eccfea7c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dlv4qx.mount: Deactivated successfully. Sep 13 00:06:12.615037 kubelet[2590]: I0913 00:06:12.613367 2590 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27180f33-be9f-4033-84fc-3b6ad1ee0241" path="/var/lib/kubelet/pods/27180f33-be9f-4033-84fc-3b6ad1ee0241/volumes" Sep 13 00:06:12.615037 kubelet[2590]: I0913 00:06:12.613878 2590 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a949daff-5dad-4f8b-83c1-0800eccfea7c" path="/var/lib/kubelet/pods/a949daff-5dad-4f8b-83c1-0800eccfea7c/volumes" Sep 13 00:06:13.546514 sshd[4165]: pam_unix(sshd:session): session closed for user core Sep 13 00:06:13.556699 systemd[1]: sshd@21-91.99.150.175:22-147.75.109.163:51708.service: Deactivated successfully. Sep 13 00:06:13.561190 systemd[1]: session-21.scope: Deactivated successfully. 
Sep 13 00:06:13.561512 systemd[1]: session-21.scope: Consumed 1.040s CPU time. Sep 13 00:06:13.564444 systemd-logind[1457]: Session 21 logged out. Waiting for processes to exit. Sep 13 00:06:13.566899 systemd-logind[1457]: Removed session 21. Sep 13 00:06:13.733996 systemd[1]: Started sshd@22-91.99.150.175:22-147.75.109.163:39446.service - OpenSSH per-connection server daemon (147.75.109.163:39446). Sep 13 00:06:14.716463 sshd[4333]: Accepted publickey for core from 147.75.109.163 port 39446 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM Sep 13 00:06:14.720920 sshd[4333]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:06:14.728833 systemd-logind[1457]: New session 22 of user core. Sep 13 00:06:14.738125 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 13 00:06:16.423334 kubelet[2590]: E0913 00:06:16.423271 2590 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a949daff-5dad-4f8b-83c1-0800eccfea7c" containerName="mount-bpf-fs" Sep 13 00:06:16.423334 kubelet[2590]: E0913 00:06:16.423315 2590 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a949daff-5dad-4f8b-83c1-0800eccfea7c" containerName="clean-cilium-state" Sep 13 00:06:16.423334 kubelet[2590]: E0913 00:06:16.423323 2590 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a949daff-5dad-4f8b-83c1-0800eccfea7c" containerName="cilium-agent" Sep 13 00:06:16.423334 kubelet[2590]: E0913 00:06:16.423332 2590 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a949daff-5dad-4f8b-83c1-0800eccfea7c" containerName="mount-cgroup" Sep 13 00:06:16.423334 kubelet[2590]: E0913 00:06:16.423337 2590 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a949daff-5dad-4f8b-83c1-0800eccfea7c" containerName="apply-sysctl-overwrites" Sep 13 00:06:16.423334 kubelet[2590]: E0913 00:06:16.423344 2590 cpu_manager.go:395] "RemoveStaleState: removing container" 
podUID="27180f33-be9f-4033-84fc-3b6ad1ee0241" containerName="cilium-operator" Sep 13 00:06:16.423946 kubelet[2590]: I0913 00:06:16.423373 2590 memory_manager.go:354] "RemoveStaleState removing state" podUID="27180f33-be9f-4033-84fc-3b6ad1ee0241" containerName="cilium-operator" Sep 13 00:06:16.423946 kubelet[2590]: I0913 00:06:16.423379 2590 memory_manager.go:354] "RemoveStaleState removing state" podUID="a949daff-5dad-4f8b-83c1-0800eccfea7c" containerName="cilium-agent" Sep 13 00:06:16.434380 systemd[1]: Created slice kubepods-burstable-pod24a2a3e4_99dd_4456_88c7_6b6cf936aa12.slice - libcontainer container kubepods-burstable-pod24a2a3e4_99dd_4456_88c7_6b6cf936aa12.slice. Sep 13 00:06:16.453574 kubelet[2590]: W0913 00:06:16.453104 2590 reflector.go:561] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:ci-4081-3-5-n-dc9d7711ed" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081-3-5-n-dc9d7711ed' and this object Sep 13 00:06:16.453574 kubelet[2590]: E0913 00:06:16.453157 2590 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-ipsec-keys\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-ipsec-keys\" is forbidden: User \"system:node:ci-4081-3-5-n-dc9d7711ed\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4081-3-5-n-dc9d7711ed' and this object" logger="UnhandledError" Sep 13 00:06:16.457125 kubelet[2590]: W0913 00:06:16.455860 2590 reflector.go:561] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-4081-3-5-n-dc9d7711ed" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081-3-5-n-dc9d7711ed' and this object Sep 13 00:06:16.457125 kubelet[2590]: 
W0913 00:06:16.455906 2590 reflector.go:561] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-4081-3-5-n-dc9d7711ed" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081-3-5-n-dc9d7711ed' and this object Sep 13 00:06:16.457125 kubelet[2590]: E0913 00:06:16.455916 2590 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-clustermesh\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:ci-4081-3-5-n-dc9d7711ed\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4081-3-5-n-dc9d7711ed' and this object" logger="UnhandledError" Sep 13 00:06:16.457125 kubelet[2590]: W0913 00:06:16.455860 2590 reflector.go:561] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-4081-3-5-n-dc9d7711ed" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081-3-5-n-dc9d7711ed' and this object Sep 13 00:06:16.457125 kubelet[2590]: E0913 00:06:16.455943 2590 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"hubble-server-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"hubble-server-certs\" is forbidden: User \"system:node:ci-4081-3-5-n-dc9d7711ed\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4081-3-5-n-dc9d7711ed' and this object" logger="UnhandledError" Sep 13 00:06:16.457398 kubelet[2590]: E0913 00:06:16.455936 2590 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cilium-config\" is forbidden: User 
\"system:node:ci-4081-3-5-n-dc9d7711ed\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4081-3-5-n-dc9d7711ed' and this object" logger="UnhandledError" Sep 13 00:06:16.505782 sshd[4333]: pam_unix(sshd:session): session closed for user core Sep 13 00:06:16.513228 systemd[1]: sshd@22-91.99.150.175:22-147.75.109.163:39446.service: Deactivated successfully. Sep 13 00:06:16.524278 systemd[1]: session-22.scope: Deactivated successfully. Sep 13 00:06:16.525221 systemd[1]: session-22.scope: Consumed 1.004s CPU time. Sep 13 00:06:16.527275 systemd-logind[1457]: Session 22 logged out. Waiting for processes to exit. Sep 13 00:06:16.529080 systemd-logind[1457]: Removed session 22. Sep 13 00:06:16.619258 kubelet[2590]: I0913 00:06:16.619147 2590 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/24a2a3e4-99dd-4456-88c7-6b6cf936aa12-cilium-run\") pod \"cilium-lgwzj\" (UID: \"24a2a3e4-99dd-4456-88c7-6b6cf936aa12\") " pod="kube-system/cilium-lgwzj" Sep 13 00:06:16.620174 kubelet[2590]: I0913 00:06:16.619282 2590 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/24a2a3e4-99dd-4456-88c7-6b6cf936aa12-hubble-tls\") pod \"cilium-lgwzj\" (UID: \"24a2a3e4-99dd-4456-88c7-6b6cf936aa12\") " pod="kube-system/cilium-lgwzj" Sep 13 00:06:16.620174 kubelet[2590]: I0913 00:06:16.619366 2590 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/24a2a3e4-99dd-4456-88c7-6b6cf936aa12-cilium-ipsec-secrets\") pod \"cilium-lgwzj\" (UID: \"24a2a3e4-99dd-4456-88c7-6b6cf936aa12\") " pod="kube-system/cilium-lgwzj" Sep 13 00:06:16.620174 kubelet[2590]: I0913 00:06:16.619426 2590 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/24a2a3e4-99dd-4456-88c7-6b6cf936aa12-hostproc\") pod \"cilium-lgwzj\" (UID: \"24a2a3e4-99dd-4456-88c7-6b6cf936aa12\") " pod="kube-system/cilium-lgwzj" Sep 13 00:06:16.620174 kubelet[2590]: I0913 00:06:16.619461 2590 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/24a2a3e4-99dd-4456-88c7-6b6cf936aa12-xtables-lock\") pod \"cilium-lgwzj\" (UID: \"24a2a3e4-99dd-4456-88c7-6b6cf936aa12\") " pod="kube-system/cilium-lgwzj" Sep 13 00:06:16.620174 kubelet[2590]: I0913 00:06:16.619504 2590 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/24a2a3e4-99dd-4456-88c7-6b6cf936aa12-cilium-cgroup\") pod \"cilium-lgwzj\" (UID: \"24a2a3e4-99dd-4456-88c7-6b6cf936aa12\") " pod="kube-system/cilium-lgwzj" Sep 13 00:06:16.620174 kubelet[2590]: I0913 00:06:16.619565 2590 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/24a2a3e4-99dd-4456-88c7-6b6cf936aa12-cni-path\") pod \"cilium-lgwzj\" (UID: \"24a2a3e4-99dd-4456-88c7-6b6cf936aa12\") " pod="kube-system/cilium-lgwzj" Sep 13 00:06:16.620368 kubelet[2590]: I0913 00:06:16.619602 2590 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/24a2a3e4-99dd-4456-88c7-6b6cf936aa12-etc-cni-netd\") pod \"cilium-lgwzj\" (UID: \"24a2a3e4-99dd-4456-88c7-6b6cf936aa12\") " pod="kube-system/cilium-lgwzj" Sep 13 00:06:16.620368 kubelet[2590]: I0913 00:06:16.619635 2590 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/24a2a3e4-99dd-4456-88c7-6b6cf936aa12-lib-modules\") pod \"cilium-lgwzj\" (UID: \"24a2a3e4-99dd-4456-88c7-6b6cf936aa12\") " pod="kube-system/cilium-lgwzj" Sep 13 00:06:16.620368 kubelet[2590]: I0913 00:06:16.619677 2590 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/24a2a3e4-99dd-4456-88c7-6b6cf936aa12-cilium-config-path\") pod \"cilium-lgwzj\" (UID: \"24a2a3e4-99dd-4456-88c7-6b6cf936aa12\") " pod="kube-system/cilium-lgwzj" Sep 13 00:06:16.620368 kubelet[2590]: I0913 00:06:16.619716 2590 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/24a2a3e4-99dd-4456-88c7-6b6cf936aa12-clustermesh-secrets\") pod \"cilium-lgwzj\" (UID: \"24a2a3e4-99dd-4456-88c7-6b6cf936aa12\") " pod="kube-system/cilium-lgwzj" Sep 13 00:06:16.620368 kubelet[2590]: I0913 00:06:16.619755 2590 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/24a2a3e4-99dd-4456-88c7-6b6cf936aa12-host-proc-sys-kernel\") pod \"cilium-lgwzj\" (UID: \"24a2a3e4-99dd-4456-88c7-6b6cf936aa12\") " pod="kube-system/cilium-lgwzj" Sep 13 00:06:16.620478 kubelet[2590]: I0913 00:06:16.619790 2590 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnjjp\" (UniqueName: \"kubernetes.io/projected/24a2a3e4-99dd-4456-88c7-6b6cf936aa12-kube-api-access-gnjjp\") pod \"cilium-lgwzj\" (UID: \"24a2a3e4-99dd-4456-88c7-6b6cf936aa12\") " pod="kube-system/cilium-lgwzj" Sep 13 00:06:16.620478 kubelet[2590]: I0913 00:06:16.619829 2590 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/24a2a3e4-99dd-4456-88c7-6b6cf936aa12-bpf-maps\") pod 
\"cilium-lgwzj\" (UID: \"24a2a3e4-99dd-4456-88c7-6b6cf936aa12\") " pod="kube-system/cilium-lgwzj" Sep 13 00:06:16.620478 kubelet[2590]: I0913 00:06:16.619866 2590 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/24a2a3e4-99dd-4456-88c7-6b6cf936aa12-host-proc-sys-net\") pod \"cilium-lgwzj\" (UID: \"24a2a3e4-99dd-4456-88c7-6b6cf936aa12\") " pod="kube-system/cilium-lgwzj" Sep 13 00:06:16.687992 systemd[1]: Started sshd@23-91.99.150.175:22-147.75.109.163:39456.service - OpenSSH per-connection server daemon (147.75.109.163:39456). Sep 13 00:06:16.735747 kubelet[2590]: E0913 00:06:16.735690 2590 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 13 00:06:17.695695 sshd[4344]: Accepted publickey for core from 147.75.109.163 port 39456 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM Sep 13 00:06:17.698489 sshd[4344]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:06:17.707623 systemd-logind[1457]: New session 23 of user core. Sep 13 00:06:17.713862 systemd[1]: Started session-23.scope - Session 23 of User core. 
Sep 13 00:06:17.722886 kubelet[2590]: E0913 00:06:17.722330 2590 projected.go:263] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition Sep 13 00:06:17.722886 kubelet[2590]: E0913 00:06:17.722374 2590 projected.go:194] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-lgwzj: failed to sync secret cache: timed out waiting for the condition Sep 13 00:06:17.722886 kubelet[2590]: E0913 00:06:17.722456 2590 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/24a2a3e4-99dd-4456-88c7-6b6cf936aa12-hubble-tls podName:24a2a3e4-99dd-4456-88c7-6b6cf936aa12 nodeName:}" failed. No retries permitted until 2025-09-13 00:06:18.222431859 +0000 UTC m=+211.764417589 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/24a2a3e4-99dd-4456-88c7-6b6cf936aa12-hubble-tls") pod "cilium-lgwzj" (UID: "24a2a3e4-99dd-4456-88c7-6b6cf936aa12") : failed to sync secret cache: timed out waiting for the condition Sep 13 00:06:17.722886 kubelet[2590]: E0913 00:06:17.722776 2590 secret.go:189] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition Sep 13 00:06:17.722886 kubelet[2590]: E0913 00:06:17.722815 2590 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/24a2a3e4-99dd-4456-88c7-6b6cf936aa12-clustermesh-secrets podName:24a2a3e4-99dd-4456-88c7-6b6cf936aa12 nodeName:}" failed. No retries permitted until 2025-09-13 00:06:18.222804663 +0000 UTC m=+211.764790393 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/24a2a3e4-99dd-4456-88c7-6b6cf936aa12-clustermesh-secrets") pod "cilium-lgwzj" (UID: "24a2a3e4-99dd-4456-88c7-6b6cf936aa12") : failed to sync secret cache: timed out waiting for the condition Sep 13 00:06:17.722886 kubelet[2590]: E0913 00:06:17.722837 2590 secret.go:189] Couldn't get secret kube-system/cilium-ipsec-keys: failed to sync secret cache: timed out waiting for the condition Sep 13 00:06:17.723512 kubelet[2590]: E0913 00:06:17.722859 2590 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/24a2a3e4-99dd-4456-88c7-6b6cf936aa12-cilium-ipsec-secrets podName:24a2a3e4-99dd-4456-88c7-6b6cf936aa12 nodeName:}" failed. No retries permitted until 2025-09-13 00:06:18.222851143 +0000 UTC m=+211.764836873 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-ipsec-secrets" (UniqueName: "kubernetes.io/secret/24a2a3e4-99dd-4456-88c7-6b6cf936aa12-cilium-ipsec-secrets") pod "cilium-lgwzj" (UID: "24a2a3e4-99dd-4456-88c7-6b6cf936aa12") : failed to sync secret cache: timed out waiting for the condition Sep 13 00:06:18.384655 sshd[4344]: pam_unix(sshd:session): session closed for user core Sep 13 00:06:18.390031 systemd-logind[1457]: Session 23 logged out. Waiting for processes to exit. Sep 13 00:06:18.391284 systemd[1]: sshd@23-91.99.150.175:22-147.75.109.163:39456.service: Deactivated successfully. Sep 13 00:06:18.394578 systemd[1]: session-23.scope: Deactivated successfully. Sep 13 00:06:18.397662 systemd-logind[1457]: Removed session 23. Sep 13 00:06:18.542865 containerd[1478]: time="2025-09-13T00:06:18.541486836Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lgwzj,Uid:24a2a3e4-99dd-4456-88c7-6b6cf936aa12,Namespace:kube-system,Attempt:0,}" Sep 13 00:06:18.564933 systemd[1]: Started sshd@24-91.99.150.175:22-147.75.109.163:39462.service - OpenSSH per-connection server daemon (147.75.109.163:39462). 
Sep 13 00:06:18.590478 containerd[1478]: time="2025-09-13T00:06:18.590138790Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:06:18.590703 containerd[1478]: time="2025-09-13T00:06:18.590474233Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:06:18.590703 containerd[1478]: time="2025-09-13T00:06:18.590576954Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:06:18.590997 containerd[1478]: time="2025-09-13T00:06:18.590924197Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:06:18.624818 systemd[1]: Started cri-containerd-ce602d8729caac3bb5f514b4f56b89ef23c9b39de7e0c7d1ebb27e32b7fb0217.scope - libcontainer container ce602d8729caac3bb5f514b4f56b89ef23c9b39de7e0c7d1ebb27e32b7fb0217. 
Sep 13 00:06:18.656796 containerd[1478]: time="2025-09-13T00:06:18.656661064Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lgwzj,Uid:24a2a3e4-99dd-4456-88c7-6b6cf936aa12,Namespace:kube-system,Attempt:0,} returns sandbox id \"ce602d8729caac3bb5f514b4f56b89ef23c9b39de7e0c7d1ebb27e32b7fb0217\"" Sep 13 00:06:18.667024 containerd[1478]: time="2025-09-13T00:06:18.666926196Z" level=info msg="CreateContainer within sandbox \"ce602d8729caac3bb5f514b4f56b89ef23c9b39de7e0c7d1ebb27e32b7fb0217\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 13 00:06:18.682215 containerd[1478]: time="2025-09-13T00:06:18.682139492Z" level=info msg="CreateContainer within sandbox \"ce602d8729caac3bb5f514b4f56b89ef23c9b39de7e0c7d1ebb27e32b7fb0217\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c34cbdb25f5f6fa0f98d0b9a232d3238c1ff93a36b5fd8d4bcf0a61c8676cd3b\"" Sep 13 00:06:18.683053 containerd[1478]: time="2025-09-13T00:06:18.683006900Z" level=info msg="StartContainer for \"c34cbdb25f5f6fa0f98d0b9a232d3238c1ff93a36b5fd8d4bcf0a61c8676cd3b\"" Sep 13 00:06:18.712872 systemd[1]: Started cri-containerd-c34cbdb25f5f6fa0f98d0b9a232d3238c1ff93a36b5fd8d4bcf0a61c8676cd3b.scope - libcontainer container c34cbdb25f5f6fa0f98d0b9a232d3238c1ff93a36b5fd8d4bcf0a61c8676cd3b. Sep 13 00:06:18.742209 containerd[1478]: time="2025-09-13T00:06:18.742060267Z" level=info msg="StartContainer for \"c34cbdb25f5f6fa0f98d0b9a232d3238c1ff93a36b5fd8d4bcf0a61c8676cd3b\" returns successfully" Sep 13 00:06:18.762713 systemd[1]: cri-containerd-c34cbdb25f5f6fa0f98d0b9a232d3238c1ff93a36b5fd8d4bcf0a61c8676cd3b.scope: Deactivated successfully. 
Sep 13 00:06:18.810047 containerd[1478]: time="2025-09-13T00:06:18.809850872Z" level=info msg="shim disconnected" id=c34cbdb25f5f6fa0f98d0b9a232d3238c1ff93a36b5fd8d4bcf0a61c8676cd3b namespace=k8s.io Sep 13 00:06:18.810047 containerd[1478]: time="2025-09-13T00:06:18.809998474Z" level=warning msg="cleaning up after shim disconnected" id=c34cbdb25f5f6fa0f98d0b9a232d3238c1ff93a36b5fd8d4bcf0a61c8676cd3b namespace=k8s.io Sep 13 00:06:18.810047 containerd[1478]: time="2025-09-13T00:06:18.810015994Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:06:19.239416 systemd[1]: run-containerd-runc-k8s.io-ce602d8729caac3bb5f514b4f56b89ef23c9b39de7e0c7d1ebb27e32b7fb0217-runc.gWQqhX.mount: Deactivated successfully. Sep 13 00:06:19.278878 containerd[1478]: time="2025-09-13T00:06:19.278808598Z" level=info msg="CreateContainer within sandbox \"ce602d8729caac3bb5f514b4f56b89ef23c9b39de7e0c7d1ebb27e32b7fb0217\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 13 00:06:19.302610 containerd[1478]: time="2025-09-13T00:06:19.302513208Z" level=info msg="CreateContainer within sandbox \"ce602d8729caac3bb5f514b4f56b89ef23c9b39de7e0c7d1ebb27e32b7fb0217\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"8b1f1f809d1352a7e95220b71db1ddd2e20e7857d3c56756fcaa4113e0c41150\"" Sep 13 00:06:19.304849 containerd[1478]: time="2025-09-13T00:06:19.304092622Z" level=info msg="StartContainer for \"8b1f1f809d1352a7e95220b71db1ddd2e20e7857d3c56756fcaa4113e0c41150\"" Sep 13 00:06:19.339871 systemd[1]: Started cri-containerd-8b1f1f809d1352a7e95220b71db1ddd2e20e7857d3c56756fcaa4113e0c41150.scope - libcontainer container 8b1f1f809d1352a7e95220b71db1ddd2e20e7857d3c56756fcaa4113e0c41150. 
Sep 13 00:06:19.385648 containerd[1478]: time="2025-09-13T00:06:19.385315701Z" level=info msg="StartContainer for \"8b1f1f809d1352a7e95220b71db1ddd2e20e7857d3c56756fcaa4113e0c41150\" returns successfully" Sep 13 00:06:19.396067 systemd[1]: cri-containerd-8b1f1f809d1352a7e95220b71db1ddd2e20e7857d3c56756fcaa4113e0c41150.scope: Deactivated successfully. Sep 13 00:06:19.424756 containerd[1478]: time="2025-09-13T00:06:19.424446927Z" level=info msg="shim disconnected" id=8b1f1f809d1352a7e95220b71db1ddd2e20e7857d3c56756fcaa4113e0c41150 namespace=k8s.io Sep 13 00:06:19.424756 containerd[1478]: time="2025-09-13T00:06:19.424524888Z" level=warning msg="cleaning up after shim disconnected" id=8b1f1f809d1352a7e95220b71db1ddd2e20e7857d3c56756fcaa4113e0c41150 namespace=k8s.io Sep 13 00:06:19.424756 containerd[1478]: time="2025-09-13T00:06:19.424621849Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:06:19.546266 sshd[4356]: Accepted publickey for core from 147.75.109.163 port 39462 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM Sep 13 00:06:19.549832 sshd[4356]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:06:19.557886 systemd-logind[1457]: New session 24 of user core. Sep 13 00:06:19.567894 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 13 00:06:20.239289 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8b1f1f809d1352a7e95220b71db1ddd2e20e7857d3c56756fcaa4113e0c41150-rootfs.mount: Deactivated successfully. Sep 13 00:06:20.283081 containerd[1478]: time="2025-09-13T00:06:20.283004783Z" level=info msg="CreateContainer within sandbox \"ce602d8729caac3bb5f514b4f56b89ef23c9b39de7e0c7d1ebb27e32b7fb0217\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 13 00:06:20.308767 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1155090762.mount: Deactivated successfully. 
Sep 13 00:06:20.321681 containerd[1478]: time="2025-09-13T00:06:20.321607362Z" level=info msg="CreateContainer within sandbox \"ce602d8729caac3bb5f514b4f56b89ef23c9b39de7e0c7d1ebb27e32b7fb0217\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"1c4a075e453de0f93ffddfd2fc6ec3a9c3457d5cab435406762a8d7e97cc5fa2\"" Sep 13 00:06:20.322754 containerd[1478]: time="2025-09-13T00:06:20.322713572Z" level=info msg="StartContainer for \"1c4a075e453de0f93ffddfd2fc6ec3a9c3457d5cab435406762a8d7e97cc5fa2\"" Sep 13 00:06:20.365836 systemd[1]: Started cri-containerd-1c4a075e453de0f93ffddfd2fc6ec3a9c3457d5cab435406762a8d7e97cc5fa2.scope - libcontainer container 1c4a075e453de0f93ffddfd2fc6ec3a9c3457d5cab435406762a8d7e97cc5fa2. Sep 13 00:06:20.407424 containerd[1478]: time="2025-09-13T00:06:20.407359834Z" level=info msg="StartContainer for \"1c4a075e453de0f93ffddfd2fc6ec3a9c3457d5cab435406762a8d7e97cc5fa2\" returns successfully" Sep 13 00:06:20.412248 systemd[1]: cri-containerd-1c4a075e453de0f93ffddfd2fc6ec3a9c3457d5cab435406762a8d7e97cc5fa2.scope: Deactivated successfully. Sep 13 00:06:20.443806 containerd[1478]: time="2025-09-13T00:06:20.443443191Z" level=info msg="shim disconnected" id=1c4a075e453de0f93ffddfd2fc6ec3a9c3457d5cab435406762a8d7e97cc5fa2 namespace=k8s.io Sep 13 00:06:20.443806 containerd[1478]: time="2025-09-13T00:06:20.443521311Z" level=warning msg="cleaning up after shim disconnected" id=1c4a075e453de0f93ffddfd2fc6ec3a9c3457d5cab435406762a8d7e97cc5fa2 namespace=k8s.io Sep 13 00:06:20.443806 containerd[1478]: time="2025-09-13T00:06:20.443571792Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:06:21.240371 systemd[1]: run-containerd-runc-k8s.io-1c4a075e453de0f93ffddfd2fc6ec3a9c3457d5cab435406762a8d7e97cc5fa2-runc.kqkXHI.mount: Deactivated successfully. 
Sep 13 00:06:21.240608 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1c4a075e453de0f93ffddfd2fc6ec3a9c3457d5cab435406762a8d7e97cc5fa2-rootfs.mount: Deactivated successfully. Sep 13 00:06:21.295205 containerd[1478]: time="2025-09-13T00:06:21.295150640Z" level=info msg="CreateContainer within sandbox \"ce602d8729caac3bb5f514b4f56b89ef23c9b39de7e0c7d1ebb27e32b7fb0217\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 13 00:06:21.321469 containerd[1478]: time="2025-09-13T00:06:21.321307707Z" level=info msg="CreateContainer within sandbox \"ce602d8729caac3bb5f514b4f56b89ef23c9b39de7e0c7d1ebb27e32b7fb0217\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"93f15967774ae20bd61b97a3d98351c42d04eb16937b4928dabc3875f432fcec\"" Sep 13 00:06:21.322172 containerd[1478]: time="2025-09-13T00:06:21.322134474Z" level=info msg="StartContainer for \"93f15967774ae20bd61b97a3d98351c42d04eb16937b4928dabc3875f432fcec\"" Sep 13 00:06:21.357983 systemd[1]: Started cri-containerd-93f15967774ae20bd61b97a3d98351c42d04eb16937b4928dabc3875f432fcec.scope - libcontainer container 93f15967774ae20bd61b97a3d98351c42d04eb16937b4928dabc3875f432fcec. Sep 13 00:06:21.389053 systemd[1]: cri-containerd-93f15967774ae20bd61b97a3d98351c42d04eb16937b4928dabc3875f432fcec.scope: Deactivated successfully. 
Sep 13 00:06:21.394337 containerd[1478]: time="2025-09-13T00:06:21.393895858Z" level=info msg="StartContainer for \"93f15967774ae20bd61b97a3d98351c42d04eb16937b4928dabc3875f432fcec\" returns successfully" Sep 13 00:06:21.425692 containerd[1478]: time="2025-09-13T00:06:21.425619054Z" level=info msg="shim disconnected" id=93f15967774ae20bd61b97a3d98351c42d04eb16937b4928dabc3875f432fcec namespace=k8s.io Sep 13 00:06:21.425692 containerd[1478]: time="2025-09-13T00:06:21.425686575Z" level=warning msg="cleaning up after shim disconnected" id=93f15967774ae20bd61b97a3d98351c42d04eb16937b4928dabc3875f432fcec namespace=k8s.io Sep 13 00:06:21.425692 containerd[1478]: time="2025-09-13T00:06:21.425698375Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:06:21.507806 kubelet[2590]: I0913 00:06:21.507184 2590 setters.go:600] "Node became not ready" node="ci-4081-3-5-n-dc9d7711ed" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-13T00:06:21Z","lastTransitionTime":"2025-09-13T00:06:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Sep 13 00:06:21.737285 kubelet[2590]: E0913 00:06:21.737015 2590 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 13 00:06:22.238752 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-93f15967774ae20bd61b97a3d98351c42d04eb16937b4928dabc3875f432fcec-rootfs.mount: Deactivated successfully. 
Sep 13 00:06:22.297511 containerd[1478]: time="2025-09-13T00:06:22.296622965Z" level=info msg="CreateContainer within sandbox \"ce602d8729caac3bb5f514b4f56b89ef23c9b39de7e0c7d1ebb27e32b7fb0217\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 13 00:06:22.320741 containerd[1478]: time="2025-09-13T00:06:22.320668813Z" level=info msg="CreateContainer within sandbox \"ce602d8729caac3bb5f514b4f56b89ef23c9b39de7e0c7d1ebb27e32b7fb0217\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"efdbc1cbd36f540f20377b72dd9af58226dcac980b90b52bf6a9cdd18c9bf2c1\"" Sep 13 00:06:22.322626 containerd[1478]: time="2025-09-13T00:06:22.321667581Z" level=info msg="StartContainer for \"efdbc1cbd36f540f20377b72dd9af58226dcac980b90b52bf6a9cdd18c9bf2c1\"" Sep 13 00:06:22.369835 systemd[1]: Started cri-containerd-efdbc1cbd36f540f20377b72dd9af58226dcac980b90b52bf6a9cdd18c9bf2c1.scope - libcontainer container efdbc1cbd36f540f20377b72dd9af58226dcac980b90b52bf6a9cdd18c9bf2c1. 
Sep 13 00:06:22.409748 containerd[1478]: time="2025-09-13T00:06:22.409685260Z" level=info msg="StartContainer for \"efdbc1cbd36f540f20377b72dd9af58226dcac980b90b52bf6a9cdd18c9bf2c1\" returns successfully" Sep 13 00:06:22.786714 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Sep 13 00:06:26.218687 systemd-networkd[1377]: lxc_health: Link UP Sep 13 00:06:26.243973 systemd-networkd[1377]: lxc_health: Gained carrier Sep 13 00:06:26.610286 kubelet[2590]: I0913 00:06:26.610161 2590 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-lgwzj" podStartSLOduration=10.610137406 podStartE2EDuration="10.610137406s" podCreationTimestamp="2025-09-13 00:06:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:06:23.335319014 +0000 UTC m=+216.877304744" watchObservedRunningTime="2025-09-13 00:06:26.610137406 +0000 UTC m=+220.152123136" Sep 13 00:06:26.677876 systemd[1]: run-containerd-runc-k8s.io-efdbc1cbd36f540f20377b72dd9af58226dcac980b90b52bf6a9cdd18c9bf2c1-runc.mvvekY.mount: Deactivated successfully. Sep 13 00:06:28.299982 systemd-networkd[1377]: lxc_health: Gained IPv6LL Sep 13 00:06:33.252062 systemd[1]: run-containerd-runc-k8s.io-efdbc1cbd36f540f20377b72dd9af58226dcac980b90b52bf6a9cdd18c9bf2c1-runc.P86p5y.mount: Deactivated successfully. Sep 13 00:06:33.474929 sshd[4356]: pam_unix(sshd:session): session closed for user core Sep 13 00:06:33.482719 systemd[1]: sshd@24-91.99.150.175:22-147.75.109.163:39462.service: Deactivated successfully. Sep 13 00:06:33.486213 systemd[1]: session-24.scope: Deactivated successfully. Sep 13 00:06:33.491327 systemd-logind[1457]: Session 24 logged out. Waiting for processes to exit. Sep 13 00:06:33.493159 systemd-logind[1457]: Removed session 24. 
Sep 13 00:06:46.603868 containerd[1478]: time="2025-09-13T00:06:46.603651696Z" level=info msg="StopPodSandbox for \"b3da729a12d13119d38fee13c310415fefd1acba371871edee943161b3a408c7\"" Sep 13 00:06:46.604512 containerd[1478]: time="2025-09-13T00:06:46.604184819Z" level=info msg="TearDown network for sandbox \"b3da729a12d13119d38fee13c310415fefd1acba371871edee943161b3a408c7\" successfully" Sep 13 00:06:46.604512 containerd[1478]: time="2025-09-13T00:06:46.604224100Z" level=info msg="StopPodSandbox for \"b3da729a12d13119d38fee13c310415fefd1acba371871edee943161b3a408c7\" returns successfully" Sep 13 00:06:46.606133 containerd[1478]: time="2025-09-13T00:06:46.605273147Z" level=info msg="RemovePodSandbox for \"b3da729a12d13119d38fee13c310415fefd1acba371871edee943161b3a408c7\"" Sep 13 00:06:46.606133 containerd[1478]: time="2025-09-13T00:06:46.605318947Z" level=info msg="Forcibly stopping sandbox \"b3da729a12d13119d38fee13c310415fefd1acba371871edee943161b3a408c7\"" Sep 13 00:06:46.606133 containerd[1478]: time="2025-09-13T00:06:46.605388428Z" level=info msg="TearDown network for sandbox \"b3da729a12d13119d38fee13c310415fefd1acba371871edee943161b3a408c7\" successfully" Sep 13 00:06:46.650351 containerd[1478]: time="2025-09-13T00:06:46.650285507Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b3da729a12d13119d38fee13c310415fefd1acba371871edee943161b3a408c7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 13 00:06:46.651776 containerd[1478]: time="2025-09-13T00:06:46.651713878Z" level=info msg="RemovePodSandbox \"b3da729a12d13119d38fee13c310415fefd1acba371871edee943161b3a408c7\" returns successfully" Sep 13 00:06:46.659948 containerd[1478]: time="2025-09-13T00:06:46.659865456Z" level=info msg="StopPodSandbox for \"8e42e88f6b1df1318ffa5be2f9a6326c7e525bef6765c7cb2deee26f357edce3\"" Sep 13 00:06:46.660116 containerd[1478]: time="2025-09-13T00:06:46.660085457Z" level=info msg="TearDown network for sandbox \"8e42e88f6b1df1318ffa5be2f9a6326c7e525bef6765c7cb2deee26f357edce3\" successfully" Sep 13 00:06:46.660144 containerd[1478]: time="2025-09-13T00:06:46.660118017Z" level=info msg="StopPodSandbox for \"8e42e88f6b1df1318ffa5be2f9a6326c7e525bef6765c7cb2deee26f357edce3\" returns successfully" Sep 13 00:06:46.662254 containerd[1478]: time="2025-09-13T00:06:46.660629181Z" level=info msg="RemovePodSandbox for \"8e42e88f6b1df1318ffa5be2f9a6326c7e525bef6765c7cb2deee26f357edce3\"" Sep 13 00:06:46.662254 containerd[1478]: time="2025-09-13T00:06:46.660665181Z" level=info msg="Forcibly stopping sandbox \"8e42e88f6b1df1318ffa5be2f9a6326c7e525bef6765c7cb2deee26f357edce3\"" Sep 13 00:06:46.662254 containerd[1478]: time="2025-09-13T00:06:46.660724422Z" level=info msg="TearDown network for sandbox \"8e42e88f6b1df1318ffa5be2f9a6326c7e525bef6765c7cb2deee26f357edce3\" successfully" Sep 13 00:06:46.689142 containerd[1478]: time="2025-09-13T00:06:46.689079944Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8e42e88f6b1df1318ffa5be2f9a6326c7e525bef6765c7cb2deee26f357edce3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 13 00:06:46.689444 containerd[1478]: time="2025-09-13T00:06:46.689422546Z" level=info msg="RemovePodSandbox \"8e42e88f6b1df1318ffa5be2f9a6326c7e525bef6765c7cb2deee26f357edce3\" returns successfully" Sep 13 00:06:48.932780 systemd[1]: cri-containerd-9edbb78cf5f27ec30f6d332fc63659841762ffb65c290ffe26a69a72ea875690.scope: Deactivated successfully. Sep 13 00:06:48.933303 systemd[1]: cri-containerd-9edbb78cf5f27ec30f6d332fc63659841762ffb65c290ffe26a69a72ea875690.scope: Consumed 4.515s CPU time, 19.4M memory peak, 0B memory swap peak. Sep 13 00:06:48.962114 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9edbb78cf5f27ec30f6d332fc63659841762ffb65c290ffe26a69a72ea875690-rootfs.mount: Deactivated successfully. Sep 13 00:06:48.972594 containerd[1478]: time="2025-09-13T00:06:48.972488367Z" level=info msg="shim disconnected" id=9edbb78cf5f27ec30f6d332fc63659841762ffb65c290ffe26a69a72ea875690 namespace=k8s.io Sep 13 00:06:48.972594 containerd[1478]: time="2025-09-13T00:06:48.972576007Z" level=warning msg="cleaning up after shim disconnected" id=9edbb78cf5f27ec30f6d332fc63659841762ffb65c290ffe26a69a72ea875690 namespace=k8s.io Sep 13 00:06:48.972594 containerd[1478]: time="2025-09-13T00:06:48.972586567Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:06:49.168841 kubelet[2590]: E0913 00:06:49.167515 2590 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:36040->10.0.0.2:2379: read: connection timed out" Sep 13 00:06:49.397037 kubelet[2590]: I0913 00:06:49.395695 2590 scope.go:117] "RemoveContainer" containerID="9edbb78cf5f27ec30f6d332fc63659841762ffb65c290ffe26a69a72ea875690" Sep 13 00:06:49.401612 containerd[1478]: time="2025-09-13T00:06:49.401500357Z" level=info msg="CreateContainer within sandbox \"6c7d14df2c425a69ef7734f9dbf22d9803f51d24e1834a84689a8952cc9c20d5\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Sep 13 00:06:49.424031 
containerd[1478]: time="2025-09-13T00:06:49.423979834Z" level=info msg="CreateContainer within sandbox \"6c7d14df2c425a69ef7734f9dbf22d9803f51d24e1834a84689a8952cc9c20d5\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"0a367f93e4dedbc86641c47cea94ee1e7cb508a7d018c8fd45719f0388900c3b\"" Sep 13 00:06:49.425714 containerd[1478]: time="2025-09-13T00:06:49.425666645Z" level=info msg="StartContainer for \"0a367f93e4dedbc86641c47cea94ee1e7cb508a7d018c8fd45719f0388900c3b\"" Sep 13 00:06:49.477043 systemd[1]: Started cri-containerd-0a367f93e4dedbc86641c47cea94ee1e7cb508a7d018c8fd45719f0388900c3b.scope - libcontainer container 0a367f93e4dedbc86641c47cea94ee1e7cb508a7d018c8fd45719f0388900c3b. Sep 13 00:06:49.519887 containerd[1478]: time="2025-09-13T00:06:49.519832541Z" level=info msg="StartContainer for \"0a367f93e4dedbc86641c47cea94ee1e7cb508a7d018c8fd45719f0388900c3b\" returns successfully" Sep 13 00:06:49.965380 systemd[1]: run-containerd-runc-k8s.io-0a367f93e4dedbc86641c47cea94ee1e7cb508a7d018c8fd45719f0388900c3b-runc.q0GEVb.mount: Deactivated successfully. 
Sep 13 00:06:53.070211 kubelet[2590]: E0913 00:06:53.070065 2590 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:35826->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4081-3-5-n-dc9d7711ed.1864aed93c7997cd kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4081-3-5-n-dc9d7711ed,UID:523c0dc250fd7264727c483a4de83a13,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4081-3-5-n-dc9d7711ed,},FirstTimestamp:2025-09-13 00:06:42.627073997 +0000 UTC m=+236.169059767,LastTimestamp:2025-09-13 00:06:42.627073997 +0000 UTC m=+236.169059767,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-5-n-dc9d7711ed,}"